high-perf-browser

Optimize web performance through network protocols, resource loading, and browser rendering internals. Use when the user mentions "page load speed", "Core Web Vitals", "HTTP/2", "resource hints", "network latency", or "render blocking". Covers TCP/TLS optimization, caching strategies, WebSocket/SSE, and protocol selection. For UI visual performance, see refactoring-ui. For font loading, see web-typography.


High Performance Browser Networking Framework

A systematic approach to web performance optimization grounded in how browsers, protocols, and networks actually work. Apply these principles when building frontend applications, reviewing performance budgets, configuring servers, or diagnosing slow page loads.

Core Principle

Latency, not bandwidth, is the bottleneck. Most web performance problems stem from too many round trips, not too little throughput. A 5x bandwidth increase yields diminishing returns; a 5x latency reduction transforms the user experience.

The foundation: Every network request passes through DNS resolution, TCP handshake, TLS negotiation, and HTTP exchange before a single byte of content arrives. Each step adds round-trip latency. High-performance applications minimize round trips, parallelize requests, and eliminate unnecessary network hops. Understanding the protocol stack is not optional -- it is the prerequisite for meaningful optimization.

Scoring

Goal: 10/10. When reviewing or building web applications, rate performance 0-10 based on adherence to the principles below. A 10/10 means full alignment with all guidelines; lower scores indicate gaps to address. Always provide the current score and specific improvements needed to reach 10/10.

The High Performance Browser Networking Framework

Six domains for building fast, resilient web applications:

1. Network Fundamentals

Core concept: Every HTTP request pays a latency tax: DNS lookup, TCP three-way handshake, and TLS negotiation -- all before any application data flows. Reducing or eliminating these round trips is the single highest-leverage optimization.

Why it works: Light travels at a finite speed. A packet from New York to London takes ~28ms one way regardless of bandwidth. TCP slow start means new connections begin transmitting slowly. TLS adds 1-2 more round trips. These physics-level constraints cannot be solved with bigger pipes -- only with fewer trips.

Key insights:

  • TCP three-way handshake adds one full RTT before data transfer begins
  • TCP slow start limits initial throughput to ~14KB (10 segments) in the first round trip -- keep critical resources under this threshold
  • TLS 1.2 adds 2 RTTs; TLS 1.3 reduces this to 1 RTT (0-RTT with session resumption)
  • Head-of-line blocking in TCP means one lost packet stalls all streams on that connection
  • Bandwidth-delay product determines in-flight data capacity; high-latency links underutilize bandwidth
  • DNS resolution can add 20-120ms; pre-resolve with dns-prefetch
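
To make the bandwidth-delay product and slow-start insights concrete, here is a small numeric sketch (the link speed and RTT are illustrative values, not from the source):

```javascript
// Bandwidth-delay product: how many bytes can be "in flight" on a link.
// BDP (bytes) = bandwidth (bytes/sec) * round-trip time (sec)
function bandwidthDelayProduct(mbps, rttMs) {
  const bytesPerSec = (mbps * 1_000_000) / 8;
  return Math.round(bytesPerSec * (rttMs / 1000));
}

// A 100 Mbps link with an 80 ms RTT can hold ~1 MB in flight...
const bdp = bandwidthDelayProduct(100, 80); // 1,000,000 bytes

// ...but a fresh TCP connection starts at ~10 segments (~14 KB), so the
// first round trip uses only a small fraction of that capacity.
const initialWindow = 10 * 1460; // 14,600 bytes
const utilization = initialWindow / bdp; // ~1.5% of link capacity

console.log({ bdp, initialWindow, utilization });
```

This is why keeping the critical payload under ~14KB matters: everything beyond the initial congestion window costs at least one more round trip.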

Code applications:

| Context | Pattern | Example |
| --- | --- | --- |
| Connection warmup | Pre-establish connections to critical origins | <link rel="preconnect" href="https://cdn.example.com"> |
| DNS prefetch | Resolve third-party domains early | <link rel="dns-prefetch" href="https://analytics.example.com"> |
| TLS optimization | Enable TLS 1.3 and session resumption | Server config: ssl_protocols TLSv1.3; with session tickets |
| Initial payload | Keep critical HTML under 14KB compressed | Inline critical CSS, defer non-essential scripts |
| Connection reuse | Keep-alive connections to avoid repeated handshakes | Connection: keep-alive (default in HTTP/1.1+) |
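
As one illustration of the connection-warmup patterns, these hints can sit in the document head (the hostnames are placeholders matching the examples above):

```html
<head>
  <!-- Warm up the full connection (DNS + TCP + TLS) to a critical origin -->
  <link rel="preconnect" href="https://cdn.example.com" crossorigin>
  <!-- Cheaper hint: resolve DNS only, for less critical third parties -->
  <link rel="dns-prefetch" href="https://analytics.example.com">
</head>
```

Use preconnect sparingly (a handful of origins at most); each one opens a real connection whether or not it ends up being used.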

See: references/network-fundamentals.md for TCP congestion control, bandwidth-delay product, and TLS handshake details.

2. HTTP Protocol Evolution

Core concept: HTTP has evolved from a simple request-response protocol to a multiplexed, binary, server-push-capable system. Choosing the right protocol version and configuring it properly eliminates entire categories of performance problems.

Why it works: HTTP/1.1 forces browsers into workarounds like domain sharding and sprite sheets because it cannot multiplex requests. HTTP/2 solves multiplexing but inherits TCP head-of-line blocking. HTTP/3 (QUIC) moves to UDP, eliminating head-of-line blocking and enabling connection migration. Each generation removes a bottleneck.

Key insights:

  • HTTP/1.1 allows only one outstanding request per TCP connection; browsers open 6 connections per host as a workaround
  • HTTP/2 multiplexes many concurrent streams over a single TCP connection (up to the peer's advertised limit), making domain sharding counterproductive
  • HPACK header compression in HTTP/2 reduces repetitive header overhead by 85-95%
  • HTTP/3 runs over QUIC (UDP), eliminating TCP head-of-line blocking and enabling 0-RTT connection resumption
  • Server Push (HTTP/2) sends resources before the browser requests them -- use sparingly and prefer 103 Early Hints instead
  • Connection coalescing in HTTP/2 lets one connection serve multiple hostnames sharing a certificate

Code applications:

| Context | Pattern | Example |
| --- | --- | --- |
| HTTP/2 migration | Remove HTTP/1.1 workarounds | Undo domain sharding, remove sprite sheets, stop concatenating files |
| Stream prioritization | Signal resource importance to the server | CSS and fonts at highest priority; images at lower priority |
| 103 Early Hints | Send preload hints before the full response | Server sends 103 with Link: </style.css>; rel=preload |
| QUIC/HTTP/3 | Enable HTTP/3 on CDN or origin | Add Alt-Svc: h3=":443" header to advertise HTTP/3 support |
| Header optimization | Minimize custom headers to reduce overhead | Audit cookies and custom headers; remove unnecessary ones |
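
As a hedged sketch of the server side, enabling HTTP/2, TLS 1.3, and advertising HTTP/3 in nginx might look like this (exact directives vary by nginx version and build; HTTP/3 itself requires a QUIC-capable build or CDN):

```nginx
server {
    listen 443 ssl http2;        # serve HTTP/2 over TLS
    ssl_protocols TLSv1.3;       # prefer the 1-RTT handshake
    ssl_session_tickets on;      # enable session resumption

    # Advertise HTTP/3 so clients can upgrade on their next connection
    add_header Alt-Svc 'h3=":443"; ma=86400' always;
}
```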

See: references/http-protocols.md for protocol comparison, migration strategies, and server push vs. Early Hints.

3. Resource Loading and Critical Rendering Path

Core concept: The browser must build the DOM, CSSOM, and render tree before painting pixels. Any resource that blocks this pipeline delays first paint. Optimizing the critical rendering path means identifying and eliminating these bottlenecks.

Why it works: CSS is render-blocking: the browser will not paint until all CSS is parsed. JavaScript is parser-blocking by default: <script> halts DOM construction until the script downloads and executes. Fonts can block text rendering for up to 3 seconds. Each blocking resource adds latency directly to time-to-first-paint.

Key insights:

  • Critical rendering path: HTML bytes -> DOM -> CSSOM -> Render Tree -> Layout -> Paint -> Composite
  • CSS blocks rendering; JavaScript blocks parsing -- these have different optimization strategies
  • async downloads in parallel and executes as soon as it arrives (pausing the parser, in no guaranteed order); defer downloads in parallel but executes in document order after DOM parsing completes
  • <link rel="preload"> fetches critical resources at high priority without blocking rendering
  • <link rel="prefetch"> fetches resources for likely next navigations at low priority
  • Inline critical CSS (above-the-fold styles) and defer the rest to eliminate the render-blocking CSS request
  • Fonts: use font-display: swap to avoid invisible text during font loading

Code applications:

| Context | Pattern | Example |
| --- | --- | --- |
| Critical CSS | Inline above-the-fold styles in <head> | <style>/* critical styles */</style> + async load full CSS |
| Script loading | Use defer for most scripts; async for independent scripts | <script src="app.js" defer></script> |
| Resource hints | Preload critical fonts, hero images, above-fold assets | <link rel="preload" href="font.woff2" as="font" crossorigin> |
| Image optimization | Lazy-load below-fold images; use modern formats | <img loading="lazy" src="photo.avif" srcset="..."> |
| Font loading | Prevent invisible text with font-display | @font-face { font-display: swap; } |
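
Putting these patterns together, a document head might be structured like this (file names are placeholders; the onload trick is one common way to load the full stylesheet without blocking render):

```html
<head>
  <!-- Inline above-the-fold styles so first paint needs no CSS request -->
  <style>/* critical styles */</style>

  <!-- Load the full stylesheet without blocking render -->
  <link rel="preload" href="/full.css" as="style" onload="this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/full.css"></noscript>

  <!-- Preload a critical font; fonts require crossorigin even same-origin -->
  <link rel="preload" href="/font.woff2" as="font" type="font/woff2" crossorigin>

  <!-- defer: download in parallel, execute in order after DOM parsing -->
  <script src="/app.js" defer></script>
</head>
```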

See: references/resource-loading.md for async/defer behavior, resource hint strategies, and image optimization.

4. Caching Strategies

Core concept: The fastest network request is one that never happens. A layered caching strategy -- browser memory, disk cache, service worker, CDN, and origin -- dramatically reduces load times for repeat visitors and subsequent navigations.

Why it works: Cache-Control headers tell the browser and intermediaries exactly how long a response remains valid. Content-hashed URLs enable aggressive immutable caching. Service workers provide a programmable cache layer that works offline. Each cache hit eliminates a full network round trip.

Key insights:

  • Cache-Control: max-age=31536000, immutable for content-hashed static assets (JS, CSS, images)
  • Cache-Control: no-cache still caches but revalidates every time -- use for HTML documents
  • ETag and Last-Modified enable conditional requests (304 Not Modified) that save bandwidth
  • stale-while-revalidate serves cached content immediately while fetching a fresh copy in the background
  • Service workers intercept fetch requests and can serve from cache, fall back to network, or implement custom strategies
  • CDN caching moves content closer to users, reducing RTT; configure Vary headers correctly to avoid cache pollution

Code applications:

| Context | Pattern | Example |
| --- | --- | --- |
| Static assets | Long-lived immutable cache with hash busting | style.a1b2c3.css with Cache-Control: max-age=31536000, immutable |
| HTML documents | Revalidate on every request | Cache-Control: no-cache with ETag for conditional requests |
| API responses | Short TTL with stale-while-revalidate | Cache-Control: max-age=60, stale-while-revalidate=3600 |
| Offline support | Service worker cache-first strategy | Cache static shell; network-first for dynamic content |
| CDN config | Cache at edge with proper Vary headers | Vary: Accept-Encoding, Accept to prevent serving wrong format |
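
The header choices above can be expressed as a small helper (a sketch, assuming the convention that hashed filenames carry a hex digest segment like style.a1b2c3.css; the path prefixes are illustrative):

```javascript
// Map a request path to the Cache-Control policy described above.
function cacheControlFor(path) {
  // Content-hashed static assets: safe to cache "forever"
  if (/\.[0-9a-f]{6,}\.(js|css|woff2|png|webp|avif)$/.test(path)) {
    return 'max-age=31536000, immutable';
  }
  // API responses: short TTL, serve stale while refreshing in the background
  if (path.startsWith('/api/')) {
    return 'max-age=60, stale-while-revalidate=3600';
  }
  // HTML documents: always revalidate (pair with an ETag for cheap 304s)
  return 'no-cache';
}
```

Centralizing this logic keeps the policy auditable: every response class gets exactly one caching story.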

See: references/caching-strategies.md for cache hierarchy, service worker patterns, and CDN configuration.

5. Core Web Vitals Optimization

Core concept: Core Web Vitals -- LCP, INP, and CLS -- are Google's user-centric performance metrics that directly impact search ranking and user experience. Each metric targets a different phase: loading (LCP), interactivity (INP), and visual stability (CLS).

Why it works: These metrics measure what users actually experience, not what servers report. A page can have a fast TTFB but terrible LCP if the hero image loads late. A page can load quickly but feel sluggish if main-thread JavaScript blocks input handling (poor INP). Optimizing for these metrics means optimizing for real user perception.

Key insights:

  • LCP (Largest Contentful Paint): target < 2.5s -- optimize the largest visible element (hero image, heading block, or video poster)
  • INP (Interaction to Next Paint): target < 200ms -- keep main thread free; break long tasks; use requestIdleCallback for non-urgent work
  • CLS (Cumulative Layout Shift): target < 0.1 -- reserve space for dynamic content; set explicit dimensions on images and embeds
  • TTFB (Time to First Byte): target < 800ms -- optimize server response time, use CDN, enable compression
  • FCP (First Contentful Paint): target < 1.8s -- eliminate render-blocking resources, inline critical CSS
  • Measure with Real User Monitoring (RUM) in production, not just synthetic tests in lab conditions

Code applications:

| Context | Pattern | Example |
| --- | --- | --- |
| LCP optimization | Preload LCP element; set fetchpriority="high" | <img src="hero.webp" fetchpriority="high"> |
| INP optimization | Break long tasks; yield to main thread | scheduler.yield() or setTimeout to chunk work |
| CLS prevention | Reserve space for async content | <img width="800" height="600"> or CSS aspect-ratio |
| TTFB reduction | CDN, server-side caching, streaming SSR | Edge rendering with Transfer-Encoding: chunked |
| Performance budget | Set thresholds and block deploys that exceed them | LCP < 2.5s, INP < 200ms, CLS < 0.1 in CI pipeline |
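
The "break long tasks" advice for INP can be sketched like this (scheduler.yield() is not yet universally supported, so this falls back to a setTimeout macrotask; function names are illustrative):

```javascript
// Yield control back to the event loop so pending input handlers can run.
function yieldToMain() {
  // Prefer scheduler.yield() where available (Chromium); else a macrotask.
  if (globalThis.scheduler?.yield) return globalThis.scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list in small chunks instead of one long main-thread task.
async function processInChunks(items, handle, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) handle(item);
    await yieldToMain(); // let interactions be handled between chunks
  }
}
```

Each yield point is an opportunity for the browser to handle a click or keypress, which is exactly what INP measures.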

See: references/core-web-vitals.md for measurement tools, debugging workflows, and optimization checklists.

6. Real-Time Communication

Core concept: When data must flow continuously between client and server, choosing the right transport -- WebSocket, SSE, or long polling -- determines latency, resource usage, and scalability.

Why it works: HTTP's request-response model creates overhead for real-time data. WebSocket establishes a persistent full-duplex connection with minimal framing overhead (~2 bytes per frame). Server-Sent Events (SSE) provide a simpler server-to-client push over standard HTTP. The right choice depends on whether communication is unidirectional or bidirectional, how frequently data flows, and infrastructure constraints.

Key insights:

  • WebSocket: full-duplex, minimal framing overhead, ideal for chat, gaming, and collaborative editing
  • SSE: server-to-client only, auto-reconnects, works through HTTP proxies, simpler to implement than WebSocket
  • Long polling: fallback when WebSocket/SSE are unavailable; high overhead from repeated HTTP requests
  • WebSocket connections bypass HTTP/2 multiplexing -- each WebSocket is a separate TCP connection
  • Implement heartbeat/ping frames to detect dead connections; mobile networks silently drop idle connections
  • Connection management: exponential backoff on reconnection; queue messages during disconnection

Code applications:

| Context | Pattern | Example |
| --- | --- | --- |
| Chat / collaboration | WebSocket with heartbeat and reconnection | new WebSocket('wss://...') with ping every 30s |
| Live feeds / notifications | SSE for server-to-client streaming | new EventSource('/api/updates') with auto-reconnect |
| Legacy fallback | Long polling when WebSocket is blocked | fetch('/poll') in a loop with timeout |
| Connection resilience | Exponential backoff on reconnection | Delay: 1s, 2s, 4s, 8s... capped at 30s |
| Scaling | Use a pub/sub broker behind WebSocket servers | Redis Pub/Sub or NATS for horizontal scaling |
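
The backoff schedule in the table (1s, 2s, 4s, 8s... capped at 30s) reduces to a one-line function (a sketch; production code usually adds random jitter so many clients do not reconnect in lockstep):

```javascript
// Reconnection delay: exponential backoff capped at 30 s.
function backoffDelayMs(attempt, baseMs = 1000, capMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// First six attempts: 1000, 2000, 4000, 8000, 16000, 30000 ms
const schedule = Array.from({ length: 6 }, (_, n) => backoffDelayMs(n));
console.log(schedule);
```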

See: references/real-time-communication.md for WebSocket lifecycle, SSE patterns, and scaling strategies.

Common Mistakes

| Mistake | Why It Fails | Fix |
| --- | --- | --- |
| Adding bandwidth to fix slow pages | Latency, not bandwidth, is the bottleneck for most web traffic | Reduce round trips: preconnect, cache, CDN |
| Loading all JS upfront | Parser-blocking scripts delay first paint and interactivity | Code-split; use defer; lazy-load non-critical modules |
| No resource hints | Browser discovers critical resources too late in the parse | Add preconnect, preload for above-fold critical resources |
| Cache-Control missing or no-store everywhere | Every visit re-downloads all resources from origin | Set proper max-age for static assets; use content hashing |
| Ignoring CLS | Layout shifts destroy user trust and hurt search ranking | Set explicit dimensions on all images, embeds, and ads |
| Using WebSocket for everything | Unnecessary complexity when SSE or HTTP polling suffices | Match transport to data flow pattern; SSE for server push |
| Domain sharding on HTTP/2 | Defeats multiplexing; creates extra TCP connections | Consolidate to one origin; let HTTP/2 multiplex |
| No compression | HTML, CSS, JS transfer at full size, wasting bandwidth | Enable Brotli (preferred) or Gzip on server and CDN |

Quick Diagnostic

| Question | If No | Action |
| --- | --- | --- |
| Is TTFB under 800ms? | Server or network too slow | Add CDN, enable server caching, check backend |
| Is LCP under 2.5s? | Largest element loads too late | Preload LCP resource; set fetchpriority="high" |
| Is INP under 200ms? | Main thread blocked during interactions | Break long tasks; defer non-critical JS |
| Is CLS under 0.1? | Elements shift after initial render | Set explicit dimensions; reserve space for dynamic content |
| Are static assets cached with content hashes? | Repeat visitors re-download everything | Add hash to filenames; set Cache-Control: immutable |
| Is HTTP/2 or HTTP/3 enabled? | Missing multiplexing and header compression | Enable HTTP/2 on server; add HTTP/3 via CDN |
| Are render-blocking resources minimized? | CSS and sync JS delay first paint | Inline critical CSS; defer scripts; remove unused CSS |
| Is compression enabled (Brotli/Gzip)? | Transferring uncompressed text resources | Enable Brotli on server/CDN; fall back to Gzip |

Reference Files

  • references/network-fundamentals.md
  • references/http-protocols.md
  • references/resource-loading.md
  • references/caching-strategies.md
  • references/core-web-vitals.md
  • references/real-time-communication.md

Further Reading

This skill is based on Ilya Grigorik's comprehensive guide to browser networking and web performance, High Performance Browser Networking (O'Reilly, 2013).

About the Author

Ilya Grigorik is a web performance engineer, author, and developer advocate who spent over a decade at Google working on Chrome, web platform performance, and HTTP standards. He was a co-chair of the W3C Web Performance Working Group and contributed to the development of HTTP/2 and related web standards. His book High Performance Browser Networking (O'Reilly, 2013) is widely regarded as the definitive reference for understanding how browsers interact with the network -- from TCP and TLS fundamentals through HTTP protocol evolution to real-time communication patterns. Grigorik's approach emphasizes that meaningful optimization requires understanding the underlying protocols, not just applying surface-level tricks, and that latency is the fundamental constraint shaping web performance.


Related Skills

  • refactoring-ui
  • cro-methodology
  • web-typography
  • top-design