HTTP/1 vs HTTP/2 vs HTTP/3: Key Differences

HTTP/1 uses single requests, HTTP/2 introduces multiplexing, and HTTP/3 uses QUIC over UDP to further reduce latency.

HTTP is the protocol that powers every web page load, API call, and browser-based interaction on the internet. Over its three major versions, it has evolved from a simple single-request protocol into a sophisticated system capable of multiplexing streams, compressing headers, and running over an entirely new transport layer. Understanding the differences between HTTP/1, HTTP/2, and HTTP/3 explains why modern web performance looks the way it does and why the protocol continues to evolve.

A Brief History of HTTP

HTTP, which stands for HyperText Transfer Protocol, was created by Tim Berners-Lee in 1989 as part of the original World Wide Web project. The earliest version, HTTP/0.9, was a single-line protocol capable only of transferring HTML documents. HTTP/1.0, formalised in 1996, introduced headers, status codes, and support for different content types. HTTP/1.1, released in 1997, added persistent connections and chunked transfer encoding and remained the dominant version of the protocol for nearly two decades.

As the web grew from simple documents to complex applications loading dozens of assets, the limitations of HTTP/1.1 became a significant constraint on performance. HTTP/2, standardised in 2015, addressed these limitations with multiplexing, header compression, and server push. HTTP/3, which became an official standard in 2022, went further by replacing TCP with a new transport protocol called QUIC, built on UDP, to eliminate the head-of-line blocking problems that remained in HTTP/2.

HTTP/1.1: The Foundation

HTTP/1.1 introduced persistent connections, which allowed a single TCP connection to be reused for multiple requests rather than opening and closing a new connection for every resource. This was a significant improvement over HTTP/1.0, where every request required a full TCP handshake. However, HTTP/1.1 still suffered from a fundamental architectural constraint called head-of-line blocking.

In HTTP/1.1, requests on a single connection must be processed in order. If a browser sends three requests on one connection and the first response is slow to arrive, the second and third requests must wait even if their resources are ready to be sent. This is head-of-line blocking at the HTTP layer. To work around it, browsers open multiple parallel TCP connections to the same server, typically six to eight per domain. Each connection carries its own requests independently, so slowness on one does not block the others.

Opening multiple parallel connections is an effective but inefficient workaround. Each TCP connection requires its own handshake, its own congestion control window that starts small and grows gradually, and its own TLS negotiation if the site uses HTTPS. The overhead of managing six to eight connections per domain adds up significantly, particularly on high-latency connections. Front-end developers responded with techniques like domain sharding, image sprites, CSS and JavaScript bundling, and inlining small resources, all aimed at reducing the number of HTTP requests rather than fixing the underlying protocol limitation.
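The effect of head-of-line blocking on a single connection can be sketched with a toy timing model. The response times below are illustrative, not measurements; the point is only that under sequential processing, one slow response delays everything behind it.

```python
# Toy model of head-of-line blocking on a single HTTP/1.1 connection.
# Times are in milliseconds and purely illustrative.

def finish_times_sequential(response_times):
    """One connection: each response must finish before the next is served."""
    finished, elapsed = [], 0
    for t in response_times:
        elapsed += t
        finished.append(elapsed)
    return finished

def finish_times_multiplexed(response_times):
    """Idealised multiplexing: every response progresses independently."""
    return list(response_times)

# A slow first response followed by two small assets that are ready at once.
times = [3000, 200, 200]
print(finish_times_sequential(times))   # [3000, 3200, 3400]
print(finish_times_multiplexed(times))  # [3000, 200, 200]
```

Under the sequential model, the two small assets wait behind the slow response; under idealised multiplexing they complete on their own schedule, which is what HTTP/2 later delivers.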

HTTP/2: Multiplexing and Efficiency

HTTP/2 was developed by Google as SPDY before being standardised by the IETF. Its central innovation is multiplexing, which allows multiple requests and responses to be in flight simultaneously over a single TCP connection without waiting for each other. Unlike text-based HTTP/1.1, HTTP/2 is a binary protocol: all communication is split into small units called frames, each tagged with a stream identifier. The receiver reassembles frames from different streams into their respective requests and responses, allowing them to interleave freely.

With multiplexing, the browser no longer needs multiple parallel connections to achieve concurrency. A single HTTP/2 connection can carry hundreds of simultaneous streams, each progressing independently. This eliminates the HTTP-level head-of-line blocking that plagued HTTP/1.1 and removes the need for the workarounds that front-end developers had relied on for years. Bundling, domain sharding, and inlining, which were necessary under HTTP/1.1, can actually hurt performance under HTTP/2 because they prevent the browser and server from granularly prioritising and caching individual resources.

HTTP/2 also introduced header compression using a scheme called HPACK. In HTTP/1.1, headers are sent as plain text with every request, and on a page that makes many requests, the repeated transmission of largely identical headers such as cookies, user agent strings, and accept headers adds significant overhead. HPACK maintains a shared compression table on both the client and server, allowing subsequent requests to reference previously sent headers rather than retransmitting them in full. This substantially reduces the per-request overhead for header-heavy traffic.
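Real HPACK adds a static table and Huffman coding, but the shared-table idea it rests on can be sketched with a toy encoder. The header values and byte accounting below are illustrative, not actual HPACK encoding.

```python
# Simplified sketch of HPACK's core idea: both sides keep a table of
# headers already sent, so repeated headers become small index references.

def encode(headers, table):
    """Return an approximate encoded size in bytes: one byte per indexed
    repeat, full literal text for headers seen for the first time."""
    size = 0
    for name, value in headers:
        if (name, value) in table:
            size += 1                       # index reference
        else:
            size += len(name) + len(value)  # literal, then added to table
            table[(name, value)] = len(table)
    return size

table = {}
request_headers = [
    ("user-agent", "Mozilla/5.0 ..."),
    ("cookie", "session=abc123"),
    ("accept", "text/html"),
]

first  = encode(request_headers, table)  # every header sent as a literal
second = encode(request_headers, table)  # three one-byte index references
print(first, second)
```

The first request pays the full literal cost; every subsequent request with the same headers shrinks to a few index references, which is where the per-request saving comes from.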

Server push was another feature introduced in HTTP/2, allowing the server to proactively send resources to the client before the client has requested them. In theory, a server could push the CSS and JavaScript files alongside the initial HTML response, eliminating the round-trip time for the browser to discover and request those assets. In practice, server push proved difficult to use correctly without over-pushing resources the browser already has cached, and it has been deprecated in many implementations. Most of the intended benefit of server push is now better achieved through the HTTP Early Hints mechanism (the 103 status code).

HTTP/2's Remaining Limitation: TCP Head-of-Line Blocking

Despite solving HTTP-level head-of-line blocking, HTTP/2 introduced a subtler version of the same problem at the transport layer. All HTTP/2 streams share a single TCP connection. TCP guarantees that bytes are delivered in order. If a single packet is lost, TCP will not deliver any subsequent packets from that connection to the application until the lost packet has been retransmitted and received, even if those subsequent packets belong to completely different HTTP/2 streams that have nothing to do with the lost data.

This TCP head-of-line blocking can actually make HTTP/2 perform worse than HTTP/1.1 under significant packet loss conditions. HTTP/1.1 with six parallel connections would only see one of those connections stall on a lost packet, while the other five continue. An HTTP/2 connection carrying the same traffic in a single TCP stream stalls entirely until the lost packet is recovered. On high-quality network connections this rarely matters, but on lossy mobile networks or congested links it is a meaningful performance regression.
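This difference can be illustrated with a toy model of which streams stall when a single packet is lost. The round-robin mapping of streams to connections is an illustrative assumption, not how browsers actually schedule requests.

```python
# Toy comparison of how one lost packet stalls streams under the two
# deployment styles: many parallel TCP connections vs one shared one.

def stalled_streams_http1(num_streams, lost_connection, num_connections=6):
    """Streams spread round-robin over parallel TCP connections: only
    streams on the connection that lost a packet stall."""
    return [s for s in range(num_streams)
            if s % num_connections == lost_connection]

def stalled_streams_http2_over_tcp(num_streams):
    """One shared TCP connection: in-order delivery stalls every stream."""
    return list(range(num_streams))

print(len(stalled_streams_http1(12, lost_connection=0)))  # 2 of 12 stall
print(len(stalled_streams_http2_over_tcp(12)))            # all 12 stall
```

With six parallel connections, one lost packet strands only the streams sharing that connection; with everything multiplexed over one TCP connection, a single loss strands every stream until retransmission completes.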

HTTP/3: Built on QUIC

HTTP/3 addresses TCP head-of-line blocking at its root by replacing TCP with QUIC as the transport protocol. QUIC was developed by Google and standardised by the IETF alongside HTTP/3. Rather than running over TCP, QUIC runs over UDP, the simpler connectionless transport protocol that has no built-in ordering or reliability guarantees. QUIC implements its own reliability, ordering, and congestion control on top of UDP, but critically it does so per stream rather than per connection.

When a packet is lost in a QUIC connection, only the stream or streams that depended on data in that packet are stalled while waiting for retransmission. All other streams in the connection continue flowing unimpeded. This eliminates TCP head-of-line blocking entirely and means that HTTP/3 over QUIC maintains the multiplexing benefits of HTTP/2 without the transport-level blocking problem that undermined it on lossy networks.

QUIC also significantly reduces connection establishment latency. A TCP connection requires a handshake that takes one round-trip before any data can flow. TLS on top of TCP adds another one or two round-trips for the security handshake. QUIC combines the transport and security handshakes into a single process, typically completing in one round-trip for a new connection. For connections to a server the client has connected to before, QUIC supports zero round-trip resumption, where the client can send data immediately in the first packet using session information cached from the previous connection.
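The round-trip arithmetic above can be made concrete with a small calculation. The 80 ms RTT is an illustrative figure, and the model charges one extra round trip to send the request and receive the first byte of the response.

```python
# Back-of-the-envelope time to first response byte, derived from the
# handshake round-trip counts described above.

def time_to_first_byte(rtt_ms, handshake_rtts):
    # handshake round trips, plus one round trip for request/response
    return rtt_ms * (handshake_rtts + 1)

rtt = 80  # milliseconds, e.g. a mobile connection (illustrative)
for label, rtts in [
    ("TCP + TLS 1.2 (HTTP/1.1 or HTTP/2)", 3),
    ("TCP + TLS 1.3 (HTTP/1.1 or HTTP/2)", 2),
    ("QUIC, new connection (HTTP/3)", 1),
    ("QUIC, 0-RTT resumption (HTTP/3)", 0),
]:
    print(f"{label}: {time_to_first_byte(rtt, rtts)} ms")
```

On this 80 ms link, the first byte arrives in 320 ms over TCP with TLS 1.2, 240 ms with TLS 1.3, 160 ms over a new QUIC connection, and 80 ms with 0-RTT resumption; the gap widens proportionally as latency increases.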

Another benefit of QUIC is connection migration. TCP connections are identified by the four-tuple of source IP, source port, destination IP, and destination port. If any of these change, such as when a mobile device switches from Wi-Fi to a cellular network, the TCP connection breaks and must be re-established from scratch. QUIC connections are identified by a connection ID that is independent of the underlying network path. When the device changes networks, the QUIC connection can continue using the new network path without interruption, which is particularly valuable for mobile users moving between networks.
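Why the four-tuple breaks and the connection ID survives can be shown with a toy lookup. The addresses and connection ID below are placeholders, and real QUIC connection IDs are opaque byte strings negotiated per connection.

```python
# Toy illustration of connection lookup. A TCP connection is identified
# by its (src IP, src port, dst IP, dst port) four-tuple; a QUIC
# connection is identified by a path-independent connection ID.

tcp_conns  = {("192.0.2.10", 51000, "203.0.113.5", 443): "session-A"}
quic_conns = {"cid-7f3a": "session-A"}

# Device moves from Wi-Fi to cellular: source IP and port both change.
new_tuple = ("198.51.100.77", 49152, "203.0.113.5", 443)

print(tcp_conns.get(new_tuple))    # None: the TCP connection is lost
print(quic_conns.get("cid-7f3a"))  # 'session-A': same connection ID
```

The TCP lookup misses because the key itself changed, forcing a fresh handshake; the QUIC lookup still succeeds because the connection ID never depended on the network path.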

Side-by-Side Comparison

| Feature | HTTP/1.1 | HTTP/2 | HTTP/3 |
| --- | --- | --- | --- |
| Standardised | 1997 | 2015 | 2022 |
| Transport protocol | TCP | TCP | QUIC over UDP |
| Format | Plain text | Binary frames | Binary frames over QUIC |
| Multiplexing | No, one request at a time per connection | Yes, multiple streams per connection | Yes, independent streams per connection |
| HTTP head-of-line blocking | Yes | No | No |
| TCP head-of-line blocking | Partially mitigated by multiple connections | Yes, all streams blocked by one lost packet | No, loss only affects the relevant stream |
| Header compression | No | Yes, HPACK | Yes, QPACK |
| Connection establishment | 1 RTT TCP + 1-2 RTT TLS | 1 RTT TCP + 1-2 RTT TLS | 1 RTT, or 0 RTT for returning connections |
| Server push | No | Yes, largely deprecated | Yes, largely deprecated |
| Connection migration | No | No | Yes, via QUIC connection ID |
| Encryption | Optional via HTTPS | Optional in the spec, but all major implementations require TLS | Always encrypted, TLS 1.3 built into QUIC |

Performance in Practice

The performance gains from HTTP/2 over HTTP/1.1 are most visible on pages that load many small assets, which is the typical profile of a modern web application. By eliminating the need for multiple parallel connections and removing HTTP-level head-of-line blocking, HTTP/2 reduces the time spent waiting for assets to be fetched sequentially. On high-quality network connections with low packet loss, HTTP/2 consistently outperforms HTTP/1.1.

The performance gains from HTTP/3 over HTTP/2 are most visible on high-latency or lossy networks, which is most relevant to mobile users. On a fast broadband connection with negligible packet loss, the difference between HTTP/2 and HTTP/3 is often imperceptible. On a congested mobile network where packet loss is more frequent, HTTP/3's per-stream loss recovery and faster connection establishment can produce meaningful improvements in page load time and perceived responsiveness.

It is worth noting that HTTP/3 adoption requires infrastructure that supports QUIC, which runs over UDP. Some corporate firewalls and network configurations block UDP traffic on ports other than DNS, which would prevent HTTP/3 connections from being established. HTTP/3 clients handle this gracefully by falling back to HTTP/2 or HTTP/1.1 when QUIC is blocked. The Alt-Svc response header is used by servers to advertise HTTP/3 support, and clients that successfully connect via HTTP/3 will prefer it for subsequent requests.
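A server's HTTP/3 advertisement looks like `h3=":443"; ma=86400` in the Alt-Svc header. A minimal parser for that shape can be sketched as follows; it is deliberately simplified and ignores the quoting and whitespace edge cases of the full Alt-Svc grammar.

```python
# Minimal parser for an Alt-Svc header value such as a server might send
# to advertise HTTP/3 support (simplified; not a full RFC 7838 parser).

def parse_alt_svc(value):
    services = {}
    for entry in value.split(","):
        parts = [p.strip() for p in entry.split(";")]
        proto, _, authority = parts[0].partition("=")
        params = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
        services[proto] = (authority.strip('"'), params)
    return services

header = 'h3=":443"; ma=86400, h2=":443"; ma=86400'
print(parse_alt_svc(header))
# {'h3': (':443', {'ma': '86400'}), 'h2': (':443', {'ma': '86400'})}
```

Here `h3` names the protocol, `":443"` says it is available on the same host at port 443, and `ma=86400` tells the client to remember the advertisement for a day.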

Adoption and Browser Support

HTTP/2 is now supported by all major browsers and is served by the vast majority of popular websites. Most major web servers including Nginx, Apache, and Caddy support HTTP/2 natively, and cloud platforms and CDNs have offered HTTP/2 support for years. Enabling HTTP/2 on an existing server typically requires only a configuration change and has no impact on clients that do not support it, as the protocol falls back to HTTP/1.1 automatically through TLS negotiation.
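On nginx, for instance, enabling HTTP/2 is typically a one-directive change. The directive syntax depends on the nginx version and the domain and certificate paths below are placeholders, so verify against the documentation for your release.

```nginx
server {
    listen 443 ssl;
    http2 on;   # nginx 1.25.1+; older versions use "listen 443 ssl http2;"

    server_name example.com;                       # placeholder domain
    ssl_certificate     /etc/ssl/certs/site.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/private/site.key;
}
```

Clients that do not support HTTP/2 are unaffected, since ALPN negotiation falls back to HTTP/1.1 on the same port.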

HTTP/3 support has grown rapidly since its standardisation in 2022. All major browsers including Chrome, Firefox, Safari, and Edge support HTTP/3. Cloudflare, Google, and Meta have deployed HTTP/3 across their infrastructure. Nginx supports HTTP/3 in recent versions and Caddy has had experimental QUIC support for several years. Adoption continues to grow as server software matures and operators become more comfortable deploying QUIC-based infrastructure.

What This Means for Developers

For most developers, the practical implications of the shift from HTTP/1.1 to HTTP/2 and HTTP/3 centre on unlearning some optimisation techniques that were necessary under HTTP/1.1 but are unnecessary or counterproductive under newer versions of the protocol.

  • Bundling JavaScript and CSS: Under HTTP/1.1, bundling many files into one reduced the number of requests and was a significant performance win. Under HTTP/2 and HTTP/3, fewer larger bundles can actually be worse for caching because a change to any part of the bundle invalidates the entire cached file. Smaller granular files that change independently cache better and benefit from multiplexing.
  • Domain sharding: Splitting assets across multiple subdomains to bypass the browser's per-domain connection limit was a common HTTP/1.1 optimisation. Under HTTP/2, this is counterproductive because it prevents the browser from using a single efficient multiplexed connection and forces it to establish multiple connections instead.
  • Inlining small assets: Inlining small CSS, JavaScript, and images directly into HTML was another HTTP/1.1 technique to reduce requests. Under HTTP/2, the overhead of an additional request is much smaller because of multiplexing, and external resources can be cached independently. Aggressive inlining prevents caching of resources that rarely change.
  • Enable HTTP/2 on your server: If you manage your own server, ensure HTTP/2 is enabled. Most modern web servers support it with a simple configuration option and it requires no changes to application code.
  • Use a CDN with HTTP/3 support: CDNs like Cloudflare handle the QUIC infrastructure complexity and automatically serve HTTP/3 to clients that support it, making HTTP/3 accessible without managing QUIC on your own servers.
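The caching trade-off behind the bundling advice can be sketched numerically. The file names and sizes below are hypothetical; the model simply assumes each changed file invalidates only its own cache entry.

```python
# Rough model of the caching argument against large bundles under
# HTTP/2 and HTTP/3: when one module changes, how much re-downloads?

def bytes_redownloaded(file_sizes, changed_files):
    """Each changed file invalidates only its own cached entry."""
    return sum(file_sizes[f] for f in changed_files)

# The same 300 kB of JavaScript shipped two ways: one bundle,
# or 30 independent 10 kB modules. One module then changes.
bundle   = {"app.bundle.js": 300_000}
granular = {f"module{i}.js": 10_000 for i in range(30)}

print(bytes_redownloaded(bundle, ["app.bundle.js"]))  # 300000
print(bytes_redownloaded(granular, ["module7.js"]))   # 10000
```

Under HTTP/1.1 the bundle's fewer requests paid for this waste; with multiplexing removing the per-request penalty, the granular layout wins on repeat visits.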

Frequently Asked Questions

  1. Do I need to change my application code to support HTTP/2 or HTTP/3?
    No. HTTP/2 and HTTP/3 are transport-level changes that are handled by the web server and the browser. Your application code produces the same HTML, JSON, and other responses regardless of which HTTP version is used to deliver them. The protocol negotiation happens automatically during the connection setup through a TLS extension called ALPN for HTTP/2 and through the Alt-Svc header or HTTPS DNS records for HTTP/3. Enabling a newer HTTP version is a server configuration change, not an application change.
  2. Is HTTP/3 faster than HTTP/2 for everyone?
    Not necessarily. On fast, reliable networks with low packet loss, HTTP/2 and HTTP/3 perform very similarly. HTTP/3's advantages are most pronounced on high-latency connections, lossy mobile networks, and situations involving network transitions between Wi-Fi and cellular. For users on stable broadband connections, the difference is often imperceptible. HTTP/3 is a meaningful improvement at the tail end of the performance distribution, benefiting the users with the worst network conditions most significantly.
  3. Why does HTTP/3 use UDP instead of TCP?
    TCP's strict in-order delivery guarantee is the root cause of transport-level head-of-line blocking in HTTP/2. Modifying TCP to change this fundamental behaviour is not practical because TCP is implemented in operating system kernels that update slowly and inconsistently across the billions of devices on the internet. Building QUIC on top of UDP allowed the protocol designers to implement reliability and ordering per stream rather than per connection, and to iterate and deploy the protocol through application-level updates rather than waiting for OS kernel updates.
  4. Can HTTP/3 be blocked by firewalls?
    Yes. QUIC runs over UDP, and some corporate firewalls block all UDP traffic except on port 53 for DNS. When HTTP/3 connection attempts are blocked or fail, clients fall back automatically to HTTP/2 or HTTP/1.1. This fallback behaviour is built into the protocol design specifically because HTTP/3 was expected to face deployment challenges in environments with restrictive network policies. Servers advertise HTTP/3 support through the Alt-Svc response header so clients know to try it, but the fallback ensures connectivity even when it cannot be used.
  5. Does HTTP/2 require HTTPS?
    Technically no. The HTTP/2 specification allows unencrypted HTTP/2 connections. In practice, however, every major browser implements HTTP/2 only over TLS. This means that for browser-based web traffic, HTTP/2 effectively requires HTTPS. The reasoning is that plaintext HTTP/2 would be vulnerable to the same interception and manipulation as plaintext HTTP/1.1, and the performance benefits of HTTP/2 are best paired with the security benefits of TLS. HTTP/3 goes further and makes encryption mandatory at the specification level since TLS 1.3 is built directly into the QUIC protocol.
  6. How can I check which HTTP version my site is using?
    The easiest way is through browser developer tools. In Chrome or Firefox, open DevTools, go to the Network tab, right-click the column headers, and enable the Protocol column. Each request will show the negotiated protocol: http/1.1 for HTTP/1.1, h2 for HTTP/2, and h3 for HTTP/3. You can also use the curl command line tool with the verbose flag to see the protocol negotiated for a specific request, or use online tools that analyse HTTP headers and report which version is being served.
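The ALPN negotiation mentioned in the first answer follows a simple selection rule, sketched here as a plain function. This is a simplification of the actual TLS extension: in practice the choice happens inside the handshake, and the server typically selects according to its own preference order.

```python
# Sketch of ALPN protocol selection: the server picks the first protocol
# from its own preference list that the client also offered.

def alpn_select(server_prefs, client_offers):
    for proto in server_prefs:
        if proto in client_offers:
            return proto
    return None  # no overlap: the handshake fails or falls back

print(alpn_select(["h2", "http/1.1"], ["http/1.1", "h2"]))  # 'h2'
print(alpn_select(["h2", "http/1.1"], ["http/1.1"]))        # 'http/1.1'
```

A server that prefers h2 will pick it whenever the client offers it, and an older client that only offers http/1.1 still connects; this is why enabling HTTP/2 server-side is safe for all clients.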

Conclusion

The evolution from HTTP/1.1 to HTTP/2 to HTTP/3 represents a sustained effort to address the performance bottlenecks that emerged as the web grew from simple document delivery into a platform for complex applications. HTTP/1.1 laid the foundation but its sequential request model forced inefficient workarounds. HTTP/2 solved the HTTP-layer concurrency problem with multiplexing and binary framing but inherited TCP's head-of-line blocking. HTTP/3 and QUIC address that final bottleneck by moving to a transport protocol that handles loss per stream rather than per connection and reduces connection setup latency significantly. For developers, the most actionable takeaway is that optimisation strategies designed for HTTP/1.1 should be reconsidered under HTTP/2 and HTTP/3, and that enabling newer protocol versions on your server and CDN is one of the lowest-effort performance improvements available. To go deeper, explore HTTP vs HTTPS, HTTP requests and responses, TCP vs UDP, and CDN.