luminary.blog
by Oz Akan

HTTP/3 vs HTTP/2: When the Upgrade Actually Matters

A practical comparison of HTTP/3 and HTTP/2, covering where QUIC's advantages are real, where HTTP/2 is still fine, and how to decide for your stack.

6 min read


The web runs on HTTP, and if you’re building anything today, you’ve probably seen HTTP/3 creeping into your stack. Cloudflare supports it. CDNs advertise it. Your browser already speaks it. But should you be making infrastructure decisions around it?

The honest answer: it depends on where your users are and how they connect. Let me walk through what actually changed, where the wins are real, and where HTTP/2 is still perfectly fine.

What Changed Under the Hood

HTTP/2 and HTTP/3 look identical at the application layer. Same semantics, same headers, same request-response model. The difference is entirely in the transport.

HTTP/2 runs over TCP with TLS layered on top. TCP gives you reliable, ordered delivery, which sounds great until you realize that “ordered” means a single lost packet freezes every stream multiplexed on that connection. This is head-of-line (HOL) blocking at the transport level, and it’s the Achilles’ heel of HTTP/2 under real-world conditions.

HTTP/3 replaces TCP entirely with QUIC, a protocol built on UDP that bakes TLS 1.3 directly into its handshake. Each stream is independently flow-controlled, so a lost packet on one stream doesn’t stall the others. The connection is identified by a connection ID rather than the IP/port tuple, so it survives network changes.

When a mobile user walks from Wi-Fi into cellular, an HTTP/2 connection dies. New TCP handshake, new TLS negotiation, new slow start. With HTTP/3, the connection migrates transparently because QUIC doesn’t care that the source IP changed.

Where HTTP/3 Wins

The gains aren’t uniform. They’re conditional on network quality, and that’s the key insight most comparisons gloss over.

Connection setup is where you see the most consistent improvement. HTTP/2 needs two to three round trips before the first byte of application data: TCP SYN/ACK, then the TLS handshake. HTTP/3 collapses this into a single round trip. For repeat visitors, QUIC supports 0-RTT resumption — the client sends data with its very first packet. On a 150ms RTT (Round-Trip Time) link, that’s 300ms saved before a single byte of your content loads.
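The arithmetic above can be sketched in a few lines. This is a back-of-the-envelope model, not a protocol implementation: the constants are illustrative, and real handshakes also include server processing time and slow start.

```python
# Rough time-to-first-byte arithmetic for connection setup on a
# 150 ms round-trip link. Numbers are illustrative and ignore
# server processing time and congestion-control ramp-up.
RTT_MS = 150

# HTTP/2 over TCP + TLS 1.3: TCP handshake (1 RTT), TLS handshake
# (1 RTT), then the request/response itself (1 RTT).
http2_ttfb = 3 * RTT_MS

# HTTP/3, fresh connection: QUIC folds transport and TLS setup
# into a single round trip, then the request (1 RTT).
http3_ttfb = 2 * RTT_MS

# HTTP/3 with 0-RTT resumption: a repeat visitor sends the request
# in the very first flight.
http3_0rtt_ttfb = 1 * RTT_MS

print(http2_ttfb, http3_ttfb, http3_0rtt_ttfb)  # 450 300 150
print(http2_ttfb - http3_0rtt_ttfb)             # 300 ms saved for a repeat visitor
```

The 300ms figure in the text corresponds to the last line: the two handshake round trips that 0-RTT resumption eliminates relative to a fresh HTTP/2 connection.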

Lossy networks are where the difference becomes dramatic. Published benchmarks from Cloudflare, Akamai, and academic studies converge on a consistent pattern: on clean, low-latency wired connections, HTTP/2 and HTTP/3 perform nearly identically. Sometimes HTTP/2 is marginally more efficient because TCP benefits from decades of kernel-level optimization, while QUIC typically runs in user space with slightly higher CPU overhead.

But introduce 2% packet loss, a completely normal condition on mobile or congested Wi-Fi, and HTTP/3 delivers page loads roughly 30–55% faster (exact figures vary by benchmark and workload). Independent stream recovery means your CSS isn’t waiting on a retransmit that has nothing to do with it. In HTTP/2, loss in one TCP segment blocks all higher-level streams until TCP retransmits and reorders the data; in HTTP/3, each QUIC stream has independent flow control and retransmission, so loss on one stream does not stall the others at the transport level.
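A toy model makes the stall concrete. Assume three multiplexed streams and one lost packet whose retransmission costs an extra round trip; all stream names and timings below are illustrative, not measurements.

```python
# Toy head-of-line-blocking model: three streams share one
# connection, one packet on the "html" stream is lost, and its
# retransmission takes an extra round trip (150 ms).
RTT_MS = 150
base_done_ms = {"html": 50, "css": 60, "js": 70}  # loss-free finish times
lost_stream = "html"

# HTTP/2: one ordered TCP byte stream, so the gap left by the lost
# packet stalls every multiplexed stream until the retransmit lands.
http2_done = {s: t + RTT_MS for s, t in base_done_ms.items()}

# HTTP/3: QUIC recovers per stream, so only the stream that
# actually lost data waits for the retransmission.
http3_done = {
    s: t + (RTT_MS if s == lost_stream else 0)
    for s, t in base_done_ms.items()
}

print(http2_done["css"], http3_done["css"])  # 210 60
```

In this sketch the CSS stream finishes three and a half times sooner over HTTP/3 even though it never lost a packet, which is exactly the effect the benchmarks measure.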

Mobile users benefit the most. The combination of higher latency, intermittent packet loss, and frequent network transitions makes HTTP/3’s design advantages compound. Some studies report up to 88% better throughput during network migration scenarios, though a headline number like that deserves a grain of salt.

Where HTTP/2 Is Still Fine

If your architecture is primarily backend services talking to each other inside a VPC or data center, HTTP/2 is not holding you back. The RTT between services in the same availability zone is sub-millisecond, packet loss is negligible, and connections don’t migrate. The handshake savings are irrelevant when connections are long-lived and pooled.

HTTP/2 also has a slight edge in raw CPU efficiency at high throughput. QUIC’s user-space implementation means more context switches and more CPU per byte. If you’re pushing multi-gigabit traffic through a reverse proxy, that overhead is worth considering — though QUIC implementations are improving rapidly, and hardware offload support is emerging.

For REST APIs consumed by server-side clients in stable environments, HTTP/2 with connection pooling and keep-alives gives you excellent performance without the operational complexity of rolling out QUIC.

Practical Guidance for Your Stack

Here’s how I’d think about it depending on what you’re building:

Public-facing web apps and SPAs — enable HTTP/3 on your CDN and load balancer. This is low-effort if you’re already behind Cloudflare, AWS CloudFront, or similar. Your mobile users will thank you, and desktop users won’t notice a difference (which is fine).

Mobile-first applications — HTTP/3 should be a priority. The connection migration alone eliminates an entire class of reliability issues. If your app downloads assets or syncs data in the background, the resilience improvement is substantial.

Service-to-service communication — stay on HTTP/2. You’re not solving a real problem by introducing QUIC here, and you’d be adding operational surface area for no measurable gain. gRPC over HTTP/2 remains a solid choice for internal APIs.

Global distribution with users in high-latency regions — HTTP/3 makes a meaningful difference. Real-world measurements show the biggest improvements in regions with higher baseline latency and less reliable infrastructure. If your users are in Southeast Asia, South America, or sub-Saharan Africa, HTTP/3 narrows the experience gap.

Video streaming and real-time media — the independent stream recovery is a natural fit. Published data from early QUIC deployments showed up to 18% fewer rebuffer events for video streaming. If you’re building anything latency-sensitive that hits consumer networks, HTTP/3 helps.

The CPU Question

This comes up often enough to address directly. Yes, QUIC uses more CPU than TCP. The kernel has had decades to optimize TCP, and QUIC runs in user space. In benchmarks at very high throughput, you’ll see a measurable difference.

But for most applications, this isn’t the bottleneck. Your application logic, database queries, and serialization cost far more than the transport layer. And the QUIC ecosystem is maturing fast — kernel-bypass implementations, io_uring integration, and hardware offload are all active areas of development.

Unless you’re operating at the scale where you’re counting cycles per packet, the CPU overhead of QUIC is noise in your overall resource profile.

Bottom Line

HTTP/3 isn’t a magic speed boost. On a clean network with low latency, you won’t see a difference. But networks aren’t clean — especially the last mile to your users’ devices. HTTP/3 was designed for the internet as it actually is: lossy, mobile, and unpredictable.

If you’re serving end users over the public internet, HTTP/3 gives you measurably better reliability and latency where it matters most. If you’re building internal services in a controlled environment, HTTP/2 remains an excellent choice.

The good news: this isn’t an either-or decision. Modern servers and CDNs negotiate the best available protocol automatically. Enable HTTP/3, let your infrastructure advertise it, and let clients that support it benefit while everything else falls back to HTTP/2 seamlessly.
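Under the hood, that negotiation typically works via the Alt-Svc response header (RFC 7838): the server advertises h3 support, clients that speak QUIC switch over on later requests, and everyone else ignores the header. Here is a deliberately simplified sketch of the client-side check; the sample header value is illustrative, and real clients also handle parameters like `ma` (max age) and HTTPS DNS records.

```python
# Simplified Alt-Svc check: does the header advertise an HTTP/3
# ("h3") alternative service? Real parsers handle quoting and
# parameters more carefully; this is a sketch.
def supports_h3(alt_svc: str) -> bool:
    """Return True if any advertised alternative uses the h3 ALPN ID."""
    for entry in alt_svc.split(","):          # alternatives are comma-separated
        protocol = entry.strip().split("=", 1)[0].strip()
        if protocol == "h3":
            return True
    return False

# Example header as a server behind a CDN might send it.
sample = 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'
print(supports_h3(sample))  # True
```

A client that sees this header can attempt QUIC on the next request and silently fall back to HTTP/2 if the UDP path is blocked, which is why enabling HTTP/3 is a safe, incremental change.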