HTTP/2 and HTTP/3 for REST APIs

Multiplexing, header compression, QUIC, and 0-RTT — how modern HTTP improves API performance

HTTP/1.1 Problems

HTTP/1.1 was designed in 1997. Modern REST APIs — often making dozens of parallel requests — hit its fundamental limits daily:

Problem | Impact
Head-of-line (HOL) blocking | One slow response blocks all subsequent responses in the same TCP connection
6-connection browser limit | Browsers open max 6 TCP connections per origin — queuing all excess requests
Repeated headers | HTTP/1.1 sends all headers (Cookie, Authorization, User-Agent) with every request — often >1KB of overhead per request
No multiplexing | Only one request per TCP connection at a time (unless pipelining, which is rarely used)

HTTP/2 Features

HTTP/2 (RFC 9113, updated from RFC 7540) solves these problems while remaining fully compatible with HTTP/1.1 semantics — same methods, status codes, and headers.

Feature | HTTP/1.1 | HTTP/2
Protocol | Text | Binary frames
Connections | Multiple TCP per origin | Single TCP with multiplexed streams
HOL blocking | ❌ Per-connection blocking | ✅ Stream-level (TCP-level blocking remains)
Header compression | ❌ None | ✅ HPACK (references repeated headers)
TLS | Optional | Required in practice (all browsers)
gRPC | ❌ Not supported | ✅ Built on HTTP/2
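
Because HTTP/2 keeps HTTP/1.1 semantics, the same handler can serve both protocol versions. A minimal sketch using Node's built-in http2 module with ALPN fallback; key.pem, cert.pem, and the port are placeholders:

// Sketch: one server, same REST semantics over HTTP/2 or HTTP/1.1
const http2 = require('node:http2');
const fs    = require('node:fs');

const server = http2.createSecureServer({
  key:  fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem'),
  allowHTTP1: true   // clients that cannot speak h2 fall back via ALPN
});

// The compatibility API fires 'request' for both protocol versions
server.on('request', (req, res) => {
  res.setHeader('content-type', 'application/json');
  res.end(JSON.stringify({ httpVersion: req.httpVersion })); // "2.0" or "1.1"
});

server.listen(8443);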

Multiplexing

HTTP/2 allows multiple requests and responses over a single TCP connection simultaneously. Instead of needing 6 parallel connections, one HTTP/2 connection handles all streams. This reduces TCP handshake overhead and makes domain sharding an anti-pattern.
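
To see multiplexing from the client side, here is a rough sketch using Node's built-in http2 client: 30 requests share a single connection and a single TLS handshake. The local origin, port, and /api/users path are placeholders.

// Sketch: 30 parallel GETs multiplexed over one HTTP/2 session
const http2 = require('node:http2');

const session = http2.connect('https://localhost:8443', {
  rejectUnauthorized: false   // only for local, self-signed certificates
});

const get = (path) => new Promise((resolve, reject) => {
  const req = session.request({ ':method': 'GET', ':path': path });
  let body = '';
  req.setEncoding('utf8');
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => resolve(body));
  req.on('error', reject);
  req.end();
});

Promise.all(Array.from({ length: 30 }, () => get('/api/users')))
  .then((responses) => {
    console.log(`${responses.length} responses over a single TCP connection`);
    session.close();
  })
  .catch(console.error);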

HPACK Header Compression

HTTP/2 builds a dynamic table of previously sent headers. Instead of sending Authorization: Bearer eyJ... on every request, it sends a table index — reducing API header overhead by 80–90%.
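
HPACK is handled inside the HTTP/2 implementation, so you never code it by hand; the following toy sketch only illustrates the dynamic-table idea (it ignores the static table, size limits, eviction, and Huffman coding).

// Toy illustration of HPACK's dynamic table (not the real wire encoding):
// a header pair is sent literally once, then referenced by index afterwards
class ToyDynamicTable {
  constructor() { this.entries = []; }

  encode(name, value) {
    const index = this.entries.findIndex(
      (e) => e.name === name && e.value === value
    );
    if (index !== -1) return { indexed: index };  // a few bytes on the wire
    this.entries.push({ name, value });
    return { literal: `${name}: ${value}` };      // full header, sent once
  }
}

const table = new ToyDynamicTable();
console.log(table.encode('authorization', 'Bearer eyJ...')); // literal on the first request
console.log(table.encode('authorization', 'Bearer eyJ...')); // { indexed: 0 } afterwards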

HTTP/2 in Node.js

// Native Node.js HTTP/2 server
const http2 = require('node:http2');
const fs    = require('node:fs');

const server = http2.createSecureServer({
  key:  fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
});

server.on('stream', (stream, headers) => {
  const method = headers[':method'];
  const path   = headers[':path'];

  if (method === 'GET' && path === '/api/users') {
    stream.respond({
      ':status': 200,
      'content-type': 'application/json',
      'cache-control': 'max-age=60'
    });
    stream.end(JSON.stringify({ users: [] }));
    return;
  }

  stream.respond({ ':status': 404 });
  stream.end();
});

server.listen(443, () => console.log('HTTP/2 server on :443'));

// HTTP/2 with Express (using http2-express-bridge)
// npm install http2-express-bridge express

const http2Express = require('http2-express-bridge');
const express = require('express');
const http2   = require('node:http2');
const fs      = require('node:fs');

const app = http2Express(express);

app.get('/api/users', (req, res) => {
  res.json({ users: [] });
});

const server = http2.createSecureServer(
  { key: fs.readFileSync('key.pem'), cert: fs.readFileSync('cert.pem') },
  app
);

server.listen(443);

HTTP/3 + QUIC

HTTP/3 (RFC 9114) runs over QUIC (RFC 9000) — a UDP-based transport protocol that replaces TCP, combines the transport and TLS 1.3 handshakes into a single round trip, and eliminates TCP-level HOL blocking.

Feature | HTTP/2 (TCP) | HTTP/3 (QUIC/UDP)
Transport | TCP | QUIC (UDP)
Handshake | 1-RTT TCP + 1-RTT TLS = 2 RTT | 0-RTT to 1-RTT (transport + TLS 1.3 combined)
HOL blocking | TCP level blocks all streams | ✅ Eliminated — each stream independent
Connection migration | ❌ IP change = reconnect | ✅ Connection ID survives IP change (mobile!)
Congestion control | TCP CUBIC/BBR | Implemented in user space (e.g. CUBIC, BBR) — easier to tune for lossy, high-latency networks
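
Node.js does not yet ship a stable HTTP/3 client, but you can at least check whether a server advertises HTTP/3 by reading its Alt-Svc response header. A minimal sketch using the built-in fetch (Node 18+); the URL is just an example:

// Sketch: does this server advertise HTTP/3? 'h3=":443"' means QUIC on UDP 443
const checkAltSvc = async (url) => {
  const res = await fetch(url);
  console.log(res.headers.get('alt-svc') ?? 'no Alt-Svc header (HTTP/3 not advertised)');
};

checkAltSvc('https://www.cloudflare.com/');   // example endpoint; replace with your own API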

0-RTT Connection Resumption

QUIC can resume a previous connection in 0 round trips — the first data packet is sent immediately, without waiting for the handshake to complete. This dramatically reduces latency for mobile clients that frequently switch networks. Because 0-RTT (early) data can be replayed, servers typically accept it only for idempotent requests such as GET.

Performance Benchmarks

Scenario | HTTP/1.1 | HTTP/2 | HTTP/3
30 parallel API requests | ~800ms (queued) | ~120ms (multiplexed) | ~110ms
Mobile on lossy network (2% packet loss) | Baseline | ~30% faster | ~65% faster (no TCP retransmit)
Same connection, subsequent request | 1-RTT | 0-RTT (TLS resumed) | 0-RTT (QUIC resumption)

For REST APIs serving browsers or mobile apps, HTTP/2 provides significant wins. HTTP/3 provides the biggest gains on mobile networks with packet loss. For server-to-server API calls on a low-latency LAN, the difference is negligible.

Enabling in Production

Nginx HTTP/2 + HTTP/3

server {
    listen 443 ssl;
    listen 443 quic;               # HTTP/3 (requires Nginx 1.25+ with quic module)
    http2 on;

    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    ssl_protocols TLSv1.3;         # HTTP/3 requires TLS 1.3

    # Advertise HTTP/3 to clients
    add_header Alt-Svc 'h3=":443"; ma=86400';

    location /api/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;    # internal proxy over HTTP/1.1 is fine
    }
}

Cloudflare (Zero Config)

Enable HTTP/2 and HTTP/3 in the Cloudflare dashboard under Speed → Optimization. Browsers that support HTTP/3 will use it for the connection to Cloudflare; Cloudflare connects to your origin over HTTP/1.1 or HTTP/2.

Impact on REST API Design

  • No more domain sharding — splitting assets across cdn1.example.com, cdn2.example.com was an HTTP/1.1 hack. HTTP/2 multiplexes everything over one connection.
  • Fewer bundling tradeoffs — HTTP/2 handles many small requests efficiently; you no longer need to concatenate all API calls into bulk endpoints purely for HTTP overhead reduction.
  • gRPC runs on HTTP/2 — if you're evaluating gRPC vs REST, know that gRPC's performance advantage comes partly from HTTP/2 multiplexing.
  • Connection pooling still matters — HTTP/2 reuses connections, so your backend connection pool should allow persistent connections to upstream services (see the sketch below).
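
A minimal sketch of the last point, assuming the upstream service still speaks HTTP/1.1: a shared keep-alive agent lets each proxied REST call reuse an existing socket instead of paying a fresh TCP + TLS handshake. The host, path, and pool sizes are placeholders.

// Sketch: persistent upstream connections with Node's built-in https agent
const https = require('node:https');

const keepAliveAgent = new https.Agent({
  keepAlive: true,       // keep idle sockets open for reuse
  maxSockets: 50,        // cap on concurrent connections per upstream host
  maxFreeSockets: 10     // idle sockets retained in the pool
});

function fetchUpstream(path) {
  return new Promise((resolve, reject) => {
    https.get(
      { host: 'internal-service.example', path, agent: keepAliveAgent },
      (res) => {
        let body = '';
        res.on('data', (chunk) => { body += chunk; });
        res.on('end', () => resolve(body));
      }
    ).on('error', reject);
  });
}

fetchUpstream('/api/users').then(console.log).catch(console.error);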