HTTP/3 at your fingertips!

Sagar Desarda
5 min read · May 1, 2020
Speed thrills

As the specs for HTTP/2 were finalized by the IETF in 2015, the entire web experience got a whole lot faster. According to W3Techs, 44.1% of websites in the world are on HTTP/2. While adoption continues to grow, there are some inherent challenges, such as TCP Head-Of-Line blocking. There was also a need to optimize the protocol further to improve web performance, especially knowing that the next major areas for business growth are geos where internet connectivity isn’t like in the West; namely Africa, India, etc.

Source: Google

HTTP-over-QUIC, which has been in an experimental stage for almost the entire time we have had HTTP/2, is now essentially HTTP/3.

Lowdown on the new HTTP/3 protocol

To understand HTTP/3, it is first important to understand what QUIC is. It stands for ‘Quick UDP Internet Connection.’

QUIC is a transport layer protocol which is meant to reduce latency as compared to TCP. It is similar to TCP+TLS+HTTP/2 implemented on UDP. Since TCP is implemented in operating system kernels, firewalls, load balancers, NATs, etc., it is not exactly feasible to make changes to the TCP protocol itself. QUIC, on the other hand, is built on top of UDP and doesn’t have any such limitations.
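To make that kernel-versus-user-space point concrete, here is a minimal Python sketch (purely illustrative, not a real QUIC implementation): a TCP socket gets reliability, ordering and congestion control from the kernel, while a QUIC endpoint opens a plain UDP socket and provides those guarantees itself in user space.

```python
import socket

# TCP: the kernel owns retransmission, ordering and congestion control.
# Changing that behaviour means changing the OS, firewalls, load balancers, NATs...
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# QUIC: just a UDP socket as far as the kernel is concerned.
# Loss recovery, stream ordering, congestion control and TLS all run
# inside the application (or a user-space library), so they can evolve
# with a simple software update.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.type, udp_sock.type)  # SOCK_STREAM vs SOCK_DGRAM

tcp_sock.close()
udp_sock.close()
```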

Some of the key features of HTTP/3 are:

  • Multiplexing without TCP Head-Of-Line blocking
  • Faster connection establishment
  • Header compression using QPACK algorithm
  • Improved congestion control
  • In-order delivery of packets in a single stream
  • Leverages TLS 1.3
  • More effective server push, with client’s consent
  • Prioritization

Multiplexing without TCP Head-Of-Line blocking

With HTTP/2, the client can send multiple requests to the server over the same TCP connection without having to wait for the previous requests to complete. While this is an improvement over HTTP/1.1, there is still a problem: one lost packet results in all the streams waiting until that packet is retransmitted and received.

Similar to a slow mover who causes the whole queue behind them to back up, one lost packet in HTTP/2 results in every stream waiting for that packet to be delivered.

With HTTP/3 (HTTP over QUIC), if you lose packets between the client and the server, only the particular stream that lost the packet gets blocked: UDP imposes no ordering across the connection, and QUIC orders each stream independently. As a result, only that one small stream is impacted instead of the entire connection.
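A toy simulation of that difference (not the real wire format): over a single ordered TCP byte stream, data behind a lost packet is held back for every HTTP/2 stream, while QUIC's independently ordered streams only hold back the stream that actually lost data.

```python
# Toy model: packets carry data for streams A and B.
# Each packet: (tcp_seq, stream, stream_seq, payload).
# Packet tcp_seq=2 (stream A, chunk 2) is lost and retransmitted last.
received = [(1, "A", 1, "a1"), (3, "B", 1, "b1"), (4, "B", 2, "b2"),
            (5, "A", 3, "a3"), (6, "B", 3, "b3"), (2, "A", 2, "a2")]

def tcp_delivery(pkts):
    """HTTP/2 over TCP: one ordered byte stream. Everything behind the
    hole at tcp_seq=2 is buffered until the retransmission arrives."""
    out, next_seq, buffered = [], 1, {}
    for tcp_seq, stream, _, data in pkts:
        buffered[tcp_seq] = (stream, data)
        while next_seq in buffered:
            out.append(buffered.pop(next_seq))
            next_seq += 1
    return out

def quic_delivery(pkts):
    """HTTP/3 over QUIC: each stream is ordered on its own, so stream B
    completes immediately; only stream A waits for its own lost chunk."""
    out, next_seq, buffered = {}, {}, {}
    for _, stream, seq, data in pkts:
        buffered.setdefault(stream, {})[seq] = data
        next_seq.setdefault(stream, 1)
        out.setdefault(stream, [])
        while next_seq[stream] in buffered[stream]:
            out[stream].append(buffered[stream].pop(next_seq[stream]))
            next_seq[stream] += 1
    return out

print(tcp_delivery(received))   # stream B's data is stuck behind A's lost packet
print(quic_delivery(received))  # {'A': ['a1', 'a2', 'a3'], 'B': ['b1', 'b2', 'b3']}
```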

Seamless Experience

If you look at connection identity with, say, HTTP/2, a connection is identified by the IP addresses and corresponding ports on each end (example: 192.168.1.2:443).

QUIC has implemented its own connection identity (a connection ID) independent of IPs and ports. This makes switching from Wi-Fi to cellular much more seamless and smooth, as you don’t need to renegotiate your connection or TLS; your earlier connection is still valid.
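A rough sketch of the difference in how connections are looked up (the dictionary keys below are illustrative, not actual kernel or QUIC data structures): keyed by the address 4-tuple, a Wi-Fi-to-cellular switch looks like a brand-new connection; keyed by a connection ID, the same session simply continues from the new address.

```python
import uuid

# TCP-style identity: the 4-tuple (src ip, src port, dst ip, dst port).
# When the phone moves from Wi-Fi to cellular, the source address changes
# and the old key no longer matches -- a new handshake is required.
tcp_connections = {("192.168.1.2", 53211, "93.184.216.34", 443): "session state"}

# QUIC-style identity: a connection ID chosen independently of addresses.
# (uuid4() here is just a stand-in; real QUIC connection IDs are opaque
# byte strings exchanged during the handshake.)
conn_id = uuid.uuid4().bytes[:8]
quic_connections = {conn_id: "session state"}

# Client switches networks: new source IP/port, same connection ID.
print(tcp_connections.get(("10.0.0.7", 40122, "93.184.216.34", 443)))  # None -> reconnect
print(quic_connections.get(conn_id))  # "session state" -> seamless migration
```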

Header compression using QPACK algorithm

HTTP/2 supports the HPACK header compression algorithm, which was developed with attacks like CRIME in mind. HTTP/3 leverages QPACK, which is similar to HPACK but modified so that it can work with streams that are delivered out of order.
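A toy illustration of the idea behind HPACK/QPACK-style compression (this is not the real QPACK wire format, static table or stream-safety rules): both endpoints keep an indexed table of headers they have already seen, so a repeated header can be sent as a small index instead of the full name and value.

```python
# Toy header-table compression, loosely inspired by HPACK/QPACK dynamic
# tables. Real QPACK also has a fixed static table and careful rules for
# referencing entries across out-of-order streams; none of that is modelled here.
class ToyHeaderTable:
    def __init__(self):
        self.table = {}     # (name, value) -> index
        self.next_index = 1

    def encode(self, headers):
        encoded = []
        for name, value in headers:
            key = (name, value)
            if key in self.table:
                encoded.append(("indexed", self.table[key]))  # a few bits on the wire
            else:
                self.table[key] = self.next_index
                self.next_index += 1
                encoded.append(("literal", name, value))      # full header sent once
        return encoded

enc = ToyHeaderTable()
first = enc.encode([(":method", "GET"), (":path", "/"), ("user-agent", "demo")])
second = enc.encode([(":method", "GET"), (":path", "/style.css"), ("user-agent", "demo")])
print(first)   # all literals
print(second)  # repeated headers come out as small index references
```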

Improved congestion control

The congestion control algorithms are moved from kernel space to user space, which makes them easier to run and test. HTTP/3 also adopts NewReno as its default algorithm, which halves the congestion window on loss, sets the slow start threshold to the new congestion window and then enters a recovery period. This greatly improves packet handling when one or two packets are lost.
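As a rough sketch of that NewReno-style loss response (constants and structure are simplified from the QUIC recovery draft, not copied from it): on loss the congestion window is halved, the slow-start threshold is set to the new window, and the sender stays in a recovery period until packets sent after the loss are acknowledged.

```python
# Simplified NewReno-style reaction to a loss event. Real implementations
# also track persistent congestion, pacing, ack-based growth, etc.
MAX_DATAGRAM_SIZE = 1200

cwnd = 40 * MAX_DATAGRAM_SIZE   # current congestion window (bytes)
ssthresh = float("inf")         # slow-start threshold
recovery_until = None           # packet number that ends the recovery period

def on_packet_lost(largest_sent_packet_number):
    """Halve the window, remember the threshold, and enter recovery."""
    global cwnd, ssthresh, recovery_until
    cwnd = max(cwnd // 2, 2 * MAX_DATAGRAM_SIZE)  # never below a small floor
    ssthresh = cwnd
    recovery_until = largest_sent_packet_number

on_packet_lost(largest_sent_packet_number=1000)
print(cwnd, ssthresh, recovery_until)  # window halved, recovery started
```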

TLS 1.3

TLS 1.3 is the most secure TLS version yet and speeds up the connection with a feature called 0-RTT (zero round-trip time resumption). It also mandates PFS (Perfect Forward Secrecy), so recorded traffic cannot be passively decrypted later even if a long-term key is compromised.

Source: ssl2buy.com
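To make the 0-RTT claim above concrete, here is the rough round-trip arithmetic before the first byte of application data can be sent (the counts are the commonly cited idealized values, ignoring DNS, packet loss and middlebox quirks).

```python
# Idealized round trips spent on connection setup before the first HTTP
# request can go out.
setup_rtts = {
    "TCP + TLS 1.2":            1 + 2,  # TCP handshake, then 2-RTT TLS handshake
    "TCP + TLS 1.3":            1 + 1,  # TCP handshake, then 1-RTT TLS handshake
    "QUIC (TLS 1.3, new conn)": 1,      # transport + TLS combined in one handshake
    "QUIC (TLS 1.3, 0-RTT)":    0,      # resumed connection: data in the first flight
}
for stack, rtts in setup_rtts.items():
    print(f"{stack}: {rtts} RTT(s) before the request is sent")
```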

Some of the older, insecure ciphers and constructions such as AES-CBC, SHA-1, DES, RC4, MD5, 3DES, EXPORT-strength ciphers (responsible for FREAK and Logjam) and arbitrary Diffie-Hellman groups are left out of the 1.3 implementation.

Prioritization

One of the HTTP/3 stream frames is called PRIORITY. It is used to set the priority of and dependencies between streams: the frame can make a specific stream depend on another specific stream, and it can set a “weight” on a given stream. A dependent stream should only be allocated resources if either all of the streams it depends on are closed or it is not possible to make progress on them. A stream weight is a value between 1 and 256, and streams with the same parent should be allocated resources proportionally based on their weight.
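A small sketch of that proportional-share rule for sibling streams (the resource names and numbers are made up; this is only the weight arithmetic, not how an implementation actually schedules bytes).

```python
# Sibling streams under the same parent share resources proportionally
# to their weights (1..256). Made-up example: three resources competing
# for 1000 units of bandwidth in one scheduling round.
weights = {"main.css": 256, "app.js": 128, "hero.jpg": 32}
total = sum(weights.values())

bandwidth = 1000  # arbitrary units available this round
for stream, weight in weights.items():
    share = bandwidth * weight / total
    print(f"{stream}: weight {weight} -> {share:.0f} units")
# main.css gets twice app.js's share and eight times hero.jpg's.
```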

Server Push

A server push is essentially sending resources to the client before the client requests them. It is intended to improve application performance: ideally, the client would already have the content it was about to request.

In HTTP/3, the client can set a limit on how many push requests it will accept and informs the server about it. If the server goes over that limit, it results in a connection error. If the server deems it likely that the client will need a resource it hasn’t requested yet, it can send a PUSH_PROMISE frame and then send the actual response over a new stream.

Even if the pushes are accepted by the client, it can still send a CANCEL_PUSH frame to the server and cancel each individual pushed stream, if it feels the need to do so.
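A sketch of the push bookkeeping the last few paragraphs describe (frame handling and stream management are heavily simplified; only the limit check and cancellation are modelled): the client advertises how many pushes it will accept, the server may not promise beyond that limit, and the client can still cancel any individual promised push.

```python
# Simplified model of HTTP/3 push accounting: the client sets a push limit
# (in the real protocol via a MAX_PUSH_ID frame), the server sends
# PUSH_PROMISEs up to that limit, and the client may cancel any of them
# with CANCEL_PUSH.
class ToyPushState:
    def __init__(self, max_push_id):
        self.max_push_id = max_push_id  # advertised by the client
        self.promised = set()
        self.cancelled = set()

    def server_push_promise(self, push_id, resource):
        if push_id > self.max_push_id:
            raise ConnectionError("push limit exceeded -> connection error")
        self.promised.add(push_id)
        print(f"PUSH_PROMISE {push_id}: {resource}")

    def client_cancel_push(self, push_id):
        self.cancelled.add(push_id)
        print(f"CANCEL_PUSH {push_id}")

state = ToyPushState(max_push_id=2)
state.server_push_promise(0, "/style.css")
state.server_push_promise(1, "/app.js")
state.client_cancel_push(1)             # client decides it doesn't want app.js after all
# state.server_push_promise(3, "/x")    # would exceed the limit -> connection error
```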

The path ahead looks exciting!

That said, all is not rosy. There is a fair share of challenges as well: many people still think that UDP is unreliable, many ISPs and organizations have blocked or de-prioritized UDP traffic, and QUIC can take up a lot of CPU resources. There are also some concerns that performance could be degraded by attacks like the Server Config Replay Attack. HTTP/3 is still in draft state, but as the work goes on to further improve and optimize it, the upgrade to HTTP/3 will be exciting!


The views in the articles are mine alone and do not represent my employer.