The Hypertext Transfer Protocol (HTTP) is one of the most widely used application-level protocols on the internet. It sprang into life in 1991 as a rudimentary first version, HTTP 0.9, and evolved over time into HTTP 1.0 and then HTTP 1.1. Each version resolved ambiguities of its predecessors, which helped HTTP 1.1 remain the dominant version on the web for a long span of time. However, as the web continued to advance, the protocol kept receiving iterative improvements, culminating in HTTP 2 in 2015. As for the future of the protocol, HTTP 3 is the next major version, introduced very recently in 2019.

In this blog, we will look at both HTTP 2 and HTTP 3, along with a brief overview of the evolution of the HTTP protocol in general.

Evolution of HTTP

HTTP (version 0.9) began with the goal of a simple one-line protocol where the request consisted of a single line including the

  • GET method
  • And the path of the requested resource

The response to the request was a single hypertext document: no headers or any other kind of metadata, just the raw HTML.

It couldn’t get any simpler, right?

Over time, HTTP 1.0 and then HTTP 1.1 overcame various shortcomings of this early design by introducing feature enhancements like:

  • Pipelined connections
  • Compression/decompression
  • Greater bandwidth savings through cache support
  • Support for various methods – GET, POST, HEAD, PUT, DELETE, TRACE, OPTIONS


Limitations of HTTP 1.1

The main standing problem with the otherwise successful 1.1 version of HTTP was handling multiple requests. This version could process only a single request at a time per TCP connection. As a result, browsers were left with no choice but to open multiple TCP connections when processing more than one request simultaneously.

Doing that, however, was not a very good idea, as each extra connection duplicates connection-setup work and data across connections. As web applications grew in scope, complexity, and functionality, these workarounds led to various loopholes in performance and security.
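This one-request-at-a-time constraint can be demonstrated with Python's standard library: on a single keep-alive connection, the client must fully read each response before it can send the next request. The local echo server and paths below are purely illustrative.

```python
# Demonstrate the HTTP/1.1 constraint: one request in flight per connection.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # keep-alive, so one connection serves all requests

    def do_GET(self):
        body = self.path.encode()   # echo the requested path back
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
results = []
for path in ("/a", "/b"):
    conn.request("GET", path)        # strictly one request at a time
    response = conn.getresponse()
    results.append(response.read())  # must drain this response before the next request
conn.close()
server.shutdown()

print(results)  # [b'/a', b'/b'], fetched strictly one after another
```

To fetch both paths concurrently under HTTP 1.1, a client would have to open a second connection, which is exactly the workaround browsers resorted to.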

Why was HTTP 2 developed?

To overcome the shortcomings of previous HTTP versions, HTTP 2 was developed for improved performance and robustness.

About HTTP 2

The main goal of this version of HTTP was to minimize latency in the processing of browser requests. This was achieved mainly through the introduction of the following capabilities:


Multiplexing

This is the fundamental feature of HTTP 2: it allows multiple requests and responses to be sent and received asynchronously over a single TCP connection.

How does it work?

HTTP 2 is a binary protocol. HTTP 2 requests are broken down into frames, which are binary pieces of data. Each request and response is associated with a unique identifier called a stream id, which identifies the request or response a specific frame belongs to.

  • The client first divides the request into binary frames and assigns the request's stream id to those frames.
  • It then establishes a TCP connection with the server and starts sending the frames.
  • Once the server is ready with a response, it splits the response into frames and tags them with the same stream id.
  • These frames are then sent over the same connection back to the client.

This process happens in parallel across streams, which reduces overall network latency manifold.
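The framing steps above can be sketched in miniature; the frame size, helper names, and payloads are illustrative stand-ins, not the real HTTP 2 wire format:

```python
# Simulate HTTP 2-style multiplexing: split each request into frames tagged
# with a stream id, interleave the frames over one "connection", and
# reassemble them on the other side by stream id.
from collections import defaultdict
from itertools import zip_longest

FRAME_SIZE = 4  # tiny frames so the interleaving is visible

def to_frames(stream_id, payload):
    """Divide a request body into (stream_id, chunk) frames."""
    return [(stream_id, payload[i:i + FRAME_SIZE])
            for i in range(0, len(payload), FRAME_SIZE)]

def interleave(*frame_lists):
    """Round-robin frames from several streams onto one wire."""
    wire = []
    for group in zip_longest(*frame_lists):
        wire.extend(frame for frame in group if frame is not None)
    return wire

def reassemble(wire):
    """Group frames by stream id to recover each original request."""
    streams = defaultdict(bytes)
    for stream_id, chunk in wire:
        streams[stream_id] += chunk
    return dict(streams)

requests = {1: b"GET /index.html", 3: b"GET /style.css"}
wire = interleave(*(to_frames(sid, body) for sid, body in requests.items()))
assert reassemble(wire) == requests  # both requests survive the interleaving
```

Because each frame carries its stream id, neither side needs the frames of one request to arrive contiguously, which is what lets both requests share one connection.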


Header Compression

HTTP 2 uses header compression (HPACK) for improved performance. Former versions of HTTP transmitted request and response headers as plain text, whereas this version transmits them in a compressed binary format, reducing the amount of data on the wire and making it cheaper for the client to interpret the information sent, in turn enhancing page performance.
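The idea behind HPACK can be modeled in a few lines; the table contents and encoding below are simplified stand-ins for the real static and dynamic tables defined by the spec:

```python
# Toy model of HPACK-style header compression: header fields found in a
# shared table are sent as a small index instead of full text.
# This is a simplified illustration, not the actual HPACK encoding.
STATIC_TABLE = [
    (":method", "GET"),
    (":path", "/"),
    ("accept-encoding", "gzip, deflate"),
]
INDEX = {pair: i for i, pair in enumerate(STATIC_TABLE)}

def encode(headers):
    """Emit ("idx", n) for known pairs, ("lit", pair) otherwise."""
    return [("idx", INDEX[pair]) if pair in INDEX else ("lit", pair)
            for pair in headers]

def decode(encoded):
    """Reverse the encoding using the same shared table."""
    return [STATIC_TABLE[value] if kind == "idx" else value
            for kind, value in encoded]

headers = [(":method", "GET"), (":path", "/"), ("x-custom", "abc")]
encoded = encode(headers)
assert encoded[0] == ("idx", 0)    # a common header shrank to an index
assert decode(encoded) == headers  # lossless round trip
```

Because both sides share the same table, repeated headers such as ":method: GET" cost an index rather than their full text on every request.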

Resource Prioritization

As the name suggests, this feature allows essential resources to be loaded first. Developers can assign priority and dependency levels to their resources, so that visitors receive the code required by a specific web page in the most useful order.

Server Push

This feature allows the server to send resources to the client before the client requests them. Proactively pushing responses into the client's cache enhances performance.
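One common way applications hint at server push is a "Link" response header with rel=preload, which some HTTP 2 servers and CDNs translate into a push of the named resource (support varies by server; the path and helper name below are illustrative):

```python
# Build a "Link: ...; rel=preload" response header. Some HTTP 2 servers
# (support varies) turn this hint into a PUSH_PROMISE for the resource.
# The path and function name here are illustrative, not a standard API.
def push_hint(path: str, as_type: str) -> str:
    return f"Link: <{path}>; rel=preload; as={as_type}"

header = push_hint("/static/app.css", "style")
print(header)  # Link: </static/app.css>; rel=preload; as=style
```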

Quick overview of QUIC

QUIC stands for Quick UDP Internet Connections. It is a transport-layer protocol designed to reduce latency compared to TCP, and it greatly reduces overhead during connection setup. Since TLS is demanded by most HTTP connections, QUIC makes the exchange of setup keys and supported protocols part of the initial handshake itself.
QUIC uses UDP as its foundation, which does not itself provide loss recovery; QUIC implements loss recovery per stream, so if an error occurs in one stream, the protocol stack can continue serving other streams independently. This greatly enhances performance.

In QUIC, packets are encrypted individually, so decrypted data never ends up waiting on partial packets. This is not possible under TCP, where encryption operates on a byte stream and the protocol stack is unaware of higher-layer boundaries within that stream. Such boundaries can be negotiated by the layers on top, but QUIC aims to achieve all of this in a single handshake.

One concern about the transition from TCP to UDP is that TCP is widely adopted and many middleboxes on the internet are tuned for TCP and rate-limit or even block UDP. Google carried out a number of experiments to characterize this and found that only a small fraction of connections were blocked in this manner. This led to the use of a fallback-to-TCP system: Chromium's network stack opens both a QUIC and a traditional TCP connection at the same time, which allows it to fall back with zero additional latency.
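The fallback race described above can be sketched with asyncio; the two connection attempts below are simulated stand-ins with made-up timings, not real QUIC or TCP code:

```python
# Race a (simulated) QUIC attempt against a (simulated) TCP attempt and keep
# whichever finishes first, so a blocked or slow UDP path adds no latency.
import asyncio

async def try_quic():
    await asyncio.sleep(0.05)  # pretend the QUIC handshake succeeds quickly
    return "quic"

async def try_tcp():
    await asyncio.sleep(0.10)  # pretend TCP + TLS takes longer
    return "tcp"

async def connect():
    tasks = [asyncio.ensure_future(try_quic()),
             asyncio.ensure_future(try_tcp())]
    done, pending = await asyncio.wait(tasks,
                                       return_when=asyncio.FIRST_COMPLETED)
    for task in pending:       # abandon the slower attempt
        task.cancel()
    return done.pop().result()

winner = asyncio.run(connect())
print(winner)  # quic
```

If the QUIC attempt were blocked by a middlebox, the TCP task would win the race instead and the client would proceed over TCP without having waited for a timeout.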

QUIC forms the basis for HTTP 3 in general.

About HTTP 3 and how it differs from HTTP 2

HTTP 3 is the more evolved version of the HTTP 2 protocol. It is like its predecessor in most ways, but differs in that HTTP 3 runs over QUIC.

But before we proceed further, we should understand why we want to upgrade from HTTP 2 in the first place. The primary issue with HTTP 2 is that on slower networks, where packets drop and network quality degrades, the single TCP connection slows down the entire exchange and blocks additional data transfer. In addition, protocol ossification was another problem: devices configured to accept only familiar TCP or UDP traffic between clients and servers would not allow any deviation, such as protocol updates or new functionality. Even the slightest change was rejected in a jiffy, because these devices do not want to deal with it.

QUIC (Quick UDP Internet Connections) looks like a plausible solution to the intrinsic issues peculiar to HTTP 2, as it is fast, secure, and not prone to issues like ossification.

HTTP 3 is the next iteration of the conventional and trusted HTTP protocol family. No doubt it is very similar to HTTP 2, but it also offers a few significant new features. One of the most substantial reasons for favoring HTTP 3 is the impact it would have on the world of APIs and the Internet of Things (IoT). Network latency and transmission errors with packet loss are the primary reasons for their unreliability in the real world.

With the advent of HTTP 3, many of these issues would be nullified to a large extent, providing an efficient and stable environment for them to function. Unfortunately, there aren't any core-function APIs available for it as of now like those we have for TCP. Also, enabling HTTP 3 requires specific libraries and implementations, which might lead to inflexibility in business adoption of HTTP 3 over QUIC.


Similarities between HTTP 2 and HTTP 3

  • Both protocols offer server push mechanisms.
  • They also offer multiplexing over a single connection via streams.
  • Prioritization is likewise done based on streams.
  • Both protocols use header compression; HPACK (HTTP 2) and QPACK (HTTP 3) are similar in design.


Differences between HTTP 3 and HTTP 2

  • Transport: HTTP 3 is based on QUIC, a transport-layer protocol that handles streams on its own, while HTTP 2 is TCP-oriented and handles streams in the HTTP layer.
  • Handshakes: HTTP 3 has much quicker handshakes thanks to QUIC, whereas HTTP 2 (TCP + TLS) is comparatively slower.
  • Encryption: HTTP 3 does not exist in an insecure or unencrypted version, while HTTP 2 can be implemented and used without HTTPS.
  • Early data: HTTP 3 has more reliable early-data support thanks to QUIC's 0-RTT handshakes, while TCP Fast Open with TLS usually sends less data and often faces issues.
  • Negotiation: because it is based on QUIC, HTTP 3 needs a specific response header (Alt-Svc) to inform the client of its availability, while HTTP 2 can be negotiated directly in a TLS handshake.


Given its pros, HTTP 3 conceptually looks more promising than HTTP 2: it overcomes HTTP 2's shortcomings of protocol ossification and delayed transmission on higher-latency networks, along with the added benefits it offers the API and IoT world.

However, the use of proprietary libraries tied to a specific protocol can be quite risky and should be analyzed thoroughly against the benefits delivered, to see whether it is the best-suited solution for a particular use case.

HTTP 3 support was added as recently as 26 September 2019 by Google Chrome, Mozilla Firefox, and Cloudflare. It is too early to comment on the long-term sustainability of this protocol version. Will this be the protocol of the future for the web, or just another architectural improvement?
