Move to HTTP/2: On the Wire for High-Speed Connection Performance


The Hypertext Transfer Protocol (HTTP) is the set of rules governing the transfer of different kinds of files, such as text, images, audio, video and other multimedia, across the World Wide Web. It defines how requests and responses are formatted and transmitted between the browser and the server.

HTTP is an application-layer protocol: when you enter a URL in your browser, the browser sends an HTTP request to the web server, which responds with the requested page. The application layer sits on top of several other layers that abstract away core network functionality, such as TCP and IP.

HTTP is a stateless protocol: each request is executed independently, with no knowledge of the requests that came before it. It underpins not only plain web pages but also the APIs consumed by technologies such as JavaScript, ActiveX and Java.
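To make the request-response model concrete, here is a minimal sketch in Go (the URL is just a placeholder): two requests to the same server are handled independently, with no state carried over between them unless the application adds it.

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Each iteration is a complete, self-contained HTTP exchange. The
    // server has no built-in memory of earlier requests; state such as
    // logins is layered on top with cookies, tokens or sessions.
    for i := 1; i <= 2; i++ {
        resp, err := http.Get("https://example.com/")
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()
        fmt.Println("request", i, "->", resp.Status)
    }
}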

HTTP/1.1: Where It All Started

HTTP was developed jointly by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C) through a series of Requests for Comments (RFCs). The protocol began as HTTP/0.9, a simple one-line protocol used to bootstrap the web, and evolved into HTTP/1.1, which was standardized in 1999 as RFC 2616.

HTTP/1.1 served the web for many years, but its age has begun to show. Loading a web page is more resource-intensive than ever, and HTTP/1.1 effectively allows only one outstanding request per TCP connection. To work around this, browsers opened multiple TCP connections to issue requests in parallel. Too many connections, however, defeat TCP's congestion control, leading to congestion events that hurt both performance and the network.

The large number of requests also sends a lot of duplicated data “on the wire.” HTTP/1.1 requests carry significant overhead, so fetching the large number of resources needed to load a modern website hurts performance.

This pushed the industry to work around the protocol with hacks such as image spriting, data inlining and domain sharding. These techniques are symptoms of fundamental problems in the protocol itself, and they introduce plenty of complications of their own when used.

People developing websites often don't think much about HTTP beyond being thankful that it exists, but it is important to understand it from a web developer's perspective.


Google’s SPDY

SPDY (pronounced “speedy”) is an experimental open networking protocol developed at Google and announced in mid-2009. Its main goal was to reduce web page load latency and improve web security by addressing some of the well-known performance limitations of HTTP/1.1.

Some of the specific goals of Google’s SPDY were:


  • Reduce deployment complexity and avoid changes in network infrastructure
  • Reduction of page load time (PLT) by 50 percent, making more efficient use of the underlying TCP connection
  • Avoid the need for any changes to content by website authors
  • Develop the new protocol in partnership with the open-source community
  • Gather real performance data to validate (or invalidate) the experimental protocol

After the initial announcement, Google’s engineers shared results from the experimental protocol, and the feedback was positive: pages loaded up to 55 percent faster.

By 2012, the experimental protocol was supported in Chrome, Firefox and Opera, and a growing number of sites were deploying SPDY within their infrastructure. Observing this trend, the HTTP Working Group (HTTP-WG) decided to take what SPDY had learned, build on and improve it, and deliver an official “HTTP/2” standard. From that point on, SPDY and HTTP/2 continued to mature in parallel.

Fast-forward to 2015, when the Internet Engineering Steering Group (IESG) reviewed and approved the new HTTP/2 standard for publication. Shortly after the approval, the Google Chrome team announced its schedule for deprecating SPDY and the NPN extension for TLS.

Since then, HTTP/2 has become one of the most extensively tested standards, with plenty of production-ready client and server implementations available.

Say Hello to HTTP/2

HTTP/2 is in the spotlight. As the approved standard for the “on the wire” protocol, all popular browsers either already support it or have committed to supporting it, and many popular sites such as Google, Facebook and Twitter are already taking advantage of HTTP/2 to deliver better performance. In the short time since the HTTP/2 and HPACK standards were approved in early 2015, their usage on the web has already surpassed that of SPDY.


HTTP/2 does not alter the application semantics of HTTP in any way. The core concepts, such as HTTP methods, status codes and header fields, remain identical. However, HTTP/2 changes how the data is framed and transported, and it lets applications be faster, simpler and more robust by allowing developers to undo many of the HTTP/1.1 workarounds previously baked into their applications.

HTTP/2 focuses on improving performance over older versions of the protocol. It does this by replacing the strict one-request-at-a-time model with multiplexing, in which many request-response exchanges share a single connection in parallel, and by adding server push, which anticipates what the client is going to need and sends it before the request is made. As a result it is much faster: the server returns not just the response to the current request, but responses to requests you haven’t made yet but will soon need to make.
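Because the semantics are unchanged, client code usually does not change at all. Here is a minimal sketch in Go, assuming the server behind the placeholder URL has HTTP/2 enabled; Go’s standard HTTP client negotiates HTTP/2 automatically over TLS and falls back to HTTP/1.1 otherwise.

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Same method, same URL, same headers and status codes as before;
    // only the framing on the wire is different.
    resp, err := http.Get("https://example.com/")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // resp.Proto reports the negotiated version, e.g. "HTTP/2.0" when
    // the server supports it and "HTTP/1.1" when it does not.
    fmt.Println(resp.Proto, resp.StatusCode)
}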

HTTP/2 not HTTP/1.2?

The specification describes an optimized expression of the semantics of the Hypertext Transfer Protocol, referred to as HTTP version 2 (HTTP/2) rather than HTTP/1.2. The major version bump is warranted because HTTP/2 introduces a new binary framing layer that is not backward compatible with HTTP/1.1 servers and clients. Although the semantics look familiar, you won’t really appreciate the differences until you work at the level of raw TCP sockets.

HTTP/2 Features

So, what’s new in HTTP/2, and how will it help you and your application or website?

Let’s take a look at the new protocol, its features, and the capabilities it has at its disposal to further enhance applications.

New Binary Framing Layer

HTTP/2 introduces a new binary framing layer, not backward compatible with HTTP/1.1, that dictates how HTTP messages are encapsulated and transferred between client and server. It is a new encoding mechanism that sits between the socket and the higher-level HTTP API: messages are broken into binary frames rather than sent as newline-delimited plain text.
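To give a feel for that framing, here is a rough, illustrative sketch in Go of the fixed 9-byte header that precedes every HTTP/2 frame (RFC 7540 defines it as a 24-bit length, an 8-bit type, 8 bits of flags and a 31-bit stream identifier). Real implementations ship their own framers, so treat this as a teaching aid rather than production code.

package main

import (
    "encoding/binary"
    "fmt"
)

// frameHeader mirrors the 9-byte header in front of every HTTP/2 frame.
type frameHeader struct {
    Length   uint32 // 24-bit payload length
    Type     uint8  // 0x0 DATA, 0x1 HEADERS, 0x2 PRIORITY, ...
    Flags    uint8  // frame-specific flags, e.g. END_HEADERS
    StreamID uint32 // 31-bit stream identifier; 0 is the connection itself
}

// encode serializes the header into its wire form.
func (h frameHeader) encode() []byte {
    buf := make([]byte, 9)
    buf[0] = byte(h.Length >> 16)
    buf[1] = byte(h.Length >> 8)
    buf[2] = byte(h.Length)
    buf[3] = h.Type
    buf[4] = h.Flags
    binary.BigEndian.PutUint32(buf[5:], h.StreamID&0x7fffffff) // clear the reserved bit
    return buf
}

func main() {
    // A HEADERS frame carrying 64 bytes of compressed headers on stream 1.
    h := frameHeader{Length: 64, Type: 0x1, Flags: 0x4, StreamID: 1}
    fmt.Printf("% x\n", h.encode())
}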

Multiplexing

As mentioned above, HTTP/1.1 clients open multiple TCP connections to issue parallel requests. Even so, HTTP/1.1 requests carry a lot of overhead, and performance suffers when a large number of resources must be fetched.

The new binary framing layer removes these limitations and enables full request and response multiplexing: client and server break an HTTP message into independent frames, interleave them on a single connection, and reassemble them at the other end. This improves connection utilization and speeds up page loads across the web.
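Here is a minimal sketch of what that buys a client, assuming the hypothetical paths below exist on an HTTP/2-enabled server: Go’s transport multiplexes the three concurrent requests over one TCP connection instead of opening one connection per request.

package main

import (
    "fmt"
    "net/http"
    "sync"
)

func main() {
    paths := []string{"/a.css", "/b.js", "/c.png"} // hypothetical resources

    var wg sync.WaitGroup
    for _, p := range paths {
        wg.Add(1)
        go func(path string) {
            defer wg.Done()
            // Over HTTP/2 these requests are interleaved as frames on a
            // single connection; over HTTP/1.1 the client would need
            // extra connections to achieve the same parallelism.
            resp, err := http.Get("https://example.com" + path)
            if err != nil {
                fmt.Println(path, err)
                return
            }
            resp.Body.Close()
            fmt.Println(path, resp.Proto, resp.Status)
        }(p)
    }
    wg.Wait()
}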

Stream Prioritization

Once a message can be split into many frames multiplexed on one connection, the order in which those frames are delivered starts to matter. HTTP/2 therefore allows each stream to be given a weight and a dependency on another stream. The server can use this data to prioritize stream processing, allocating CPU, memory and other resources accordingly, and, once response data is available, allocate bandwidth so that high-priority responses reach the client first.
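In practice the browser and server handle this for you, but for the curious, here is a low-level sketch using the golang.org/x/net/http2 framer to write the PRIORITY frame such a hint travels in; the stream numbers and weight are made up for illustration.

package main

import (
    "bytes"
    "fmt"
    "log"

    "golang.org/x/net/http2"
)

func main() {
    var buf bytes.Buffer
    framer := http2.NewFramer(&buf, nil) // write-only framer for the demo

    // Declare that stream 5 depends on stream 3 with weight 220
    // (weights are zero-indexed on the wire, so this means 221 of 256).
    err := framer.WritePriority(5, http2.PriorityParam{
        StreamDep: 3,
        Weight:    220,
        Exclusive: false,
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("PRIORITY frame: % x\n", buf.Bytes())
}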

One Connection Per Origin

With the new binary framing layer, HTTP/2 no longer needs multiple TCP connections to multiplex streams in parallel. Each stream is split into many frames that can be interleaved and prioritized on the same connection, so only one connection per origin is required. This brings significant benefits in connection overhead and overall performance.

Header Compression

Headers describe the resources being transferred across the World Wide Web, and HTTP/1.1 sends them as plain text, adding anywhere from hundreds of bytes to several kilobytes of overhead per transfer. Early versions of SPDY compressed this metadata with a single GZIP context in each direction, but that approach had to be abandoned after attacks such as CRIME showed it could leak sensitive data. HTTP/2 instead compresses request and response header metadata with HPACK, a new, header-specific compression scheme that is both safe and highly effective.
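As a rough illustration of how compact HPACK is, here is a sketch using the golang.org/x/net/http2/hpack package to encode a typical request header block and decode it again; the header values are made up.

package main

import (
    "bytes"
    "fmt"
    "log"

    "golang.org/x/net/http2/hpack"
)

func main() {
    var buf bytes.Buffer
    enc := hpack.NewEncoder(&buf)

    // Header names are lowercase in HTTP/2; common fields shrink to a
    // few bytes thanks to the HPACK static and dynamic tables.
    fields := []hpack.HeaderField{
        {Name: ":method", Value: "GET"},
        {Name: ":path", Value: "/page"},
        {Name: ":authority", Value: "example.com"},
        {Name: "user-agent", Value: "demo-client/1.0"},
    }
    for _, f := range fields {
        if err := enc.WriteField(f); err != nil {
            log.Fatal(err)
        }
    }
    fmt.Printf("HPACK block is %d bytes: % x\n", buf.Len(), buf.Bytes())

    // Decode it again to show the round trip.
    dec := hpack.NewDecoder(4096, nil)
    decoded, err := dec.DecodeFull(buf.Bytes())
    if err != nil {
        log.Fatal(err)
    }
    for _, f := range decoded {
        fmt.Println(f.Name, "=", f.Value)
    }
}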

Server Push

One of the most interesting new features of HTTP/2 is the server’s ability to send multiple responses for a single client request. The server can push extra resources to the client without waiting for the client to request each of them, which eliminates a round trip of latency for resources the server already knows the page will need.
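Here is a minimal sketch of server push using Go’s standard library, which exposes push through the http.Pusher interface on HTTP/2 connections; the page, stylesheet and certificate paths are placeholders.

package main

import (
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/style.css", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/css")
        w.Write([]byte("body { font-family: sans-serif; }"))
    })

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // http.Pusher is only available when the client connected over HTTP/2.
        if pusher, ok := w.(http.Pusher); ok {
            // Push the stylesheet the page is about to reference so the
            // browser does not have to discover and request it itself.
            if err := pusher.Push("/style.css", nil); err != nil {
                log.Println("push failed:", err)
            }
        }
        w.Header().Set("Content-Type", "text/html")
        w.Write([]byte(`<html><head><link rel="stylesheet" href="/style.css"></head><body>hello</body></html>`))
    })

    // Serving over TLS enables HTTP/2 (and therefore push) automatically.
    log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}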

Difference between HTTP/1.1 and HTTP/2

There are several significant differences between the two versions of HTTP. They may not be obvious at first, but once you work at the level of raw TCP sockets the differences become clear, especially where performance is concerned.

Below are some of the noticeable differences between the two versions.

  • HTTP/2 leaves most of HTTP/1.1’s high-level syntax unchanged (methods, status codes, header fields and URIs are the same)
  • HTTP/2 is binary, instead of textual
  • HTTP/2 is fully multiplexed, instead of ordered and blocking
  • HTTP/2 allows servers to “push” responses into client caches
  • HTTP/2 uses header compression to reduce overhead
  • HTTP/2 can use one connection per origin for parallelism

HTTP/2 is Better

The main goal of HTTP/2 is to reduce latency by enabling full request and response multiplexing, and it delivers on that goal: web pages load noticeably faster.


Here’s a live demo published by the Akamai team that shows the impact of downloading the many small tiles that make up the Akamai Spinning Globe. In the run shown, the HTTP/1.1 version reports 271 ms of latency and a 20.80 s load time, while the HTTP/2 version reports 0 ms of latency and a 6.67 s load time. Quite a difference!

Testing the HTTP Upgrade to Switch to HTTP/2

Over an unencrypted connection, a client can attempt to establish HTTP/2 using the HTTP Upgrade mechanism, known as h2c, though support depends on the client. The client sends a normal HTTP/1.1 request carrying Upgrade and HTTP2-Settings headers. If the server does not support HTTP/2, it simply answers with an ordinary HTTP/1.1 response; if it does, it replies with 101 Switching Protocols and the connection continues as HTTP/2. Keep in mind that intermediaries on unencrypted connections can interfere with the upgrade, so the attempt may fail and the client may fall back to HTTP/1.1 or retry over a TLS connection instead; major browsers only support HTTP/2 over TLS in any case.

Consider the standard HTTP request below, which carries the Upgrade headers:
GET /page HTTP/1.1
Host: example.com:8000
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: (SETTINGS payload)

A server that does not support HTTP/2 responds as if nothing special was requested:

HTTP/1.1 200 OK
Content-length: 250
Content-type: text/html

(… HTTP/1.1 response …)

A server that does support HTTP/2 switches protocols instead:

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c

(… HTTP/2 response …)
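To experiment with this upgrade path yourself, here is a minimal sketch of a cleartext HTTP/2 (h2c) server in Go using the golang.org/x/net/http2/h2c helper; the port and path match the example above but are otherwise arbitrary.

package main

import (
    "fmt"
    "log"
    "net/http"

    "golang.org/x/net/http2"
    "golang.org/x/net/http2/h2c"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/page", func(w http.ResponseWriter, r *http.Request) {
        // r.Proto shows whether the upgrade to HTTP/2 actually happened.
        fmt.Fprintf(w, "served over %s\n", r.Proto)
    })

    // h2c.NewHandler accepts the "Upgrade: h2c" handshake as well as
    // direct HTTP/2 with prior knowledge, while plain HTTP/1.1 clients
    // keep working unchanged.
    handler := h2c.NewHandler(mux, &http2.Server{})

    log.Fatal(http.ListenAndServe(":8000", handler))
}

A recent curl can exercise the same flow with curl --http2 http://localhost:8000/page, which sends the kind of Upgrade request shown above.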

Switching to HTTP/2

Although HTTP/2 is very promising, and many companies may already be using it without realizing it, there is no fixed switchover date, and the transition cannot happen overnight. HTTP/1.1 will remain in use for years to come. Millions of servers must be updated to the new binary framing, and the networking libraries, proxies and browsers that sit between them must be updated as well, all of which takes time and money.

According to Wikipedia’s HTTP/2 development milestones, the IESG approved HTTP/2 for publication as a Proposed Standard on February 17, 2015, and it was published as RFC 7540 on May 14, 2015, but there has been no official announcement of a date for switching over.


What we do know is that HTTP/2 works in all modern browsers, and that browsers and servers have committed to supporting it alongside HTTP/1.1, negotiating the version per connection so the large base of existing users is not disrupted.
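That negotiation is largely invisible to application code. As a closing sketch, a Go server on a TLS listener serves HTTP/2 to clients that can speak it and HTTP/1.1 to everyone else on the same port; the certificate paths are placeholders.

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Each client sees whichever version it negotiated via ALPN:
        // "HTTP/2.0" for modern browsers, "HTTP/1.1" for older clients.
        fmt.Fprintf(w, "you are speaking %s\n", r.Proto)
    })

    // Go's standard library enables HTTP/2 automatically on TLS listeners
    // while continuing to serve HTTP/1.1 on the same port.
    log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}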
