Everything you need to know about HTTP/2

HTTP allows a browser to connect to a Web server and load pages, and HTTP/2 does the same. Mark Nottingham, chairman of the IETF working group behind the standard, announced on his blog that the HTTP/2 specification has been formally approved. From here, the spec goes through the RFC editorial process and is then published, so HTTP/2 is out for implementation (see the list of HTTP/2 implementations). The HTTP standard is getting an overhaul, and while faster Web pages are a big win for the first major revision since 1999, better encryption may have a more lasting impact. HTTP/2 promises faster loading. Reading the HTTP/2 draft, I noticed that the standard is based on SPDY, which was introduced by Google and adopted by other browsers (such as Chrome and Firefox). This post explains the essential details you need to know about HTTP/2. So let's begin.

What is HTTP/2?
HTTP/2 is a replacement for how HTTP is expressed "on the wire". It is not a ground-up rewrite of the protocol: HTTP methods, status codes and semantics are the same, and it should be possible to use the same APIs as HTTP/1.x (possibly with some small additions) to represent the protocol. The focus of the protocol is on performance; specifically, end-user perceived latency and network and server resource usage. One major goal is to allow browsers to use a single connection to a Web site. The basis of the work was SPDY, but HTTP/2 has evolved to take the community's input into account, incorporating several improvements in the process.

HTTP/2 provides an optimized transport for HTTP semantics. It supports all of the core features of HTTP/1.1, but aims to be more efficient in several ways. The basic protocol unit in HTTP/2 is a frame, and each frame type serves a different purpose. For example, HEADERS and DATA frames form the basis of HTTP requests and responses; other frame types like SETTINGS, WINDOW_UPDATE, and PUSH_PROMISE support other HTTP/2 features. Multiplexing of requests is achieved by associating each HTTP request-response exchange with its own stream. Streams are largely independent of each other, so a blocked or stalled request or response does not prevent progress on other streams.

HTTP/2 also adds a new interaction mode, whereby a server can push responses to a client. Server push allows a server to speculatively send data that it anticipates the client will need, trading off some network usage against a potential latency gain. The server does this by synthesizing a request, which it sends as a PUSH_PROMISE frame; it is then able to send the response to that synthetic request on a separate stream. Finally, because HTTP header fields used in a connection can contain large amounts of redundant data, frames that contain them are compressed. This is especially advantageous for request sizes in the common case, allowing many requests to be compressed into one packet.
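To make the framing concrete, here is a minimal Python sketch (not a real HTTP/2 implementation) that serializes the fixed 9-octet frame header described above. The frame-type codes are the ones assigned by the spec; the helper function and the example values are purely illustrative.

```python
# Frame type codes assigned by the HTTP/2 specification.
DATA, HEADERS, SETTINGS, PUSH_PROMISE, WINDOW_UPDATE = 0x0, 0x1, 0x4, 0x5, 0x8

def build_frame(frame_type, flags, stream_id, payload=b""):
    """Serialize one HTTP/2 frame: the fixed 9-octet header plus the payload.

    Header layout: 24-bit payload length, 8-bit type, 8-bit flags,
    then one reserved bit (always zero here) and a 31-bit stream identifier.
    """
    header = len(payload).to_bytes(3, "big")                      # 24-bit length
    header += bytes([frame_type, flags])                          # type, flags
    header += (stream_id & 0x7FFFFFFF).to_bytes(4, "big")         # R=0 + stream id
    return header + payload

# A DATA frame carrying "hello" on stream 1, with the END_STREAM flag (0x1) set.
frame = build_frame(DATA, 0x1, 1, b"hello")
assert frame == b"\x00\x00\x05\x00\x01\x00\x00\x00\x01hello"
```

Every frame on a connection, whatever its type, starts with this same small header, which is part of why a binary framing layer is cheap to parse.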

Who made HTTP/2?
HTTP/2 was developed by the IETF's HTTP Working Group, which maintains the HTTP protocol. It is made up of a number of HTTP implementers, users, network operators and HTTP experts. A large number of people have contributed to the effort, but the most active participants include engineers from "big" projects like Firefox, Chrome, Twitter, Microsoft's HTTP stack, curl and Akamai, as well as a number of HTTP implementers in languages like Python, Ruby and Node.js.

What’s the relationship with SPDY?
HTTP/2 was first discussed when it became apparent that SPDY was gaining traction with implementers (like Mozilla and nginx) and was showing significant improvements over HTTP/1.x. After a call for proposals and a selection process, SPDY/2 was chosen as the basis for HTTP/2. Since then, there have been a number of changes, based on discussion in the Working Group and feedback from implementers. Throughout the process, the core developers of SPDY, including both Mike Belshe and Roberto Peon, have been involved in the development of HTTP/2. In February 2015, Google announced its plans to remove support for SPDY in favor of HTTP/2.

Is it HTTP/2.0 or HTTP/2?
The HTTP Working Group decided to drop the minor version (“.0”) because it has caused a lot of confusion in HTTP/1.x. In other words, the HTTP version only indicates wire compatibility, not feature sets or “marketing.”

What are the key differences to HTTP/1.x?
At a high level, HTTP/2:

- is binary, instead of textual
- is fully multiplexed, instead of ordered and blocking
- can therefore use one connection for parallelism
- uses header compression to reduce overhead
- allows servers to "push" responses proactively into client caches

Why is HTTP/2 binary?
Binary protocols are more efficient to parse, more compact "on the wire", and most importantly, much less error-prone than textual protocols like HTTP/1.x, because textual protocols need a number of affordances to "help" with things like whitespace handling, capitalization, line endings, blank lines and so on. For example, HTTP/1.1 defines four different ways to parse a message; in HTTP/2, there's just one code path. Note that HTTP/2 isn't usable through telnet.

Why is HTTP/2 multiplexed?
HTTP/1.x has a problem called "head-of-line blocking," where effectively only one request can be outstanding on a connection at a time. HTTP/1.1 tried to fix this with pipelining, but it didn't completely address the problem (a large or slow response can still block others behind it). Additionally, pipelining has proven very difficult to deploy, because many intermediaries and servers don't process it correctly. This forces clients to use a number of heuristics (often guessing) to decide which requests to put on which connection to the origin, and when; since it's common for a page to load 10 times (or more) the number of available connections, this can severely impact performance, often resulting in a "waterfall" of blocked requests. Multiplexing addresses these problems by allowing multiple request and response messages to be in flight at the same time; it's even possible to intermingle parts of one message with another on the wire. This, in turn, allows a client to use just one connection per origin to load a page.

Why just one TCP connection?
With HTTP/1, browsers open between four and eight connections per origin. Since many sites use multiple origins, this could mean that a single page load opens more than thirty connections. One application opening so many connections simultaneously breaks a lot of the assumptions that TCP was built upon; since each connection will start a flood of data in the response, there’s a real risk that buffers in the intervening network will overflow, causing a congestion event and retransmits. Additionally, using so many connections unfairly monopolizes network resources, “stealing” them from other, better-behaved applications (e.g., VoIP).

What’s the benefit of Server Push?
When a browser requests a page, the server sends the HTML in the response, and then needs to wait for the browser to parse the HTML and issue requests for all of the embedded assets before it can start sending the JavaScript, images and CSS. Server Push allows the server to avoid this round trip of delay by "pushing" the responses it thinks the client will need into its cache.

Why do we need header compression?
According to Patrick McManus from Mozilla, if we assume that a page has about 80 assets (which is conservative on today's Web), and each request has 1400 bytes of headers (again, not uncommon, thanks to Cookies, Referer, etc.), it takes at least 7-8 round trips to get the headers out "on the wire." That's not counting response time - that's just to get them out of the client. This is because of TCP's Slow Start mechanism, which paces packets out on new connections based on how many packets have been acknowledged - effectively limiting the number of packets that can be sent for the first few round trips. In comparison, even mild compression on headers allows those requests to get onto the wire within one round trip - perhaps even one packet. This overhead is considerable, especially when you consider the impact upon mobile clients, which typically see round-trip latency of several hundred milliseconds, even under good conditions.

Does HTTP/2 require encryption?
No. After extensive discussion, the Working Group did not have consensus to require the use of encryption (e.g., TLS) for the new protocol. However, some implementations have stated that they will only support HTTP/2 when it is used over an encrypted connection.

What does HTTP/2 do to improve security?
HTTP/2 defines a profile of TLS that is required; this includes the version, a cipher-suite blacklist, and the extensions used. There is also discussion of additional mechanisms, such as using TLS for http:// URLs (so-called "opportunistic encryption").

Can I use HTTP/2 now?
HTTP/2 is currently available in Firefox and Chrome for testing, using the "h2-14" protocol identifier. There are also several servers available (including a test server from Akamai, as well as Google's and Twitter's main sites), and a number of open-source implementations that you can deploy and test. See the list here: https://github.com/http2/http2-spec/wiki/Implementations

Will HTTP/2 replace HTTP/1.x?
The goal of the Working Group is that typical uses of HTTP/1.x can use HTTP/2 and see some benefit. They do not want to force the world to use HTTP/2, so HTTP/1.x is likely to remain in use for quite some time.

Will there be an HTTP/3?
According to the HTTP Working Group, if the negotiation mechanism introduced by HTTP/2 works well, it should be possible to support new versions of HTTP much more easily in the future.

What is an HTTP/2 frame?
A frame is the smallest unit of communication within an HTTP/2 connection, consisting of a header and a variable-length sequence of octets structured according to the frame type.
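Since every frame starts with the same fixed-size header, decoding it is a few lines of code. A minimal Python sketch (the function name and example bytes are illustrative, not part of any standard library):

```python
def parse_frame_header(header):
    """Decode the fixed 9-octet HTTP/2 frame header.

    Returns (payload_length, frame_type, flags, stream_id). The stream id's
    reserved high bit is masked off, as receivers are required to ignore it.
    """
    if len(header) != 9:
        raise ValueError("frame header is exactly 9 octets")
    length = int.from_bytes(header[0:3], "big")       # 24-bit payload length
    frame_type = header[3]                            # 8-bit type code
    flags = header[4]                                 # 8-bit flags
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF
    return length, frame_type, flags, stream_id

# A DATA frame header: 5-byte payload, END_STREAM flag (0x1), stream 1.
assert parse_frame_header(b"\x00\x00\x05\x00\x01\x00\x00\x00\x01") == (5, 0x0, 0x1, 1)
```

The payload that follows the header is then interpreted according to the type code, e.g. header block fragments for HEADERS, key/value pairs for SETTINGS.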

Where can I learn more about HTTP/2?
Following are some important links where you can find more details about HTTP/2:

