I tweeted this the other day, and the internet was not pleased:
— Troy Hunt (@troyhunt) July 8, 2016
In fact, a bunch of the internet was pretty upset. “It’s not fair!”, they cried. “You’re comparing apples and oranges!”, they raged.
No, it’s not fair; the internet is not fair. But that’s just how the web is today, and whilst you might not like it, that’s the ballgame we’re playing. When it comes to performance tests, I don’t care about “fair”, I only care about one thing:
Let’s take just a moment to put how fast into context. Here’s the test from the tweet above over HTTP:
The content kinda staggers in bit by bit as we’ve become accustomed to on the web these days. But now let’s run the HTTPS test:
Whoa! This is awesome! Job done, HTTPS is fast and HTTP is crap, nothing more to see here.
Well, almost, let’s address the “It’s not fair” whingers. The HTTPS test is faster because it uses HTTP/2 whilst the HTTP test only uses HTTP/1.1. The naysayers are upset because they think the test should be comparing both the secure and insecure scheme across the same version of the protocol. Now we could do that with the old protocol, but here’s the problem with doing it across the newer protocol:
Hey, look at that, every current browser that supports HTTP/2 has got a little “2” annotation on it. Accordingly, that means this:
Only supports HTTP2 over TLS (https)
So in other words, if you wanna go fast you can only do it over the secure protocol, not the one that sends everything in the clear, because no browser supports HTTP/2 without it. HTTP/2 is able to do this courtesy of multiplexing: a bunch of requests are sent asynchronously as binary streams across the one TCP connection. What that means is the difference between this way of loading images in the old version of the protocol:
And this way in the new version:
Get it? The old one is the very classic “waterfall” of requests occurring with minimal asynchronicity whilst the new one is more of a “cascade” of requests all happening at the same time. That’s why you see a bunch of the images appearing in large batches in the animation earlier on as opposed to staggering in per the insecure protocol.
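To put rough numbers on why the cascade beats the waterfall, here’s a back-of-the-envelope sketch (my own simplified model, not part of the original test): assume each image costs one round trip, that an HTTP/1.1 browser opens around six parallel connections per host, and that HTTP/2 multiplexes everything onto one connection.

```python
# Hypothetical model of the 360-image test page: requests queue behind
# each other on HTTP/1.1's limited connection pool, but all go out at
# once when multiplexed over HTTP/2. Not a real benchmark.

RTT = 0.05        # assumed round trip time in seconds
N_IMAGES = 360    # images on the test page from the tweet

def http1_rounds(n, connections=6):
    """Each connection serves one request per round trip, so n requests
    need ceil(n / connections) sequential round trips."""
    return -(-n // connections)

def http2_rounds(n):
    """All requests are multiplexed onto one connection, so (ignoring
    bandwidth and flow control) they complete in roughly one round trip."""
    return 1

h1 = http1_rounds(N_IMAGES) * RTT
h2 = http2_rounds(N_IMAGES) * RTT
print(f"HTTP/1.1 (6 connections): ~{h1:.1f}s of round trips")  # ~3.0s
print(f"HTTP/2 (multiplexed):     ~{h2:.2f}s of round trips")  # ~0.05s
```

Real page loads are messier (bandwidth, prioritisation, server think time), but the shape of the result matches the animations: dozens of sequential round trips versus roughly one.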
Now the naysayers will lament that the test is unrealistic because you’ve got 360 little images all loading on the one page. But it doesn’t really matter because you could do it with 36 and the multiplexing is still going to make it way faster. Or perhaps more realistically, a couple of megs’ worth of chunky images, CSS, JS and all the other crap so many websites load today. They all get the perf benefits that HTTP/2 offers, and some of them may well show even greater differences than observed in this test; those 360 little images only add up to 0.62MB whereas the average web page is now 2.3MB. You’re also looking at somewhere in the order of 100 requests, so the comparison tests above may even be erring on the conservative side.
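Just to sanity-check that “conservative side” claim using the figures quoted above (the 2.3MB and ~100-request numbers are the cited averages, not my own measurements):

```python
# Compare the test page's payload with an average web page.
test_page_mb = 0.62   # 360 little images on the test page
avg_page_mb = 2.3     # cited average page weight
avg_requests = 100    # cited average request count

per_image_kb = test_page_mb * 1024 / 360
per_request_kb = avg_page_mb * 1024 / avg_requests

print(f"Test page: ~{per_image_kb:.1f}KB per image")
print(f"Average page: ~{per_request_kb:.1f}KB per request")
print(f"An average page ships ~{avg_page_mb / test_page_mb:.1f}x the bytes")
```

So a typical page moves well over three times the data of the test page, across bigger individual responses, which is why the multiplexing win should generally be at least as pronounced in the wild.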
Of course the web server also has to support HTTP/2, so that means you can’t get it on IIS yet (we’ll see it soon when Windows Server 2016 ships with IIS 10) unless you wrap CloudFlare around it (like this blog), which can serve its cached content over the newer version of the protocol. CloudFlare also has a little speed comparison test on that page with both protocol versions served over HTTPS:
The neat thing about this approach is that even if the origin website (the one CloudFlare is serving traffic from) doesn’t support HTTP/2 (and Ghost Pro which this blog is on does support it), you can still get super-fast HTTP/2 speeds. Here’s how to see it in action first hand: I have a website running on cloudflareonazure.com which I use in my Getting Started with CloudFlare Security Pluralsight course (don’t worry that the site has mixed content, that’s both intentional and not the point). This site is an Azure website which presently only runs on HTTP/1.1. Now let’s drop into the Chrome dev tools, over to the network tab then right-click on a column heading to turn on the protocol column:
I also turned on the domain column so that I could clearly show you this:
Here we have requests going to a domain hosted on Azure which can’t talk HTTP/2, yet the protocol being reported is “h2”, which is the identifier for HTTP/2 over TLS. We see this because all requests are routed through CloudFlare which can talk h2. Now of course if CloudFlare needs to pull content from an origin that doesn’t talk h2 then there’ll still be a bottleneck in that connection, but many requests won’t come from the origin anyway. Two thirds of the traffic on this blog is served directly from CloudFlare’s cache, so that can come down over h2 and make a significant difference to speeds even when the origin is stuck on HTTP/1.1.
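If you’d rather check this from code than from the dev tools, here’s a sketch using only Python’s standard library (the host names are my own examples, not from the post): the client advertises the protocols it speaks via TLS ALPN, and an HTTP/2-capable edge like CloudFlare’s answers “h2”.

```python
import socket
import ssl

def negotiated_protocol(host, port=443, timeout=5):
    """Return the ALPN protocol the server picked ('h2' or 'http/1.1'),
    or None if the connection fails or the server doesn't do ALPN."""
    ctx = ssl.create_default_context()
    # Tell the server which protocols we're willing to speak.
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.selected_alpn_protocol()
    except OSError:
        return None

if __name__ == "__main__":
    for host in ("www.cloudflare.com", "example.com"):
        print(host, "->", negotiated_protocol(host))
```

A site fronted by CloudFlare will typically report h2 here even when its origin only speaks HTTP/1.1, which is exactly what the dev tools screenshot shows.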
And lastly, for those who really, really want to live under the illusion that the web is “fair” and a head-to-head match of HTTP and HTTPS over 1.1 would yield a fundamentally different result in favour of going insecure, have a read of Is TLS Fast Yet. Even over the outgoing version of the protocol, the “encryption is slow” argument has gone the way of the marquee tag and remains an artefact held onto only by those living in the past. Actually, bugger it, if you really want to test both schemes over HTTP/1.1 then issue the requests with a header that only accepts 1.1 and see how that goes:
I just gave that a run with Fiddler open (which doesn’t support HTTP/2 and thus strips support for it from the request) and HTTPS was still faster. Do it back to back a few times and the results will fluctuate (minor differences in connection quality and so on), but you won’t find a smoking gun pointing to how slow HTTPS is, even over HTTP/1.1.
This is all simply a test of “what’s the fastest we can go over HTTP versus what’s the fastest we can go over HTTPS”. I don’t want fair, I want fast. If you wanna go fast, serve content over HTTPS using HTTP/2.