http/2 – CSS-Tricks
https://css-tricks.com

Musings on HTTP/2 and Bundling
https://css-tricks.com/musings-on-http2-and-bundling/
Mon, 17 Jul 2017

HTTP/2 has been one of my areas of interest. In fact, I’ve written a few articles about it just in the last year. In one of those articles I made this unchecked assertion:

If the user is on HTTP/2: You’ll serve more and smaller assets. You’ll avoid stuff like image sprites, inlined CSS, and scripts, and concatenated style sheets and scripts.

I wasn’t the only one to say this, though; in all fairness to Rachel, she qualifies her assertion with caveats in her article. To be fair, it’s not bad advice in theory. HTTP/2’s multiplexing ability gives us leeway to avoid bundling without suffering the ill effects of head-of-line blocking (something we’re painfully familiar with in HTTP/1 environments). Unraveling some of these HTTP/1-specific optimizations can make development easier, too. In a time when web development seems more complicated than ever, who wouldn’t appreciate a little more simplicity?

As with anything that seems simple in theory, putting something into practice can be a messy affair. As time has progressed, I’ve received great feedback from thoughtful readers on this subject that has made me re-think my unchecked assertions on what practices make the most sense for HTTP/2 environments.

The case against bundling

The debate over unbundling assets for HTTP/2 centers primarily around caching. The premise is if you serve more (and smaller) assets instead of a giant bundle, caching efficiency for return users with primed caches will be better. Makes sense. If one small asset changes and the cache entry for it is invalidated, it will be downloaded again on the next visit. However, if only one tiny part of a bundle changes, the entire giant bundle has to be downloaded again. Not exactly optimal.

Why unbundling could be suboptimal

There are times when unraveling bundles makes sense. For instance, code splitting promotes smaller and more numerous assets that are loaded only for specific parts of a site/app. This makes perfect sense. Rather than loading your site’s entire JS bundle up front, you chunk it out into smaller pieces that you load on demand. This keeps the payloads of individual pages low. It also minimizes parsing time. This is good, because excessive parsing can make for a janky and unpleasant experience as a page paints and becomes interactive, but has not yet fully loaded.
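
To make that concrete, here is a minimal sketch of route-level code splitting with a dynamic import(); the module path and selectors are made up for illustration, and a bundler such as webpack will emit each import() target as its own chunk.

// Load the settings page's code only when the user actually navigates there.
// './pages/settings.js' and '#app' are hypothetical names for this sketch.
async function showSettingsPage() {
  const { renderSettings } = await import('./pages/settings.js');
  renderSettings(document.querySelector('#app'));
}

const settingsLink = document.querySelector('a[href="/settings"]');
if (settingsLink) {
  settingsLink.addEventListener('click', (event) => {
    event.preventDefault();
    showSettingsPage();
  });
}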

But there’s a drawback to this we sometimes miss when we split assets too finely: Compression ratios. Generally speaking, smaller assets don’t compress as well as larger ones. In fact, if some assets are too small, some server configurations will avoid compressing them altogether, as there are no practical gains to be made. Let’s look at how well some popular JavaScript libraries compress:

Filename Uncompressed Size Gzip (Ratio %) Brotli (Ratio %)
jquery-ui-1.12.1.min.js 247.72 KB 66.47 KB (26.83%) 55.8 KB (22.53%)
angular-1.6.4.min.js 163.21 KB 57.13 KB (35%) 49.99 KB (30.63%)
react-0.14.3.min.js 118.44 KB 30.62 KB (25.85%) 25.1 KB (21.19%)
jquery-3.2.1.min.js 84.63 KB 29.49 KB (34.85%) 26.63 KB (31.45%)
vue-2.3.3.min.js 77.16 KB 28.18 KB (36.52%)
zepto-1.2.0.min.js 25.77 KB 9.57 KB (37.14%)
preact-8.1.0.min.js 7.92 KB 3.31 KB (41.79%) 3.01 KB (38.01%)
rlite-2.0.1.min.js 1.07 KB 0.59 KB (55.14%) 0.5 KB (46.73%)

Sure, this comparison table is overkill, but it illustrates a key point: Large files, as a rule of thumb, tend to yield higher compression ratios than smaller ones. When you split a large bundle into teeny tiny chunks, you won’t get as much benefit from compression.
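
If you want to sanity-check this with your own bundles, a rough comparison is easy to run with Node’s built-in zlib module. This is only an illustrative sketch, not the methodology behind the table above, and the chunk file names are placeholders.

// Compare the gzipped size of one concatenated bundle vs. the sum of its chunks.
// The file names are placeholders for your own build output.
const fs = require('fs');
const zlib = require('zlib');

const files = ['chunk-a.js', 'chunk-b.js', 'chunk-c.js'];
const chunks = files.map((file) => fs.readFileSync(file));

const bundledSize = zlib.gzipSync(Buffer.concat(chunks)).length;
const chunkedSize = chunks.reduce((total, chunk) => total + zlib.gzipSync(chunk).length, 0);

console.log(`one bundle, gzipped:      ${bundledSize} bytes`);
console.log(`separate chunks, gzipped: ${chunkedSize} bytes`); // usually the larger number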

Of course, there’s more to performance than asset size. In the case of JavaScript, we may want to tip our hand toward smaller page/template-specific files because the initial load of a specific page will be more streamlined with regard to both file size and parse time. Even if those smaller assets don’t compress as well individually. Personally, that would be my inclination if I were building an app. On traditional, synchronous “site”-like experiences, I’m not as inclined to pursue code-splitting.

Yet, there’s more to consider than JavaScript. Take SVG sprites, for example. Where these assets are concerned, bundling appears more sensible. Especially for large sprite sets. I performed a basic test on a very large icon set of 223 icons. In one test, I served a sprited version of the icon set. In the other, I served each icon as individual assets. In the test with the SVG sprite, the total size of the icon set represents just under 10 KB of compressed data. In the test with the unbundled assets, the total size of the same icon set was 115 KB of compressed data. Even with multiplexing, there’s simply no way 115 KB can be served faster than 10 KB on any given connection. The compression doesn’t go far enough on the individualized icons to make up the difference. Technical aside: The SVG images were optimized by SVGO in both tests.

Side note: One astute commenter has pointed out that Firefox dev tools show that in the unsprited test, approximately 38 KB of data was transferred. That could affect how you optimize. Just something to keep in mind.

Browsers that don’t support HTTP/2

Yep, this is a thing. Opera Mini in particular seems to be a holdout in this regard, and depending on your users, this may not be an audience segment to ignore. While around 80% of people globally surf with browsers that can support HTTP/2, that number declines in some corners of the world. Shy of 50% of all users in India, for example, use a browser that can communicate with HTTP/2 servers (according to caniuse, anyway). This is at least the picture for now, and support is trending upward, but we’re a long way from ubiquitous support for the protocol in browsers.

What happens when a user talks to an HTTP/2 server with a browser that doesn’t support it? The server falls back to HTTP/1. This means you’re back to the old paradigms of performance optimization. So again, do your homework. Check your analytics and see where your users are coming from. Better yet, leverage caniuse.com’s ability to analyze your analytics and see what your audience supports.
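
If you’d rather measure your own audience directly, one option is to record the protocol each visitor actually negotiated and send it to whatever analytics you already run. In the sketch below, reportToAnalytics() is a placeholder for your own analytics call, not a real API.

// Report which protocol the page was served over ("h2", "http/1.1", and so on).
// reportToAnalytics() is a stand-in for your own analytics call.
var navigationEntry = performance.getEntriesByType('navigation')[0];

if (navigationEntry && navigationEntry.nextHopProtocol) {
  reportToAnalytics('negotiated-protocol', navigationEntry.nextHopProtocol);
}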

The reality check

Would any sane developer architect their front end code to load 223 separate SVG images? I hope not, but nothing really surprises me anymore. In all but the most complex and feature-rich applications, you’d be hard-pressed to find so much iconography. But it could make more sense for you to coalesce those icons into a sprite, load it up front, and reap the benefits of faster rendering on subsequent page navigations.

Which leads me to the inevitable conclusion: In the nooks and crannies of the web performance discipline there are no simple answers, except “do your research”. Rely on analytics to decide if bundling is a good idea for your HTTP/2-driven site. Do you have a lot of users that only go to one or two pages and leave? Maybe don’t waste your time bundling stuff. Do your users navigate deeply throughout your site and spend significant time there? Maybe bundle.

This much is clear to me: If you move your HTTP/1-optimized site to an HTTP/2 host and change nothing in your client-side architecture, it’s not going to be a big deal. So don’t trust blanket statements from some web developer writing blog posts (i.e., me). Figure out how your users behave, what optimizations make the best sense for your situation, and adjust your code accordingly. Good luck!


Cover of Web Performance in Action

Jeremy Wagner is the author of Web Performance in Action, an upcoming title from Manning Publications. Use coupon code sswagner to save 42%.

Check him out on Twitter: @malchata


ES6 modules support lands in browsers: is it time to rethink bundling?
https://css-tricks.com/es6-modules-support-lands-browsers-time-rethink-bundling/
Sat, 15 Apr 2017

Modules, as in, this kind of syntax right in JavaScript:

import { myCounter, someOtherThing } from 'utilities';

Which we’d normally use Webpack to bundle, but which is now supported natively in Safari Technology Preview, Firefox Nightly (behind a flag), and Edge.

It’s designed to support progressive enhancement, as you can safely link to a bundled version and a non-bundled version without having browsers download both.

Stefan Judis shows:

<!-- in case ES6 modules are supported -->
<script src="app/index.js" type="module"></script>
<!-- in case ES6 modules aren't supported -->
<script src="dist/bundle.js" defer nomodule></script>

Not bundling means simpler build processes, which is great, but forgoing all the other cool stuff a tool like Webpack can do, like “tree shaking”. Also, all those imports are individual HTTP requests, which may not be as big of a deal in HTTP/2, but still isn’t great:

Khan Academy discovered the same thing a while ago when experimenting with HTTP/2. The idea of shipping smaller files is great to guarantee perfect cache hit ratios, but at the end, it’s always a tradeoff and it’s depending on several factors. For a large code base splitting the code into several chunks (a vendor and an app bundle) makes sense, but shipping thousands of tiny files that can’t be compressed properly is not the right approach.

Preprocessing build steps are likely here to stay. Native tech can learn from them, but we might as well leverage what both are good at.

HTTP/2 – A Real-World Performance Test and Analysis
https://css-tricks.com/http2-real-world-performance-test-analysis/
Tue, 21 Feb 2017

Perhaps you’ve heard of HTTP/2? It’s not just an idea, it’s a real technology and slowly but surely, hosting companies and CDN services have been releasing it to their servers. Much has been said about the benefits of using HTTP/2 instead of HTTP1.x, but the proof of the pudding is in the eating.

Today we’re going to run a few real-world tests, take some timings, and see what results we can extract out of all this.

Why HTTP/2?

If you haven’t read about HTTP/2, may I suggest you have a look at a few articles. There’s the HTTP/2 FAQ, which gives you all the nitty-gritty technical details, whilst I’ve also written a few articles about HTTP/2 myself where I try to tone down the tech and focus mostly on the why and the how of HTTP/2.

In a nutshell, HTTP/2 has been released to address the inherent problems of HTTP1.x:

  1. HTTP/2 is binary instead of textual like HTTP1.x – this makes the transfer and parsing of data over HTTP/2 inherently more machine-friendly, thus faster, more efficient and less error prone.
  2. HTTP/2 is fully multiplexed, allowing multiple files and requests to be transferred at the same time, as opposed to HTTP1.x, which only accepts a single request per connection at a time.
  3. HTTP/2 uses the same connection for transferring different files and requests, avoiding the heavy operation of opening a new connection for every file which needs to be transferred between a client and a server.
  4. HTTP/2 has header compression built-in which is another way of removing several of the overheads associated with HTTP1.x having to retrieve several different resources from the same or multiple web servers.
  5. HTTP/2 allows servers to push required resources proactively rather than waiting for the client browser to request files when it thinks it needs them.

These points are a simplified depiction of how HTTP/2 improves on HTTP1.x. Rather than the browser having to go back to the server to fetch every single resource, it picks up all the resources and transfers them at once.

A semi-scientific test of HTTP/2 performance

Theory is great, but it’s more convincing if we can see some real data and real performance improvements of HTTP/2 over HTTP1.x. We’re going to run a few tests to determine whether we see a marked improvement in performance.

Why are we calling this a semi-scientific test?

If this were a lab, or even a development environment where we wanted to demonstrate exact results, we’d be eliminating all variables and just test the performance of the same HTML content, one using HTTP1.x and one using HTTP/2.

Yet most of us don’t live in a development environment. Our web applications and sites operate in the real world, in environments where fluctuations occur for all sorts of valid reasons. So while lab testing is great and is definitely required, for this test we’re going out into the real world, running some tests on a (simulated) real website and comparing the performance.

We’re going to be using a default one-page Bootstrap template (Zebre) for several reasons:

  1. It’s a very real-world example of what a modern website looks like today
  2. It’s got quite a varied set of resources which are typical of sites today and which would typically go through a number of optimizations for performance under HTTP1.x circumstances
    • 25 images
    • 6 JS scripts
    • 7 CSS files
  3. It’s based on WordPress so we’ll be able to perform a number of HTTP1.x based optimizations to push its performance as far as it can go
  4. It was given out for free in January by ThemeForest. This was great timing: what better real-world test than using a premium theme by an elite author on ThemeForest?

We’ll be running these tests on a brand new account powered by Kinsta managed WordPress hosting, which we’ve discovered lately and whose performance we find really great. We do this because we want to avoid the stressed environments of shared hosting accounts. To reduce the external influence of other sites operating on the same account at the same time, this environment will be used solely for the purpose of this test.

We ran the tests on the lowest plan because we just need to test a single WordPress site. In reality, unlike most hosting services, there is no difference in speed/performance of the plans. The larger plans just have the capacity for more sites. We then set up one of the domains we hoard (iwantovisit.com) and installed WordPress on it.

We’ve also chosen to run these tests on WordPress.

The reason for doing that is for a bit of convenience rather than anything else. Doing all of these tests on manual HTML would require quite a lot of time to complete. We’d rather use that time to do more extensive and constructive tests.

Using WordPress, we can enable such plugins as:

  • A caching plugin (to remove generation time discrepancies as much as possible)
  • Combination and minification plugin to perform optimizations based on HTTP1.x
  • CDN plugin to easily integrate with a CDN whilst performing the HTTP/2 tests that involve a CDN

We set up the Zebre theme and installed several plugins. Once again, this makes the test very realistic. You’re hardly going to find any WordPress site without a bunch of plugins installed. We installed the following:

We also imported the Zebre theme demo data to have a nicely populated theme with plenty of images, making this site an ideal candidate for HTTP/2 testing.

The final thing we did was to make sure there was page caching in place. We just wanted to make sure we were not suffering from drastic fluctuations due to page generation times. The great thing is that with Kinsta there’s no need for any kind of caching plugin, as page caching is fully built into the service at the server level.

The final page looked a little like this:

That’s a Zebra!

And this is below the fold:

We’re ready for the first tests.

Test 1 – HTTP1 – caching but no other optimizations

Let’s start running some tests to make sure we have a good test bed and get some baseline results.

We’re running these tests with only WordPress caching – no other optimizations.

Testing Site Location Page Load time Total Page Size Requests
GTMetrix Vancouver 3.3s 7.3MB 82
Pingdom tools New York 1.25s 7.3MB 82

There’s clearly something fishy going on; the load times are much too different. Oh yes: Google Cloud Platform’s US Central servers are located in Iowa, making Pingdom tools’ New York test location much closer than Vancouver and skewing the results in favor of New York.

You probably know that if you want to improve the performance of your site, there is one very simple solution: host your site or application as physically close as possible to the location of your visitors. That’s the same concept CDNs use to boost performance. The closer the visitors to the server location of the site, the better the loading time.

For that reason, we’re going to run two types of tests. One is going to have a very close location between the hosting service and the test location. For the other, we’re going to choose to amplify the problem of distance. We’re thus going to perform a transatlantic trip with our testing, from the US to Europe, and see whether the HTTP/2 optimizations result in better performance or not.

Let’s try to find a similar testing location on both test services. Dallas, Texas is a common testing ground, so we’ll use that for the physically close location. For the second location, we’re going to use London and Stockholm, since there isn’t a shared European location.

Testing Site Location Page Load time Total Page Size Requests
Pingdom tools Dallas 2.15s 7.3MB 82

That’s better. Let’s run another couple of tests.

Testing Site Location Page Load time Total Page Size Requests
GTMetrix Dallas 1.6s 7.3MB 83
Pingdom tools Dallas 1.74s 7.3MB 82
GTMetrix London 2.6s 7.3MB 82
Pingdom tools Stockholm 2.4s 7.3MB 82

You might notice there are a few fluctuations in the requests. We believe these are coming from external scripts being called, which sometimes differ in the number of requests they generate. In fact, although the loading times seem to vary by about a second, by taking a look at the waterfall graph, we can see that the assets on the site are delivered pretty consistently. It’s the external assets (specifically: fonts) which fluctuate widely.

We can also clearly see how the distance affects the loading time significantly, by about a second.

Before we continue, you’ll also notice that our speed optimization score is miserable. That’s why for our second round of tests we’re going to perform a number of speed optimizations.

Test 2 – HTTP1 with performance optimizations and caching

Now, given that we know that HTTP1.x is very inefficient in the handling of requests, we’re going to do a round of performance optimizations.

We’re going to install HummingBird from WPMUDEV on the WordPress installation. This is a plugin which handles page load optimizations without caching. Exactly what we need.

We’ll be enabling most of the optimizations which focus on reducing requests and combining files as much as possible.

  • Minification of CSS and JS files
  • Combining of CSS and JS files
  • Enabling of GZIP compression
  • Enabling of browser caching

We’re not going to optimize the images because this would totally skew the results.

As you can see below, following our optimization, we have a near perfect score for everything except images. We’re going to leave the images unoptimized on purpose so that we retain their large size and have a good “load” to carry.

Let’s flush the caches and perform a second run of tests. Immediately we can see a drastic improvement.

Never mind the C on YSlow. It’s because we’re not using a CDN and some of the external resources (the fonts) cannot be browser cached.

Testing Site Location Page Load time Total Page Size Requests
GTMetrix Dallas 1.9s 7.25MB 56
Pingdom tools Dallas 1.6s 7.2MB 56
GTMetrix London 2.7s 7.25MB 56
Pingdom tools Stockholm 2.28s 7.3MB 56

We can see quite a nice improvement on the site. Next up, we’re going to enable HTTPS on the site. This is a prerequisite for setting up HTTP/2.

Test 3 – HTTP/2 without optimizations and caching

We’ll be using the Let’s Encrypt functionality to create a free SSL certificate. This is built into Kinsta, which means setting up HTTPS should be pretty straightforward.

Once we’ve generated an HTTPS certificate, we’ll be using the Really Simple SSL WordPress plugin to force HTTPS across the site.

This plugin checks whether a secure certificate for the domain exists on your server; if it does, it forces HTTPS across your WordPress site. Really and truly, this plugin makes implementing HTTPS on your site a breeze. If you’re performing a migration from HTTP to HTTPS, do not forget to perform a full 301 redirection from HTTP to HTTPS, so that you don’t lose any traffic or search engine rankings whilst forcing HTTPS on your site.

Once we’ve fully enabled and tested HTTPS on our website, you might need to do a little magic to start serving resources over HTTP/2, although most servers today will switch you directly to HTTP/2 if you are running an SSL site.

Kinsta runs on Nginx, and enables HTTP/2 by default on SSL sites, so enabling SSL is enough to switch the whole site to HTTP/2.

Once we’ve performed the configuration our site should now be served on HTTP/2. To confirm that the site is running on HTTP/2, we’ve installed this nifty chrome extension which checks which protocols are supported by our site.

Once we’ve confirmed that HTTP/2 is up and running nicely on the site, we can run another batch of tests.

Testing Site Location Page Load time Total Page Size Requests
GTMetrix Dallas 2.7s 7.24MB 82
Pingdom tools* Dallas 2.04s 7.3MB 82
GTMetrix London 2.4s 7.24MB 82
Pingdom tools* Stockholm 2.69s 7.3MB 82

*Unfortunately, Pingdom tools uses Chrome 39 to perform the tests. This version of Chrome does not have HTTP/2 support so we won’t be able to realistically calculate the speed improvements. We’ll run the tests regardless because we can have a benchmark to compare with.

Test 4 – HTTP/2 with performance optimizations and caching

Now that we’ve seen HTTP/2 without any performance optimizations, it’s also a good idea to actually check whether HTTP1 based performance optimizations can and will make any difference when we have HTTP/2 enabled.

There are two ways of thinking about this:

  • Against: To perform optimizations aimed at reducing connections and size, we are adding performance overhead to the site (whilst the server performs minification and combination of files), therefore there is a negative effect on the performance.
  • In favor: Performing such minification and combination of files and other optimizations will have a performance improvement regardless of protocol, particularly minification which is essentially reducing the size of resources which need to be delivered. Any performance overhead can be mitigated using caching.

Testing Site Location Page Load time Total Page Size Requests
GTMetrix Dallas 1.0s 6.94MB 42
Pingdom tools** Dallas 1.45s 7.3MB 56
GTMetrix London 2.5s 7.21MB 56
Pingdom tools** Stockholm 2.46s 7.3MB 56

**HTTP/2 not supported

Test 5 – CDN with performance optimizations and caching (no HTTP/2)

You’ve probably seen over and over again how one of the main ways to improve the performance of a site is to implement a CDN (Content Delivery Network).

But why should a CDN still be required if we are now using HTTP/2?

There is still going to be a need for a CDN, even with HTTP/2 in place. The reason is that besides a CDN improving performance from an infrastructure point of view (more powerful servers to handle the load of traffic), a CDN actually reduces the distance that the heaviest resources of your website need to travel.

By using a CDN, resources such as images, CSS and JS files are going to be served from a location which is (typically) physically closer to your end user than your website’s hosting server.

This has an implicit performance advantage: the shorter the distance content needs to travel, the faster your website will load. This is something which we’ve already encountered in our initial tests above. Physically closer test locations perform much better in loading times.

For our tests, we’re going to run our website on an Incapsula CDN server, one of the CDN services which we’ve been using for our sites lately. Of course, any CDN will have the same or similar benefits.

There are a couple of ways that your typical CDN will work:

  • URL rewrite: You install a plugin or write code so that the addresses of resources are rewritten and they are served from the CDN rather than from your site’s URL
  • Reverse proxy: you make DNS changes such that the CDN handles the bulk of your traffic. The CDN service then sends the requests for dynamic content to your web server.

Testing Site Location Page Load time Total Page Size Requests
GTMetrix Dallas 1.5s 7.21MB 61
Pingdom tools Dallas 1.65s 7.3MB 61
GTMetrix London 2.2s 7.21MB 61
Pingdom tools Stockholm 1.24s 7.3MB 61

Test 6 – CDN with performance optimizations and caching and HTTP/2

The final test which we’re going to perform is implementing all possible optimizations we can. That means we’re running a CDN using HTTP/2 on a site running HTTP/2, where all page-load optimizations have been performed.

Testing Site Location Page Load time Total Page Size Requests
GTMetrix Dallas 0.9s 6.91MB 44
Pingdom tools** Dallas 1.6s 7.3MB 61
GTMetrix London 1.9s 6.90MB 44
Pingdom tools** Stockholm 1.41s 7.3MB 61

**HTTP/2 not supported

Nice! We’ve got a sub-second loading time for a 7 MB website! That’s an impressive result if you ask me!

We can clearly see what a positive effect HTTP/2 is having on the site – comparing with the earlier runs, there is a 0.5 second difference in loading times. Given that we’re operating in an environment which loads in less than 2 seconds in the worst-case scenario, a 0.5 second difference is a HUGE improvement.

This is the result which we were actually hoping for.

Yes, HTTP/2 does make a real difference.

Conclusion – Analysis of HTTP/2 performance

Although we tried as much as possible to eliminate fluctuations, there are going to be quite a few inaccuracies in our setup, but there is a very clear trend. HTTP/2 is faster and is the recommended way forward. It does make up for the performance overhead which is introduced with HTTPS sites.

Our conclusions are therefore:

  1. HTTP/2 is faster in terms of performance and site loading time than HTTP1.x.
  2. Minification and other ways of reducing the size of the web page being served are always going to provide more benefits than the overhead required to perform this “minification”.
  3. Reducing the distance between the server and the client will always provide page loading time performance benefits so using a CDN is still a necessity if you want to push the performance envelope of your site, whether you’ve enabled HTTP/2 or not.

What do you think of our results? Have you already implemented HTTP/2? Have you seen better loading times too?


Modernizing our Progressive Enhancement Delivery
https://css-tricks.com/modernizing-progressive-enhancement-delivery/
Sun, 15 Jan 2017

Scott Jehl, explaining one of the performance improvements he made to the Filament Group site:

Inlining is a measurably-worthwhile workaround, but it’s still a workaround. Fortunately, HTTP/2’s Server Push feature brings the performance benefits of inlining without sacrificing cacheability for each file. With Server Push, we can respond to requests for a particular file by immediately sending additional files we know that file depends upon. In other words, the server can respond to a request for `index.html` with `index.html`, `css/site.css`, and `js/site.js`!

Server push seems like one of those big-win things that really incentivize the switch to H2. We have an article about being extra careful about caching and server push.

Creating a Cache-aware HTTP/2 Server Push Mechanism
https://css-tricks.com/cache-aware-server-push/
Mon, 28 Nov 2016

If you’ve been reading at all about HTTP/2, then you’ve likely heard about server push. If not, here’s the gist of it: Server push lets you preemptively send an asset when the client requests another. To use it, you need an HTTP/2-capable web server, and then you just set a Link header for the asset you want to push like so:

Link: </css/styles.css>; rel=preload

If this rule is set as a response header for an HTML resource, say index.html, the server will not only transmit index.html, but also styles.css in reply. This eliminates the return trip latency from the server, meaning that the document can render faster. At least in this scenario when CSS is pushed. You can push whatever your little heart desires.

One issue with server push that some developers have speculated over is that it may not be cache-aware in all situations, depending on any number of factors. Browsers do have the capability to reject pushes, and some servers have their own mitigation mechanisms. For example, Apache’s mod_http2 module has the H2PushDiarySize directive which attempts to address this problem. H2O Server has a thing called “Cache-aware Server Push” that stores a fingerprint of the pushed assets in a cookie. This is great news, but only if you can actually use H2O Server, which may not be an option for you, depending on your application requirements.

If you’re using an HTTP/2 server that hasn’t solved this problem yet, don’t sweat it. You can easily solve this problem on your own with a little back end code.

A super basic cache-aware server push solution

Let’s say you have a website running on HTTP/2 and you’re pushing a couple assets, like a CSS file and a JavaScript file. Let’s also say that this content rarely changes, and these assets have a long max-age time in their Cache-Control header. If this describes your situation, then there’s this quick and dirty back end solution you can use:

if (!isset($_COOKIE["h2pushes"])) {
    $pushString = "Link: </css/styles.css>; rel=preload,";
    $pushString .= "</js/scripts.js>; rel=preload";
    header($pushString);
    // Expire in 30 days (2,592,000 seconds)
    setcookie("h2pushes", "h2pushes", time() + 2592000, "/", ".myradwebsite.com", true);
}

This PHP-centric example will check for the existence of a cookie named h2pushes. If the visitor is not a known user, the cookie check will predictably fail. When that happens, the appropriate Link headers will be created, and sent with the response using the header function. After the headers have been set, setcookie is used to create a cookie that will prevent potential redundant pushes should the user return. In this example, the expiry time for the cookie is 30 days (2,592,000 seconds). When the cookie expires (or is deleted), the process reoccurs.

This isn’t strictly “cache-aware” in the sense that the server knows for sure if the asset is cached on the client side, but the logic follows. The cookie is only set if the user has visited the page. By the time it’s set, assets have been pushed, and caching policies set by the Cache-Control header are in effect. This works great. Great, that is, until you have to change an asset.

A more flexible cache-aware server push solution

What if you run a website that uses server push, but assets change frequently? You want to ensure that redundant pushes don’t occur, but you also want to push assets if they have changed, or maybe you want to push additional assets later on. This requires a bit more code than our previous solution:

function pushAssets() {
    $pushes = array(
        "/css/styles.css" => substr(md5_file("/var/www/css/styles.css"), 0, 8),
        "/js/scripts.js" => substr(md5_file("/var/www/js/scripts.js"), 0, 8)
    );

    if (!isset($_COOKIE["h2pushes"])) {
        $pushString = buildPushString($pushes);
        header($pushString);
        setcookie("h2pushes", json_encode($pushes), 0, 2592000, "", ".myradwebsite.com", true);
    } else {
        $serializedPushes = json_encode($pushes);

        if ($serializedPushes !== $_COOKIE["h2pushes"]) {
            $oldPushes = json_decode($_COOKIE["h2pushes"], true);
            $diff = array_diff_assoc($pushes, $oldPushes);
            $pushString = buildPushString($diff);
            header($pushString);
            setcookie("h2pushes", json_encode($pushes), 0, 2592000, "", ".myradwebsite.com", true);
        }
    }
}

function buildPushString($pushes) {
    $links = array();

    foreach ($pushes as $asset => $version) {
        $links[] = "<" . $asset . ">; rel=preload";
    }

    // Join with commas so the header has no trailing comma after the last asset.
    return "Link: " . implode(",", $links);
}

// Push those assets!
pushAssets();

Okay, so maybe it’s more than just a bit of code, but it’s still grokkable. We start by defining a function named pushAssets that will drive the cache-aware push behavior. This function begins by defining an array of assets we want to push. Because we want to re-push assets if they change, we need to fingerprint them for comparison later on. For example, if you’re serving a file named styles.css, but you change it, you’ll version the asset with a query string (e.g., /css/styles.css?v=1) to ensure that the browser won’t serve a stale version of it. In this case, we’re using the md5_file function to create a checksum of the asset based on its contents. Because md5 checksums are 32 hexadecimal characters long, we use substr to shorten them to 8. Whenever these assets change, the checksum will change, which means that assets will automatically be versioned.

Now for the main event: Like before, we’ll check for the presence of the h2pushes cookie. If it doesn’t exist, we’ll use the buildPushString helper function to build the Link header string from the assets we’ve specified in the $pushes array, and set the headers with the header function. Then we’ll create the cookie, but this time we’ll create a storable representation of the $pushes array with the json_encode function, and store that value in the cookie. We could serialize this value, but this presents a potentially serious security risk when we unserialize it later, so we should stick with something safer like json_encode.

Now comes the interesting part: What to do with returning visitors. If it turns out that the visitor is returning and has an h2pushes cookie, we json_encode the $pushes array and compare the value of this JSON-encoded array to the one stored in the h2pushes cookie. If there’s no difference, we do nothing else and merrily go on our way. If there’s a discrepancy, though, we need to find out what has changed. To do this, we’ll use the json_decode function to convert the h2pushes cookie value back into an array, and use array_diff_assoc to find the differences between the $pushes array and the JSON-decoded $oldPushes array.

With the differences returned from array_diff_assoc, we use the buildPushString helper function to once again build a string of resources to push again. The headers are sent, and the cookie value is updated with the JSON-encoded contents of the $pushes array. Congratulations. You just learned how to create your own cache-aware server push mechanism!

Conclusion

With a bit of ingenuity, it’s not too difficult to push assets in a way that minimizes redundant pushes for repeat visitors. If you don’t have the luxury of being able to use a web server like H2O, this solution may work well enough for your purposes. It’s currently in use on my own website, and it seems to work pretty well. It’s very low maintenance, too. I can change assets on my site, and with the fingerprinting mechanism used, asset references update themselves, and pushes adapt to changes in assets without me having to do any extra work.

One thing to remember is that as browsers mature, they will likely become better at recognizing when they should reject pushes, and serve from the cache. If browsers fail at perfecting this behavior, HTTP/2 servers will likely implement some cache-aware pushing mechanism for the user much like H2O does. Until that day comes, however, this may be something for you to consider. While written in PHP, porting this code to another back-end language should be trivial.

Happy pushing!


Cover of Web Performance in Action

Jeremy Wagner is the author of Web Performance in Action, an upcoming title from Manning Publications. Use coupon code csstripc to save 38% off it, or any other Manning book.

Check him out on Twitter: @malchata


A Case Study on Boosting Front-End Performance
https://css-tricks.com/case-study-boosting-front-end-performance/
Thu, 18 Aug 2016

At De Voorhoede we try to boost front-end performance as much as possible for our clients. It is not so easy to convince every client to follow all of our performance guidelines. We try to convince them by talking to them in their own language, explaining the importance of performance for conversion, or comparing their performance to that of their main competitors.

Incidentally, we recently updated our site. Apart from doing a complete design overhaul, this was the ideal opportunity to push performance to the max. Our goal was to take control, focus on performance, be flexible for the future and make it fun to write content for our site. Here’s how we mastered front-end performance for our site. Enjoy!

Design for performance

In our projects, we have daily discussions with designers and product owners about balancing aesthetics and performance. For our own site, this was easy. We believe that a good user experience starts with delivering content as fast as possible. That means performance > aesthetics.

Good content, layout, images, and interactivity are essential for engaging your audience, but each of these elements has an impact on page load time and the end-user experience. In every step, we looked at how we could get a nice user experience and design while having minimum impact on performance.

Content first

We want to serve the core content (text with the essential HTML and CSS) to our visitors as fast as possible. Every page should support the primary purpose of the content: get the message across. Enhancements, meaning JavaScript, complete CSS, web fonts, images and analytics, are secondary to the core content.

Take control

After defining the standards we set for our ideal site, we concluded that we needed full control over every aspect of the site. We chose to build our own static site generator, including asset pipeline, and host it ourselves.

Static site generator

We’ve written our own static site generator in Node.js. It takes Markdown files with short JSON page meta descriptions to generate the complete site structure with all of its assets. It can also be accompanied by an HTML file for including page-specific JavaScript.

See below a simplified meta description and markdown file for a blog post, used to generate the actual HTML.

The JSON meta description:

{
  "keywords": ["performance", "critical rendering path", "static site", "..."],
  "publishDate": "2016-08-12",
  "authors": ["Declan"]
}

And the Markdown file:

# A case study on boosting front-end performance
At [De Voorhoede](https://www.voorhoede.nl/en/) we try to boost front-end performance...

## Design for performance

In our projects we have daily discussions...
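
As a rough idea of what happens with these two files, here is a minimal sketch of such a build step in Node.js. It is not De Voorhoede’s actual generator: markdownToHtml() stands in for whichever Markdown renderer is used, and the file names and HTML template are made up.

// Combine a post's JSON meta description and Markdown body into an HTML page.
// markdownToHtml() is a placeholder for a real Markdown renderer.
const fs = require('fs');
const path = require('path');

function buildPost(postDir, markdownToHtml) {
  const meta = JSON.parse(fs.readFileSync(path.join(postDir, 'post.json'), 'utf8'));
  const body = markdownToHtml(fs.readFileSync(path.join(postDir, 'post.md'), 'utf8'));

  return [
    '<!doctype html>',
    '<html lang="en"><head><meta charset="utf-8">',
    `<meta name="keywords" content="${meta.keywords.join(', ')}">`,
    '</head><body>',
    `<article data-publish-date="${meta.publishDate}">${body}</article>`,
    '</body></html>'
  ].join('\n');
}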

Image delivery

The average webpage is a whopping 2,406 KB, of which 1,535 KB are images. With images taking up such a big part of the average website, it is also one of the best targets for performance wins.

Average bytes per page by content type for July 2016 from httparchive.org

WebP

WebP is a modern image format that provides superior lossless and lossy compression for images on the web. WebP images can be substantially smaller than images of other formats: sometimes they are up to 25% smaller than their JPEG counterpart. WebP is overlooked a lot and not often used. At the time of writing, WebP support is limited to Chrome, Opera and Android (still over 50% of our users), but we can degrade gracefully to JPG/PNG.

<picture> element

Using the picture element we can degrade gracefully from WebP to a more widely supported format like JPEG:

<picture>
  <source type="image/webp" srcset="image-l.webp" media="(min-width: 640px)">
  <source type="image/webp" srcset="image-m.webp" media="(min-width: 320px)">
  <source type="image/webp" srcset="image-s.webp">
  <source srcset="image-l.jpg" media="(min-width: 640px)">
  <source srcset="image-m.jpg" media="(min-width: 320px)">
  <source srcset="image-s.jpg">
  <img alt="Description of the image" src="image-l.jpg">
</picture>

We use picturefill by Scott Jehl to polyfill browsers not supporting the <picture> element and to get consistent behaviour across all browsers.

We use the <img> as a fallback for browsers not supporting the <picture> element and/or JavaScript. Using the image’s largest instance makes sure it still looks good in the fallback scenario.

Generate

While the image delivery approach was in place, we still had to figure out how to painlessly implement it. I love the picture element for what it can do, but I hate writing the snippet above. Especially if I have to include it while writing content. We don’t want to bother with generating 6 instances of every image, optimising the images and writing <picture> elements in our markdown. So we:

  • generate multiple instances of the original images in our build process, both in the input format (JPG, PNG) and in WebP. We use gulp-responsive to do so.
  • minify the generated images
  • write ![Description of the image](http://placehold.it/350x150.png) in our markdown files.
  • use custom-written Markdown renderers during the build process to compile conventional Markdown image declarations into full-blown <picture> elements (see the sketch after this list).
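
A rough idea of what that last step boils down to, reduced to a plain string-building function; the breakpoints, file-name suffixes and WebP/JPG pairing are assumptions for illustration, not the site’s actual renderer.

// Turns an image path + alt text into the <picture> markup shown earlier.
// Breakpoints, suffixes and formats are illustrative assumptions.
function pictureElement(src, alt) {
  const base = src.replace(/\.(jpe?g|png)$/i, '');
  return [
    '<picture>',
    `  <source type="image/webp" srcset="${base}-l.webp" media="(min-width: 640px)">`,
    `  <source type="image/webp" srcset="${base}-m.webp" media="(min-width: 320px)">`,
    `  <source type="image/webp" srcset="${base}-s.webp">`,
    `  <source srcset="${base}-l.jpg" media="(min-width: 640px)">`,
    `  <source srcset="${base}-m.jpg" media="(min-width: 320px)">`,
    `  <source srcset="${base}-s.jpg">`,
    `  <img alt="${alt}" src="${base}-l.jpg">`,
    '</picture>'
  ].join('\n');
}

// pictureElement('image.jpg', 'Description of the image');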

SVG animations

We chose a distinct graphic style for our site, in which SVG illustrations play a major role. We did this for several reasons.

  • Firstly, SVGs (vector images) tend to be smaller than bitmap images;
  • Secondly, SVGs are responsive by nature and scale perfectly while always staying super crisp. So no need for image generation and <picture> elements;
  • Last but not least, we can animate and alter them with CSS! A perfect example of designing for performance. All our portfolio pages have a custom-made animated SVG that is reused on the overview page. It serves as a recurring style for all our portfolio items, making the design consistent, while having very little impact on performance.

Check out this animation and how we can alter it with CSS.

See the Pen Change inline svg styling by De Voorhoede (@voorhoede) on CodePen.

Custom web fonts

Before diving in, here’s a short primer on browser behaviour regarding custom web fonts. When the browser comes across a @font-face definition in CSS that points to a font not available on the user’s computer, it will try to download this font file. While the download happens, most browsers don’t display the text using this font. At all. This phenomenon is called the “Flash of Invisible Text” or FOIT. If you know what to look for, you will find it almost everywhere on the web. And if you ask me, it is bad for the end-user experience. It delays the user in reaching their core goal: reading the content.

We can however force the browser to change its behaviour into a “Flash of Unstyled Content” or FOUT. We tell the browser to use a ubiquitous font at first, like Arial or Georgia. Once the custom web font is downloaded it will replace the standard font and re-render all text. If the custom font fails to load, the content is still perfectly readable. While some might consider this a fallback, we see custom fonts as an enhancement. Even without it, the site looks fine and works 100%.

On the left a blogpost with the custom web font, on the right a blogpost with our “fallback” font.

Using custom web fonts can benefit the user experience, as long as you optimise and serve them responsibly.

Font subsetting

Subsetting is by far the quickest win in improving webfont performance. I would recommend it to every web developer using custom fonts. You can go all out with subsetting if you have complete control over the content and know which characters will be displayed. But even just subsetting your font to “Western languages” will have a huge impact on file size. For example, our Noto Regular WOFF font, which is 246KB by default, drops to 31KB when subsetted to Western languages. We used the Font squirrel webfont generator which is really easy to use.

Font face observer

Font face observer by Bram Stein is an awesome helper script for checking whether fonts are loaded. It is agnostic as to how you load your fonts, be it via a webfont service or hosting them yourself. After the font face observer script notifies us that all custom web fonts are loaded, we add a fonts-loaded class to the <html> element. We style our pages accordingly:

html {
  font-family: Georgia, serif;
}

html.fonts-loaded {
  font-family: Noto, Georgia, serif;
}

Note: For brevity, I did not include the @font-face declaration for Noto in the CSS above.

We also set a cookie to remember that all fonts are loaded, and therefore live in the browser’s cache. We use this cookie for repeating views, which I will explain a bit later.
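
Put together, a minimal sketch of that approach might look like the following. The FontFaceObserver constructor comes from Bram Stein’s library mentioned above; the cookie name and max-age are illustrative.

// Wait for the custom font to load, then switch the page over to it and
// remember that for repeat views. Cookie details are illustrative.
var noto = new FontFaceObserver('Noto');

noto.load().then(function () {
  document.documentElement.classList.add('fonts-loaded');
  document.cookie = 'fonts-loaded=true; max-age=2592000; path=/';
});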

In the near future we probably will not need Bram Stein’s JavaScript to get this behaviour. The CSS Working Group has proposed a new @font-face descriptor (called font-display), where the property value controls how a downloadable font renders before it is fully loaded. The CSS statement font-display: swap; would give us the same behaviour as the approach above. Read more on the font-display property.

Lazy load JS and CSS

Generally speaking we have an approach of loading in assets as soon as possible. We eliminate render blocking requests and optimise for the first view, leveraging the browser cache for repeated views.

Lazy load JS

By design, we do not have a lot of JavaScript in our site. For what we do have, and what we intend to use in the future, we developed a JavaScript workflow.

JavaScript in the <head> blocks rendering, and we don’t want that. JavaScript should only enhance the user experience; it is not critical for our visitors. The easy way to fix the render blocking JavaScript is to place the script in the tail of your web page. The downside is that it will only start downloading the script after the complete HTML is downloaded.

An alternative could be to add the script to the head and defer the script execution by adding the defer attribute to the <script> tag. This makes the script non-blocking as the browser downloads it almost immediately, without executing the code until the page is loaded.

There is just one thing left: we don’t use libraries like jQuery, and thus our JavaScript depends on vanilla JavaScript features. We only want to load JavaScript in browsers supporting these features (i.e. mustard cutting). The end result looks like this:

<script>
// Mustard Cutting
if ('querySelector' in document && 'addEventListener' in window) {
  document.write('<script src="index.js" defer><\/script>');
}
</script>

We place this little inline script in the head of our page detecting whether the vanilla JavaScript document.querySelector and window.addEventListener features are supported. If so, we load the script by writing the script tag directly to the page, and use the defer attribute to make it non-blocking.

Lazy load CSS

For the first view the biggest render blocking resource for our site is CSS. Browsers delay page rendering until the full CSS file referenced in the <head> is downloaded and parsed. This behaviour is deliberate, otherwise the browser would need to recalculate layouts and repaint all the time during rendering.

To prevent CSS from render blocking, we need to asynchronously load the CSS file. We use the awesome loadCSS function by the Filament Group. It will give you a callback when the CSS file is loaded, where we set a cookie stating that the CSS is loaded. We use this cookie for repeating views, which I will explain a bit later.

There is one ‘problem’ with loading CSS asynchronously: while the HTML is being rendered really fast, it will look like plain HTML with no CSS applied until the full CSS is downloaded and parsed. This is where critical CSS comes in.

Critical CSS

Critical CSS can be described as the minimum amount of blocking CSS to make a page appear recognisable for the user. We focus on ‘above the fold’ content. Obviously the location of the fold differs greatly between devices, so we make a best guess.

Manually determining this critical CSS is a time consuming process, especially during future style changes. There are several nifty scripts for generating critical CSS in your build process. We used the magnificent critical npm module by Addy Osmani.
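
As a hedged sketch, wiring the critical module into a build script could look roughly like this; the paths, viewport size and option names are assumptions and should be checked against the version of the module you install, since its API has changed over the years.

// Generate critical CSS for the homepage and inline it into the built HTML.
// Paths, viewport dimensions and option names are assumptions for this sketch.
const critical = require('critical');

critical.generate({
  base: 'dist/',       // directory containing the built site
  src: 'index.html',   // page to extract critical CSS from
  dest: 'index.html',  // write the result back with the critical CSS inlined
  inline: true,
  width: 1300,         // viewport used to decide what counts as 'above the fold'
  height: 900
});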

See below our homepage rendered with critical CSS and rendered with the full CSS. Notice the fold, below which the page is still sort of unstyled.

On the left the homepage rendered with only critical CSS, on the right the homepage rendered with the full CSS. The red line represents the fold.

The Server

We host the De Voorhoede site ourselves, because we wanted to have control over the server environment. We also wanted to experiment with how we could boost performance by changing the server configuration. At this time we have an Apache web server and we serve our site over HTTPS.

Configuration

To boost performance and security we did a little research on how to configure the server.

We use H5BP boilerplate Apache configuration, which is a great start for improving performance and security for your Apache web server. They have configurations for other server environments as well.

We turned on GZIP for most of our HTML, CSS and JavaScript. We set caching headers neatly for all our resources. Read about that below in the file level caching section.

HTTPS

Serving your site over HTTPS can have a performance impact. The performance penalty comes mainly from setting up the SSL handshake, which introduces a lot of latency. But, as always, we can do something about that!

HTTP Strict Transport Security is an HTTP header that lets the server tell the browser that it should only be communicated with using HTTPS. This way the browser doesn’t need to be redirected from HTTP to HTTPS first: all attempts to access the site using HTTP are automatically converted. That saves us a roundtrip!

TLS false start allows the client to start sending encrypted data immediately after the first TLS roundtrip. This optimization reduces handshake overhead for new TLS connections to one roundtrip. Once the client knows the encryption key it can begin transmitting application data. The rest of the handshake is spent confirming that nobody has tampered with the handshake records, and can be done in parallel.

TLS session resumption saves us another roundtrip by making sure that if the browser and the server have communicated over TLS in the past, the browser can remember the session identifier and the next time it sets up a connection, that identifier can be reused, saving a round trip.

I sound like a DevOps engineer, but I’m not. I just read some things and watched some videos. I loved Mythbusting HTTPS: Squashing security’s urban legends by Emily Stark from Google I/O 2016.

Use of Cookies

We don’t have a server side language, just a static Apache web server. But an Apache web server can still do server side includes (SSI) and read out cookies. By making smart use of cookies and serving HTML that is partially rewritten by Apache, we can boost front-end performance. Take this example below (our actual code is a little more complex, but boils down to the same ideas):

<!-- #if expr="($HTTP_COOKIE!=/css-loaded/) || ($HTTP_COOKIE=/.*css-loaded=([^;]+);?.*/ && ${1} != '0d82f.css' )"-->

<noscript><link rel="stylesheet" href="0d82f.css"></noscript>
<script>
(function() {
  function loadCSS(url) {...}
  function onloadCSS(stylesheet, callback) {...}
  function setCookie(name, value, expInDays) {...}

  var stylesheet = loadCSS('0d82f.css');
  onloadCSS(stylesheet, function() {
    setCookie('css-loaded', '0d82f', 100);
  });
}());
</script>

<style>/* Critical CSS here */</style>

<!-- #else -->
<link rel="stylesheet" href="0d82f.css">
<!-- #endif -->

The Apache server-side logic is in the comment-looking lines starting with <!-- #. Let’s look at this step by step:

  • $HTTP_COOKIE!=/css-loaded/ checks if no CSS cache cookie exists yet.
  • $HTTP_COOKIE=/.*css-loaded=([^;]+);?.*/ && ${1} != '0d82f.css' checks if the cached CSS version is not the current version.
  • If <!-- #if expr="..." --> evaluates to true we assume this is the visitor’s first view.
  • For the first view we add a <noscript> tag with a render blocking <link rel="stylesheet">. We do this because we will load the full CSS asynchronously with JavaScript; if JavaScript is disabled, that would not be possible. This means that as a fallback, we load CSS ‘by the numbers’, i.e. in a blocking manner.
  • We add an inline script with functions for lazy loading the CSS, an onloadCSS callback and set cookies.
  • In the same script we load in the full CSS asynchronously.
  • In the onloadCSS callback we set a cookie with the version hash as cookie value.
  • After the script we add an inline stylesheet with the critical CSS. This is render-blocking, but it is very small and prevents the page from being displayed as plain unstyled HTML.
  • The <!-- #else --> statement (meaning the css-loaded cookie is present) represents the visitor’s repeat views. Because we can reasonably assume the CSS file was loaded previously, we can leverage the browser cache and serve the stylesheet in a blocking manner. It will be served from cache and load almost instantly.

The same approach is used for loading fonts asynchronously on the first view, assuming we can serve them from the browser cache on repeat views.
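The loadCSS, onloadCSS and setCookie helpers are stubbed out in the snippet above. As a rough illustration (not our production code), they might look something like this:

function loadCSS(url) {
  // Append a stylesheet link that doesn't block rendering: start with a
  // non-matching media query, then switch to 'all' once it has loaded.
  var stylesheet = document.createElement('link');
  stylesheet.rel = 'stylesheet';
  stylesheet.href = url;
  stylesheet.media = 'only x';
  document.head.appendChild(stylesheet);
  return stylesheet;
}

function onloadCSS(stylesheet, callback) {
  stylesheet.addEventListener('load', function () {
    stylesheet.media = 'all'; // apply the stylesheet now that it's downloaded
    callback();
  });
}

function setCookie(name, value, expInDays) {
  var date = new Date();
  date.setTime(date.getTime() + expInDays * 24 * 60 * 60 * 1000);
  document.cookie = name + '=' + value + '; expires=' + date.toUTCString() + '; path=/';
}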

Here you can see the cookies we use to differentiate between first and repeat views.

File level caching

Since we depend heavily on browser caching for repeat views, we need to make sure we cache properly. Ideally we want to cache assets (CSS, JS, fonts, images) forever, only invalidating the cache when a file actually changes. A cache entry is invalidated by changing the request URL. We git tag our site when we release a new version, so the easiest approach would be to add a query parameter with the code base version to request URLs, like `https://www.voorhoede.nl/assets/css/main.css?v=1.0.4`. But.

The disadvantage of this approach is that whenever we write a new blog post (which is part of our code base, not stored externally in a CMS), the cache for all of our assets would be invalidated, even though none of those assets changed.

While trying to level up our approach, we stumbled upon gulp-rev and gulp-rev-replace. These plugins let us add per-file revisioning by appending a content hash to each filename. The request URL now only changes when the actual file has changed, giving us per-file cache invalidation. This makes my heart go boom boom!
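For reference, the gulp setup for this boils down to something like the following. This is a simplified sketch with assumed paths (a dist directory), not our actual build file:

var gulp = require('gulp');
var rev = require('gulp-rev');
var revReplace = require('gulp-rev-replace');

// Append a content hash to each asset filename (e.g. main.css becomes main-0d82f.css)
// and write a manifest mapping original names to revisioned names.
gulp.task('revision', function () {
  return gulp.src(['dist/**/*.css', 'dist/**/*.js'])
    .pipe(rev())
    .pipe(gulp.dest('dist'))
    .pipe(rev.manifest())
    .pipe(gulp.dest('dist'));
});

// Rewrite references in the HTML so they point at the revisioned filenames.
gulp.task('revision-replace', ['revision'], function () {
  var manifest = gulp.src('dist/rev-manifest.json');
  return gulp.src('dist/**/*.html')
    .pipe(revReplace({ manifest: manifest }))
    .pipe(gulp.dest('dist'));
});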

Result

If you’ve come this far (awesome!) you probably want to know the result. You can test how performant your site is with tools like PageSpeed Insights for practical tips and WebPagetest for extensive network analysis. I think the best way to test rendering performance is to watch your page build up while throttling your connection insanely, i.e. in a probably unrealistic manner. In Google Chrome you can throttle your connection (via the inspector > Network tab) and see how requests slowly trickle in while the page renders.

So here is how our homepage loads on a throttled 50KB/s GPRS connection.

Network analysis for the site and how the page evolves for the first visit.

Notice how we get the first render at 2.27s on a 50KB/s GPRS network, represented by the first image from the filmstrip and the corresponding yellow line on the waterfall view. The yellow line is drawn right after the HTML has been downloaded. The HTML contains the critical CSS, making sure the page looks usable. All other blocking resources are being lazily loaded, so we can interact with the page while the rest is being downloaded. This is exactly what we wanted!

Another thing to notice is that custom fonts are never loaded on connections this slow. The font face observer automatically takes care of this, but if we didn’t load fonts asynchronously, you would be staring at a FOIT (flash of invisible text) for a while in most browsers.

The full CSS file is only loaded after 8 seconds. In other words: if we’d loaded the full CSS in a blocking manner instead of inlining the critical CSS, we would have been staring at a white page for those 8 seconds.

If you’re curious how these times compare to websites with less of a focus on performance, try throttling them the same way. Their load times will go through the roof!

Testing our site with the tools mentioned earlier shows some nice results as well. PageSpeed Insights gives us a 100/100 score for mobile performance. How awesome is that?!

When we look at WebPagetest we get the following result:

WebPagetest results for voorhoede.nl.

We can see that our server responds quickly and that the Speed Index for the first view is 693, meaning the page appears usable after roughly 700 ms on a cable connection. Looking good!

Roadmap

We are not done yet and are constantly iterating on our approach. We will focus in the near future on:

  • HTTP/2: It’s here and we are currently experimenting with it. A lot of the things described in this article are best practices based on the limitations of HTTP/1.1. In short: HTTP/1.1 dates from 1999, when table layouts and inline styles were super awesome. HTTP/1.1 was never designed for 2.6 MB webpages with 200 requests. To alleviate our poor old protocol’s pains we concatenate JS and CSS, inline critical CSS, use data URIs for small images, et cetera. Everything to save requests. Since HTTP/2 can run multiple requests in parallel over the same TCP connection, all this concatenation and request reduction might even prove to be an antipattern. We will move to HTTP/2 when we are done running experiments.
  • Service Workers: This is a modern browser JavaScript API that runs in the background. It enables a lot of features that were not available to websites before, like offline support, push notifications, background sync and more. We are playing around with Service Workers, but we still need to implement them on our own site. I guarantee you, we will!
  • CDN: We wanted control, so we hosted the site ourselves. And now we want to move to a CDN to get rid of the network latency caused by the physical distance between client and server. Although our clients are mostly based in the Netherlands, we want to reach the worldwide front-end community in a way that reflects what we do best: quality, performance, and moving the web forward.

Thanks for reading! Please visit our site to see the end result. Do you have comments or questions? Let us know via Twitter. And if you enjoy building fast websites, why not join us?


A Case Study on Boosting Front-End Performance originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/case-study-boosting-front-end-performance/feed/ 23 244593
[Talk] HTTP/2 is Here, Now Let’s Make it Easy https://css-tricks.com/http2-is-here-now-lets-make-it-easy/ Thu, 28 Jan 2016 15:22:20 +0000 http://css-tricks.com/?p=237363 Rebecca Murphey’s talk from dotJS 2015 explores all the various gotchas surrounding HTTP/2. For instance, how will servers support it? How does our front-end process change to benefit the most from this new protocol? How will developers know that everything …


[Talk] HTTP/2 is Here, Now Let’s Make it Easy originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
Rebecca Murphey’s talk from dotJS 2015 explores all the various gotchas surrounding HTTP/2. For instance, how will servers support it? How does our front-end process change to benefit the most from this new protocol? How will developers know that everything is working correctly?



[Talk] HTTP/2 is Here, Now Let’s Make it Easy originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
237363
Building for HTTP/2 https://css-tricks.com/building-for-http2/ Sun, 29 Nov 2015 17:56:05 +0000 http://css-tricks.com/?p=235366 Rebecca Murphey:

This is everything-you-thought-you-knew-is-wrong kind of stuff. In an HTTP/2 world, there are few benefits to concatenating a bunch of JS files together, and in many cases the practice will be actively harmful. Domain sharding becomes an anti-pattern. Throwing


Building for HTTP/2 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
Rebecca Murphey:

This is everything-you-thought-you-knew-is-wrong kind of stuff. In an HTTP/2 world, there are few benefits to concatenating a bunch of JS files together, and in many cases the practice will be actively harmful. Domain sharding becomes an anti-pattern. Throwing a bunch of <script> tags in your HTML is suddenly not a laughably terrible idea. Inlining of resources is a thing of the past. Browser caching — and cache busting — can occur on a per-module basis.

I can’t help but think that web development might actually make sense some day.



Building for HTTP/2 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
235366
Making a Simple Site Work Offline with ServiceWorker https://css-tricks.com/serviceworker-for-offline/ https://css-tricks.com/serviceworker-for-offline/#comments Tue, 10 Nov 2015 14:51:13 +0000 http://css-tricks.com/?p=210533 I’ve been playing around with ServiceWorker a lot recently, so when Chris asked me to write an article about it I couldn’t have been more thrilled. ServiceWorker is the most impactful modern web technology since Ajax. It’s an API that …


Making a Simple Site Work Offline with ServiceWorker originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
I’ve been playing around with ServiceWorker a lot recently, so when Chris asked me to write an article about it I couldn’t have been more thrilled. ServiceWorker is the most impactful modern web technology since Ajax. It’s an API that lives inside the browser and sits between your web pages and your application servers. Once installed and activated, a ServiceWorker can programmatically determine how to respond to requests for resources from your origin, even when the browser is offline. ServiceWorker can be used to power the so-called “Offline First” web.

ServiceWorker is a progressive technology, and in this article, I’ll show you how to take a website and make it available offline for humans who are using a modern browser while leaving humans with unsupported browsers unaffected.

Here’s a silent, 26 second video of a supporting browser (Chrome) going offline and the final demo site still working:

If you would like to just look at the code, there is a Simple Offline Site repo we’ve built for this. You can see the entire thing as a CodePen Project, and it’s even a full on demo website.

Browser Support

Today, ServiceWorker has browser support in Google Chrome, Opera, and in Firefox behind a configuration flag. Microsoft is likely to work on it soon. There’s no official word from Apple’s Safari yet.

Jake Archibald has a page tracking the support of all the ServiceWorker-related technologies.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop

Chrome 45, Firefox 44, IE No, Edge 17, Safari 11.1

Mobile / Tablet

Android Chrome 108, Android Firefox 107, Android 108, iOS Safari 11.3-11.4

Given that you can implement this stuff in a progressive enhancement style (it doesn’t affect unsupported browsers), it’s a great opportunity to get ahead of the pack. Users on supporting browsers are going to greatly appreciate it.

Before getting started, I should point out a couple of things for you to take into account.

Secure Connections Only

You should know that there are a few hard requirements when it comes to ServiceWorker. First and foremost, your site needs to be served over a secure connection. If you’re still serving your site over HTTP, it might be a good excuse to implement HTTPS.

HTTPS is required for ServiceWorker

You could use a CDN proxy like CloudFlare to serve traffic securely. Remember to find and fix mixed content warnings, as some browsers may otherwise warn your visitors that your site is unsafe.

While the spec for HTTP/2 doesn’t inherently enforce encrypted connections, browsers only implement HTTP/2 and similar technologies over HTTPS. The ServiceWorker specification likewise recommends that browsers expose it only over HTTPS. Browsers have also hinted at marking sites served over unencrypted connections as insecure. Search engines penalize unencrypted results.

“HTTPS only” is the browsers’ way of saying “this is important, you should do this”.

A Promise Based API

The future of web browser API implementations is Promise-heavy. The fetch API, for example, sprinkles sweet Promise-based sugar on top of XMLHttpRequest. ServiceWorker makes occasional use of fetch, but there’s also worker registration, caching, and message passing, all of which are Promise-based.

Whether or not you are a fan of Promises, they are here to stay, so you’d better get used to them.
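As a quick refresher, here’s the kind of Promise-based flow you’ll see throughout this article, using fetch against a hypothetical endpoint:

fetch('/api/articles.json')            // returns a Promise for the response
  .then(function (response) {
    return response.json();            // reading the body is asynchronous too
  })
  .then(function (articles) {
    console.log('Loaded', articles.length, 'articles');
  })
  .catch(function (err) {
    console.log('Request failed:', err);
  });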

Registering Your First ServiceWorker

I worked together with Chris on the simplest possible practical demonstration of how to use ServiceWorker. He implemented a simple website (static HTML, CSS, JavaScript, and images) and asked me to add offline support. I felt like that would be a great opportunity to display how easy and unobtrusive it is to add offline capabilities to an existing website.

If you’d like to skip to the end, take a look at this commit to the demo site on GitHub.

The first step is to register the ServiceWorker. Instead of blindly attempting the registration, we feature-detect that ServiceWorker is available.

if ('serviceWorker' in navigator) {

}

The following piece of code demonstrates how we register a ServiceWorker. The JavaScript resource passed to .register will be executed in the context of a ServiceWorker. Note how registration returns a Promise so that you can track whether or not the ServiceWorker registration was successful. I prefixed logging statements with CLIENT: to make it easier to see at a glance whether a logging statement was coming from a web page or from the ServiceWorker script.

// ServiceWorker is a progressive technology. Ignore unsupported browsers
if ('serviceWorker' in navigator) {
  console.log('CLIENT: service worker registration in progress.');
  navigator.serviceWorker.register('/service-worker.js').then(function() {
    console.log('CLIENT: service worker registration complete.');
  }, function() {
    console.log('CLIENT: service worker registration failure.');
  });
} else {
  console.log('CLIENT: service worker is not supported.');
}

The endpoint to the service-worker.js file is quite important. If the script were served from, say, /js/service-worker.js then the ServiceWorker would only be able to intercept requests in the /js/ context, but it’d be blind to resources like /other. This is typically an issue because you usually scope your JavaScript files in a /js/, /public/, /assets/, or similar “directory”, whereas you’ll want to serve the ServiceWorker script from the domain root in most cases.
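If you ever need to be explicit about it, register accepts a scope option. You can only narrow the scope this way; broadening it beyond the script’s own location requires an extra Service-Worker-Allowed response header from the server. A minimal sketch, assuming the worker script sits at the root but should only control pages under /app/:

navigator.serviceWorker.register('/service-worker.js', { scope: '/app/' })
  .then(function (registration) {
    console.log('CLIENT: service worker registered with scope:', registration.scope);
  });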

Adding that registration snippet was, in fact, the only change needed in your web application code, provided that you had already implemented HTTPS. At this point, supporting browsers will issue a request for /service-worker.js and attempt to install the worker.

How should you structure the service-worker.js file, then?

Putting Together A ServiceWorker

ServiceWorker is event-driven and your code should aim to be stateless. That’s because when a ServiceWorker isn’t being used it’s shut down, losing all state. You have no control over that, so it’s best to avoid any long-term dependence on the in-memory state.
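For example, a counter kept in a top-level variable will quietly reset whenever the browser shuts the worker down. This is an illustrative anti-pattern, not something from the demo site; anything you actually care about should live in Cache storage or IndexedDB instead:

var requestCount = 0; // Anti-pattern: module-level state is lost when the worker is terminated

self.addEventListener('fetch', function (event) {
  requestCount++;
  // Only counts requests handled since the worker last started up.
  console.log('WORKER: requests handled this lifetime:', requestCount);
});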

Below, I listed the most notable events you’ll have to handle in a ServiceWorker.

  • The install event fires when a ServiceWorker is first fetched. This is your chance to prime the ServiceWorker cache with the fundamental resources that should be available even while users are offline.
  • The fetch event fires whenever a request originates from your ServiceWorker scope, and you’ll get a chance to intercept the request and respond immediately, without going to the network.
  • The activate event fires after a successful installation. You can use it to phase out older versions of the worker. We’ll look at a basic example where we delete stale cache entries.
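Put together, a service-worker.js file is essentially just these three listeners. Here’s a bare skeleton, fleshed out step by step in the sections below:

self.addEventListener('install', function (event) {
  // Prime the cache with the fundamental resources for offline use.
});

self.addEventListener('fetch', function (event) {
  // Decide how to respond: from the cache, from the network, or with a fallback.
});

self.addEventListener('activate', function (event) {
  // Clean up caches left behind by older versions of the worker.
});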

Let’s go over each event and look at examples of how they could be handled.

Installing Your ServiceWorker

A version number is useful when updating the worker logic, allowing you to remove outdated cache entries during the activation step, as we’ll see a bit later. We’ll use the following version number as a prefix when creating cache stores.

var version = 'v1::';

You can use addEventListener to register an event handler for the install event. Using event.waitUntil(p) blocks the installation process on the provided promise p. If the promise is rejected because, for instance, one of the resources failed to download, the service worker won’t be installed. Here, you can leverage the promise returned from opening a cache with caches.open(name) and then mapping that into cache.addAll(resources), which downloads and stores responses for the provided resources.

self.addEventListener("install", function(event) {
  console.log('WORKER: install event in progress.');
  event.waitUntil(
    /* The caches built-in is a promise-based API that helps you cache responses,
       as well as finding and deleting them.
    */
    caches
      /* You can open a cache by name, and this method returns a promise. We use
         a versioned cache name here so that we can remove old cache entries in
         one fell swoop later, when phasing out an older service worker.
      */
      .open(version + 'fundamentals')
      .then(function(cache) {
        /* After the cache is opened, we can fill it with the offline fundamentals.
           The method below will add all resources we've indicated to the cache,
           after making HTTP requests for each of them.
        */
        return cache.addAll([
          '/',
          '/css/global.css',
          '/js/global.js'
        ]);
      })
      .then(function() {
        console.log('WORKER: install completed');
      })
  );
});

Once the install step succeeds, the activate event fires. This helps us phase out an older ServiceWorker, and we’ll look at it later. For now, let’s focus on the fetch event, which is a bit more interesting.

Intercepting Fetch Requests

The fetch event fires whenever a page controlled by this service worker requests a resource. This isn’t limited to fetch or even XMLHttpRequest: it covers even the request for the HTML page on first load, as well as JS and CSS resources, fonts, images, and so on. Note also that requests made against other origins will be caught by the fetch handler of the ServiceWorker. For instance, requests made against i.imgur.com (the CDN for a popular image hosting site) would also be caught by our service worker, as long as the request originated on one of the clients (e.g. browser tabs) controlled by the worker.

Just like with install, we can block the fetch event by passing a promise to event.respondWith(p); when the promise fulfills, the worker responds with its value instead of performing the default action of going to the network. We can use caches.match to look for cached responses and return those instead of going to the network.

As described in the comments, here we’re using an “eventually fresh” caching pattern where we return whatever is stored on the cache but always try to fetch a resource again from the network regardless, to keep the cache updated. If the response we served to the user is stale, they’ll get a fresh response the next time they request the resource. If the network request fails, it’ll try to recover by attempting to serve a hardcoded Response.

self.addEventListener("fetch", function(event) {
  console.log('WORKER: fetch event in progress.');

  /* We should only cache GET requests. Other methods (POST, PUT, PATCH, etc.)
     should be handled on the client side, e.g. by dealing with failed requests there.
  */
  if (event.request.method !== 'GET') {
    /* If we don't block the event as shown below, then the request will go to
       the network as usual.
    */
    console.log('WORKER: fetch event ignored.', event.request.method, event.request.url);
    return;
  }
  /* Similar to event.waitUntil in that it blocks the fetch event on a promise.
     The fulfillment value will be used as the response, and rejection will end in
     an HTTP response indicating failure.
  */
  event.respondWith(
    caches
      /* This method returns a promise that resolves to a cache entry matching
         the request. Once the promise is settled, we can then provide a response
         to the fetch request.
      */
      .match(event.request)
      .then(function(cached) {
        /* Even if the response is in our cache, we go to the network as well.
           This pattern is known for producing "eventually fresh" responses,
           where we return cached responses immediately, and meanwhile pull
           a network response and store that in the cache.
           Read more:
           https://ponyfoo.com/articles/progressive-networking-serviceworker
        */
        var networked = fetch(event.request)
          // We handle the network request with success and failure scenarios.
          .then(fetchedFromNetwork, unableToResolve)
          // We should catch errors on the fetchedFromNetwork handler as well.
          .catch(unableToResolve);

        /* We return the cached response immediately if there is one, and fall
           back to waiting on the network as usual.
        */
        console.log('WORKER: fetch event', cached ? '(cached)' : '(network)', event.request.url);
        return cached || networked;

        function fetchedFromNetwork(response) {
          /* We copy the response before replying to the network request.
             This is the response that will be stored on the ServiceWorker cache.
          */
          var cacheCopy = response.clone();

          console.log('WORKER: fetch response from network.', event.request.url);

          caches
            // We open a cache to store the response for this request.
            .open(version + 'pages')
            .then(function add(cache) {
              /* We store the response for this request. It'll later become
                 available to caches.match(event.request) calls, when looking
                 for cached responses.
              */
              cache.put(event.request, cacheCopy);
            })
            .then(function() {
              console.log('WORKER: fetch response stored in cache.', event.request.url);
            });

          // Return the response so that the promise is settled in fulfillment.
          return response;
        }

        /* When this method is called, it means we were unable to produce a response
           from either the cache or the network. This is our opportunity to produce
           a meaningful response even when all else fails. It's the last chance, so
           you probably want to display a "Service Unavailable" view or a generic
           error response.
        */
        function unableToResolve () {
          /* There's a couple of things we can do here.
             - Test the Accept header and then return one of the `offlineFundamentals`
               e.g: `return caches.match('/some/cached/image.png')`
             - You should also consider the origin. It's easier to decide what
               "unavailable" means for requests against your origins than for requests
               against a third party, such as an ad provider
             - Generate a Response programmatically, as shown below, and return that
          */

          console.log('WORKER: fetch request failed in both cache and network.');

          /* Here we're creating a response programmatically. The first parameter is the
             response body, and the second one defines the options for the response.
          */
          return new Response('<h1>Service Unavailable</h1>', {
            status: 503,
            statusText: 'Service Unavailable',
            headers: new Headers({
              'Content-Type': 'text/html'
            })
          });
        }
      })
  );
});

There are several more strategies, some of which I discuss in an article about ServiceWorker strategies on my blog.
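To give a taste of one alternative, here’s a rough sketch (not code from the demo site) of a “network first, falling back to cache” approach, which is useful for content you’d rather have fresh whenever a connection is available:

self.addEventListener('fetch', function (event) {
  if (event.request.method !== 'GET') {
    return;
  }
  event.respondWith(
    // Try the network first so the user gets the freshest content...
    fetch(event.request)
      .then(function (response) {
        var copy = response.clone();
        // ...and keep a copy around for when the network is unavailable.
        // (Reuses the version prefix defined earlier.)
        caches.open(version + 'pages').then(function (cache) {
          cache.put(event.request, copy);
        });
        return response;
      })
      .catch(function () {
        // If the network fails, fall back to whatever we cached earlier,
        // or a minimal offline response if there is no cache entry.
        return caches.match(event.request).then(function (cached) {
          return cached || new Response('<h1>Offline</h1>', {
            status: 503,
            headers: { 'Content-Type': 'text/html' }
          });
        });
      })
  );
});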

As promised, let’s look at the code you can use to phase out older versions of your ServiceWorker script.

Phasing Out Older ServiceWorker Versions

The activate event fires after a service worker has been successfully installed. It is most useful when phasing out an older version of a service worker, as at this point you know that the new worker was installed correctly. In this example, we delete old caches that don’t match the version for the worker we just finished installing.

self.addEventListener("activate", function(event) {
  /* Just like with the install event, event.waitUntil blocks activate on a promise.
     Activation will fail unless the promise is fulfilled.
  */
  console.log('WORKER: activate event in progress.');

  event.waitUntil(
    caches
      /* This method returns a promise which will resolve to an array of available
         cache keys.
      */
      .keys()
      .then(function (keys) {
        // We return a promise that settles when all outdated caches are deleted.
        return Promise.all(
          keys
            .filter(function (key) {
              // Filter by keys that don't start with the latest version prefix.
              return !key.startsWith(version);
            })
            .map(function (key) {
              /* Return a promise that's fulfilled
                 when each outdated cache is deleted.
              */
              return caches.delete(key);
            })
        );
      })
      .then(function() {
        console.log('WORKER: activate completed.');
      })
  );
});

Reminder: there is a Simple Offline Site repo we’ve built for this. You can see the entire thing as a CodePen Project, and it’s even a full on demo website.


Making a Simple Site Work Offline with ServiceWorker originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/serviceworker-for-offline/feed/ 14 210533
Moving to HTTPS on WordPress https://css-tricks.com/moving-to-https-on-wordpress/ https://css-tricks.com/moving-to-https-on-wordpress/#comments Fri, 06 Mar 2015 17:59:44 +0000 http://css-tricks.com/?p=197099 I just recently took CSS-Tricks “HTTPS everywhere”. That is, every URL on this site enforces the HTTPS (SSL) protocol. Non-secure HTTP requests get redirected to HTTPS. Here are some notes on that journey.

Why do it?

  • General security. When you


Moving to HTTPS on WordPress originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
I just recently took CSS-Tricks “HTTPS everywhere”. That is, every URL on this site enforces the HTTPS (SSL) protocol. Non-secure HTTP requests get redirected to HTTPS. Here are some notes on that journey.

Why do it?

  • General security. When you enforce HTTPS, you’re guaranteeing that information passed between the server and client can’t be read or tampered with along the way. That’s great for a site like this that has a login system and accepts credit cards in some places and things like that.
  • The site as it is intended. I’ve heard of examples like hotel WiFi systems and even ISPs that mess with HTTP traffic and do things like insert their own advertising code. Can’t do that over HTTPS.
  • SEO. Google says you’ll rank higher. Horse’s mouth.
  • Prereq. I can’t seem to find any good links on this, but I’m under the assumption that HTTPS is required for some/all of the stuff for SPDY / HTTP/2 – which everyone agrees is awesome and super fast. I want to make sure I’m ready so I can start moving forward on that.
  • Geek cred. Duh.

1. Get an SSL certificate

Not optional. This is how it works. I’ve done this a bunch of times in my life and it’s never overly comfortable. You just need to follow instructions closely. I’ve bought them in the past from discount providers and actually had it work fine, but it’s a more manual process.

CSS-Tricks is happily on Media Temple, and they provide SSL as a service, and I’m more than happy to pay for that to have it installed painlessly by professionals.

When the SSL certificate is installed properly, you should be able to visit your site (any URL) at either HTTP or HTTPS and have it come up fine. There may be errors on HTTPS though, and we’ll get to that.

2. Start with the Admin

In WordPress-land, you might as well get HTTPS going in the admin area first. It’s set up to handle it and there probably won’t be any errors. (I keep saying “errors”; I mostly mean “mixed content warnings”, which I promise we’ll get to.)

To force HTTPS in the admin area, put this line in your wp-config.php file at the root of your WordPress install:

define('FORCE_SSL_ADMIN', true);

Make sure you test that HTTPS is working properly first! Go to https://yoursite.com/wp-admin/ to check. Otherwise you’ll be forcing URLs that don’t work and that’s bad. If you have trouble, remove that line right away.

If all goes well, you’ll get a secure connection:

3. Try to get one page working on the front end

The next step is to get your front end on HTTPS. Forcing it all right away is probably going to be tough, so just start with one target page. For me, it was the signup page for The Lodge. That page can take credit cards, so really, it had to be HTTPS. This was the motivator for me early on to get this set up.

There is a plugin that can help with this: WordPress HTTPS (SSL). With that plugin, you get a checkbox on Posts/Pages to force it to be SSL.

4. Mop up Mixed Content Warnings

What you’re really trying to avoid is this:

Mixed Content Warning

That’s like: “Hey nice trying being HTTPS but you aren’t fully so NO GREEN LOCK FOR YOU!”

If you open the console, you’ll likely get messaging like this:

In this case, it was some images being used in a CodePen embed with an HTTP src.

But it could be anything. HTTP <script>s, HTTP CSS <link>s, HTTP <iframe>s. Anything that ends up making a request over plain HTTP instead of HTTPS will trigger the warning.

You just need to fix them. All.

5. Protocol Relative URLs! (or just relative URLs)

You know, those ones that start with //, like this:

<img src="//example.com/image.jpg" alt="image">

Those are your friend. They will load that resource with whatever protocol the current page is. And links that are just relative to begin with will be fine, like:

<img src="/images/image.jpg" alt="image">

I ended up finding them all over the place. I even had Custom Fields to fix:

6. Harder problem: images within legacy content

There are thousands and thousands of pages on this site, and lots and lots of images within those pages. Right in the content itself. There are a bunch on this very page you’re looking at. The problem is those images had fully qualified HTTP links on them.

I didn’t relish the idea of using some WordPress filter on the content to hot-swap those URLs out – I just wanted to fix the issue. I hired Jason Witt to help me with this. The first thing we did was run some SQL on the database to fix the URLs, essentially rewriting the src of images to be protocol relative.

After backing up the database, and testing it locally, this worked great:

UPDATE wp_posts 
SET    post_content = ( Replace (post_content, 'src="http://', 'src="//') )
WHERE  Instr(post_content, 'jpeg') > 0 
        OR Instr(post_content, 'jpg') > 0 
        OR Instr(post_content, 'gif') > 0 
        OR Instr(post_content, 'png') > 0;

And another to catch single-quoted ones, just in case:

UPDATE wp_posts 
SET   post_content = ( Replace (post_content, "src='http://", "src='//") )
WHERE  Instr(post_content, 'jpeg') > 0 
        OR Instr(post_content, 'jpg') > 0 
        OR Instr(post_content, 'gif') > 0 
        OR Instr(post_content, 'png') > 0;

We even fixed the Custom Fields in much the same way:

UPDATE wp_postmeta 
SET meta_value=(REPLACE (meta_value, 'iframe src="http://','iframe src="//'));

7. Make sure new images are protocol relative

With the legacy content mopped up, we need to make sure new content stays OK. That means fixing the media uploader/inserter thingy so it inserts images with protocol relative URLs. Fortunately I already customize that to insert with <figure> tags, so it was just an adjustment of that. Jason ultimately helped me move a lot of my custom functions.php code into a plugin, so this is a little out of context, but I think you’ll get the picture and see the relevant filters and stuff:

class CTF_Insert_Figure {

  /**
   * Initialize the class
   */
  public function __construct() {
    add_filter( 'image_send_to_editor', array( $this, 'insert_figure' ), 10, 9 );
  }
  
  /**
     * Insert the figure tag for attached images in posts
     *
     * @since  1.0.0
     * @access public
     * @return string return custom output for inserted images in posts
     */
  public function insert_figure($html, $id, $caption, $title, $align, $url) {
    // remove protocol
    $url = str_replace(array('http://','https://'), '//', $url);
    $html5 = "<figure id='post-$id' class='align-$align media-$id'>";
    $html5 .= "<img src='$url' alt='$title' />";
    if ($caption) {
        $html5 .= "<figcaption>$caption</figcaption>";
    }
    $html5 .= "</figure>";
    return $html5;
  }
}

8. Your CDN needs SSL too

If you have a CDN set up (I use MaxCDN through W3 Total Cache), that means the CDN is a totally different server and URL and all that, and it needs to be able to serve over HTTPS as well. Fortunately MaxCDN handles it.

9. Start forcing HTTPS everywhere

This is what I do in the .htaccess file at the root:

# Force HTTPS
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

Once that is in place, you can turn off the plugin and remove the bit from the wp-config.php file, as those are redundant now.

You’ll also want to change the URL settings in General Settings to be https:// – so all the links in WordPress are constructed properly and redirects don’t need to come into play:

10. Keep mopping up

Inevitably after going HTTPS everywhere, you’ll find pages with mixed content warnings. You just need to keep investigating and fixing.
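One trick that helps with the hunt: paste a little script like the one below into the DevTools console on a page that’s missing its lock icon. It’s a rough helper I’m sketching here, not a bulletproof audit, and it simply lists resources that are still referenced over plain HTTP:

[].slice.call(document.querySelectorAll('img[src], script[src], iframe[src], link[rel="stylesheet"]'))
  .map(function (el) { return el.src || el.href; })
  .filter(function (url) { return url.indexOf('http://') === 0; })
  .forEach(function (url) { console.log('Still loading over HTTP:', url); });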

Good luck! (and let me know if I’ve fouled anything up)


Update! Speaking of mopping up… Content Security Policy (CSP) is a technology that might be incredibly helpful for you here. There is a directive for it, upgrade-insecure-requests, that can force the browser to use HTTPS for any HTTP resources it finds. A bit of a sledgehammer, but could be super useful.


Moving to HTTPS on WordPress originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/moving-to-https-on-wordpress/feed/ 36 197099