The Difference Between Web Sockets, Web Workers, and Service Workers


Web Sockets, Web Workers, Service Workers… these are terms you may have read or overheard. Maybe not all of them, but likely at least one of them. And even if you have a good handle on front-end development, there’s a good chance you need to look up what they mean. Or maybe you’re like me and mix them up from time to time. The terms all look and sound awfully similar, and it’s really easy to get them confused.

So, let’s break them down together and distinguish Web Sockets, Web Workers, and Service Workers. Not in the nitty-gritty sense where we do a deep dive and get hands-on experience with each one — more like a little helper to bookmark the next time you need a refresher.

Quick reference

We’ll start with a high-level overview for a quick compare and contrast.

Feature | What it is
Web Socket | Establishes an open and persistent two-way connection between the browser and server to send and receive messages over a single connection, triggered by events.
Web Worker | Allows scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread.
Service Worker | A type of Web Worker that creates a background service that acts as middleware for handling network requests between the browser and server, even in offline situations.

Web Sockets

A Web Socket is a two-way communication protocol. Think of this like an ongoing call between you and your friend that won’t end unless one of you decides to hang up. The only difference is that you are the browser and your friend is the server. The client sends a request to the server and the server responds by processing the client’s request and vice-versa.

Illustration of two women representing the browser and server, respectively. Arrows between them show the flow of communication in an active connection.

The communication is based on events. A WebSocket object is established and connects to a server, and messages passed between the client and server trigger events that send and receive them.

This means that when the initial connection is made, we have a client-server communication where a connection is initiated and kept alive until either the client or server chooses to terminate it by sending a CloseEvent. That makes Web Sockets ideal for applications that require continuous and direct communication between a client and a server. Most definitions I’ve seen call out chat apps as a common use case — you type a message, send it to the server, trigger an event, and the server responds with data without having to ping the server over and over again.
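Here’s a minimal sketch of that event-based flow on the client (the wss://example.com/chat endpoint is hypothetical):

// Open a persistent two-way connection (hypothetical endpoint)
const socket = new WebSocket("wss://example.com/chat");

// Fires once the connection is established
socket.addEventListener("open", () => {
  socket.send("Hello from the browser!");
});

// Fires whenever the server pushes data down the wire
socket.addEventListener("message", (event) => {
  console.log("From server:", event.data);
});

// Fires when either side terminates the connection
socket.addEventListener("close", (event) => {
  console.log("Connection closed with code", event.code);
});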

Consider this scenario: You’re on your way out and you decide to switch on Google Maps. You probably already know how Google Maps works, but if you don’t, it finds your location automatically after you connect to the app and keeps track of it wherever you go. It uses real-time data transmission to keep track of your location as long as this connection is alive. That’s a Web Socket establishing a persistent two-way conversation between the browser and server to keep that data up to date. A sports app with real-time scores might also make use of Web Sockets this way.

The big difference between Web Sockets and Web Workers (and, by extension as we’ll see, Service Workers) is that Web Sockets have direct access to the DOM. Whereas Web Workers (and Service Workers) run on separate threads, Web Sockets are part of the main thread, which gives them the ability to manipulate the DOM.

There are tools and services to help establish and maintain Web Socket connections, including: SocketCluster, AsyncAPI, cowboy, WebSocket King, Channels, and Gorilla WebSocket. MDN has a running list that includes other services.

More Web Sockets information

Web Workers

Consider a scenario where you need to perform a bunch of complex calculations while at the same time making changes to the DOM. JavaScript is single-threaded, and running more than one script at once might disrupt both the user interface you are trying to change and the complex calculation being performed.

This is where Web Workers come into play.

Web Workers allow scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread. That makes them great for enhancing the performance of applications that require intensive operations, since those operations can be performed in the background on separate threads without blocking the user interface from rendering. But they’re not so great at accessing the DOM because, unlike Web Sockets, a Web Worker runs outside the main thread in its own thread.

A Web Worker is created with a Worker object that executes a script file on its own thread. And when we talk about workers, they tend to fall into one of three types:

  • Dedicated Workers: A dedicated worker is only reachable by the script that created it. It still executes the tasks of a typical web worker, such as running scripts on a separate thread.
  • Shared Workers: A shared worker is the opposite of a dedicated worker. It can be accessed by multiple scripts and can practically perform any task that a web worker executes, as long as those scripts exist in the same domain as the worker.
  • Service Workers: A service worker acts as a network proxy between an app, the browser, and the server, allowing scripts to run even when the network goes offline. We’re going to get to this in the next section.
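As a rough sketch of the dedicated flavor, the main thread and the worker communicate by passing messages (both file names here are hypothetical):

// main.js — spin up a worker and hand off some heavy lifting
const worker = new Worker("worker.js");
worker.postMessage([1, 2, 3, 4]);

// Listen for the result without blocking the UI
worker.addEventListener("message", (event) => {
  console.log("Worker result:", event.data);
});

// worker.js — runs on its own thread, no DOM access here
self.addEventListener("message", (event) => {
  const sum = event.data.reduce((total, n) => total + n, 0);
  self.postMessage(sum);
});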

More Web Workers information

Service Workers

There are some things we have no control over as developers, and one of those things is a user’s network connection. Whatever network a user connects to is what it is. We can only do our best to optimize our apps so they perform the best they can on any connection that happens to be used.

Service Workers are one of the things we can do to progressively enhance an app’s performance. A service worker sits between the app, the browser, and the server, providing a secure connection that runs in the background on a separate thread, thanks to — you guessed it — Web Workers. As we learned in the last section, Service Workers are one of three types of Web Workers.

So, why would you ever need a service worker sitting between your app and the user’s browser? Again, we have no control over the user’s network connection. Say the connection gives out for some unknown reason. That would break communication between the browser and the server, preventing data from being passed back and forth. A service worker maintains the connection, acting as an async proxy that is capable of intercepting requests and executing tasks — even after the network connection is lost.

A gear cog icon labeled Service Worker in between a browser icon labeled client and a cloud icon labeled server.

This is the main driver of what’s often referred to as “offline-first” development. We can store assets in the local cache instead of the network, provide critical information if the user goes offline, prefetch things so they’re ready when the user needs them, and provide fallbacks in response to network errors. They’re fully asynchronous but, unlike Web Sockets, they have no access to the DOM since they run on their own threads.
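Even the setup hints at this split: a service worker is registered from the main thread but runs in its own worker context. A bare-bones sketch (sw.js stands in for whatever file holds your worker code):

// main thread — register the worker (sw.js is your service worker file)
if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/sw.js");
}

// sw.js — runs on its own thread, intercepting requests
self.addEventListener("fetch", (event) => {
  // Respond from cache, fall back to the network
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});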

The other big thing to know about Service Workers is that they intercept every single request and response from your app. As such, they have some security implications, most notably that they follow a same-origin policy. So, that means no running a service worker from a CDN or third-party service. They also require a secure HTTPS connection, which means you’ll need an SSL certificate for them to run.

More Service Workers information

Wrapping up

That’s a super high-level explanation of the differences (and similarities) between Web Sockets, Web Workers, and Service Workers. Again, the terminology and concepts are similar enough to mix one up with another, but hopefully, this gives you a better idea of how to distinguish them.

We kicked things off with a quick reference table. Here’s the same thing, but slightly expanded to draw thicker comparisons.

Feature | What it is | Multithreaded? | HTTPS? | DOM access?
Web Socket | Establishes an open and persistent two-way connection between the browser and server to send and receive messages over a single connection, triggered by events. | Runs on the main thread | Not required | Yes
Web Worker | Allows scripts to run in the background in separate threads to prevent scripts from blocking one another on the main thread. | Runs on a separate thread | Not required | No
Service Worker | A type of Web Worker that creates a background service that acts as middleware for handling network requests between the browser and server, even in offline situations. | Runs on a separate thread | Required | No

Making a Site Work Offline Using the VitePWA Plugin
The VitePWA plugin from Anthony Fu is a fantastic tool for your Vite-powered sites. It helps you add a service worker that handles:

  • offline support
  • caching assets and content
  • prompting the user when new content is available
  • …and other goodies!

We’ll walk through the concept of service workers together, then jump right into making one with the VitePWA plugin.

New to Vite? Check out my prior post for an introduction.

Table of Contents

  1. Service workers, introduced
  2. Versioning and manifests
  3. Our first service worker
  4. What about offline functionality?
  5. How service workers update
  6. A better way to update content
  7. Runtime caching
  8. Adding your own service worker content
  9. Wrapping up

Service workers, introduced

Before getting into the VitePWA plugin, let’s briefly talk about the Service Worker itself.

A service worker is a background process that runs on a separate thread in your web application. Service workers have the ability to intercept network requests and do… anything. The possibilities are surprisingly wide. For example, you could intercept requests for TypeScript files and compile them on the fly. Or you could intercept requests for video files and perform an advanced transcoding that the browser doesn’t currently support. More commonly though, a service worker is used to cache assets, both to improve a site’s performance and enable it to do something when it’s offline.

When someone first lands on your site, the service worker that the VitePWA plugin creates installs and caches all of your HTML, CSS, and JavaScript files by leveraging the Cache Storage API. The result is that, on subsequent visits to your site, the browser will load those resources from cache, rather than needing to make network requests. And even on the first visit to your site, since the service worker just pre-cached everything, the next place your user clicks will probably be pre-cached already, allowing the browser to completely bypass a network request.

Versioning and manifests

You might be wondering what happens with a service worker when your code is updated. If your service worker is caching, say, a foo.js file, and you modify that file, you want the service worker to pull down the updated version the next time a user visits the site.

But in practice you don’t have a foo.js file. Usually, a build system will create something like foo-ABC123.js, where “ABC123” is a hash of the file. If you update foo.js, the next deployment of your site may send over foo-XYZ987.js. How does the service worker handle this?

It turns out the Service Worker API is an extremely low-level primitive. If you’re looking for a native turnkey solution between it and the cache API, you’ll be disappointed. Basically, the creation of your service worker needs to be automated, in part, and connected to the build system. You’d need to see all the assets your build created, hard-code those file names into the service worker, have code to pre-cache them, and more importantly, keep track of the files that are cached.

If code updates, the service worker file also changes, containing the new filenames, complete with hashes. When a user makes their next visit to the app, the new service worker will need to install, and compare the new file manifest with the manifest that’s currently in cache, ejecting files that are no longer needed, while caching the new content.

This is an absurd amount of work and incredibly difficult to get right. While it can be a fun project, in practice you’ll want to use an established product to generate your service worker — and the best product around is Workbox, which is from the folks at Google.

Even Workbox is a bit of a low-level primitive. It needs detailed information about the files you’re pre-caching, which is buried in your build tool. This is why we use the VitePWA plugin. It uses Workbox under the hood, and configures it with all the info it needs about the bundles that Vite creates. Unsurprisingly, there are also webpack and Rollup plugins if you happen to prefer working with those bundlers.

Our first service worker

I’ll assume you already have a Vite-based site. If not, feel free to create one from any of the available templates.

First, we install the VitePWA plugin:

npm i vite-plugin-pwa

We’ll import the plugin in our Vite config:

import { VitePWA } from "vite-plugin-pwa"

Then we put it to use in the config as well:

plugins: [
  VitePWA()
]

We’ll add more options in a bit, but that’s all we need to create a surprisingly useful service worker. Now let’s register it somewhere in the entry of our application with this code:

import { registerSW } from "virtual:pwa-register";

if ("serviceWorker" in navigator) {
  // && !/localhost/.test(window.location)) {
  registerSW();
}

Don’t let the code that’s commented out throw you for a loop. It’s extremely important, in fact, as it prevents the service worker from running in development. We only want to install the service worker anywhere that’s not on the localhost where we’re developing, that is, unless we’re developing the service worker itself, in which case we can comment out that check (and revert before pushing code to the main branch).

Let’s go ahead and open a fresh browser, launch DevTools, navigate to the Network tab, and run the web app. Everything should load as you’d normally expect. The difference is that you should see a whole slew of network requests in DevTools.

A screenshot of DevTools listing all of the network requests for the current app using the VitePWA plugin. There are a total of 16 various JavaScript and CSS files.

That’s Workbox pre-caching the bundles. Things are working!

What about offline functionality?

So, our service worker is pre-caching all of our bundled assets. That means it will serve those assets from cache without even needing to hit the network. Does that mean our service worker could serve assets even when the user has no network access? Indeed, it does!

And, believe it or not, it’s already done. Give it a try by opening the Network tab in DevTools and telling Chrome to simulate offline mode, like this.

Screenshot of the DevTools UI to simulate an offline connection with the select menu open. The No throttling option is currently checked but the Offline option is highlighted in light blue.
The “No throttling” option is the default selection. Click that and select the “Offline” option to simulate an offline connection.

Let’s refresh the page. You should see everything load. Of course, if you’re running any network requests, you’ll see them hang forever since you’re offline. Even here, though, there are things you can do. Modern browsers ship with their own internal, persistent database called IndexedDB. There’s nothing stopping you from writing your own code to sync some data to there, then write some custom service worker code to intercept network requests, determine if the user is offline, and then serve equivalent content from IndexedDB if it’s in there.

But a much simpler option is to detect if the user is offline, show a message about being offline, and then bypass the data requests. This is a topic unto itself, which I’ve written about in much greater detail.
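The detection piece can be as simple as listening to the browser’s online and offline events. A sketch, where showOfflineBanner and hideOfflineBanner are hypothetical stand-ins for your own UI code:

// navigator.onLine gives us the current state on load
if (!navigator.onLine) {
  showOfflineBanner();
}

// The browser fires these events as connectivity changes
window.addEventListener("offline", () => showOfflineBanner());
window.addEventListener("online", () => hideOfflineBanner());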

Before showing you how to write and integrate your own service worker content, let’s take a closer look at our existing service worker. In particular, let’s see how it manages updating/changing content. This is surprisingly tricky and easy to mess up, even with the VitePWA plugin.

Before moving on, make sure you tell Chrome DevTools to put you back online.

How service workers update

Take a closer look at what happens to our site when we change the content. We’ll go ahead and remove our existing service worker, which we can do in the Application tab of DevTools, under Storage.

Screenshot showing the Storage panel of DevTools. The DevTools menu is a panel on the left and the app usage is displayed in a panel on the right, showing that 508 kilobytes of data total is used, where 392 kilobytes are cached and 16.4 are service workers. A button to clear site data is below the Usage stats with a deep blue label and a light gray background.

Click the “Clear site data” button to get a clean slate. While I’m at it, I’m going to remove most of the routes of my own site so there’s fewer resources, then let Vite rebuild the app.

Look in the generated sw.js to see the generated Workbox service worker. There should be a pre-cache manifest inside of it. Mine looks like this:

A dark mode screenshot showing a list of eight asset urls inside of a precacheAndRoute function.
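Since the screenshot doesn’t reproduce well here, the shape of that manifest is roughly this (file names and revision hashes invented for illustration; the real list had eight entries):

precacheAndRoute([
  { url: "index.html", revision: "3f0a1c9d2b8e4a6f7c5d0e1f2a3b4c5d" },
  { url: "assets/index.52ab9f03.js", revision: "b2c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3" },
  { url: "assets/settings.ccb080c2.js", revision: "9a0364b9e99bb480dd25e1f0284c8555" },
  { url: "assets/index.e8b0375c.css", revision: "098f6bcd4621d373cade4e832627b4f6" }
  // ...and so on for the rest of the bundles
]);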

If sw.js is minified, run it through Prettier to make it easier to read.

Now let’s run the site and see what’s in our cache.

Let’s focus on the settings.js file. Vite generated assets/settings.ccb080c2.js based on the hash of its contents. Workbox, being independent of Vite, generated its own hash of the same file. If that same file name were to be generated with different content, then a new service worker would be re-generated, with a different pre-cache manifest (same file, but different revision) and Workbox would know to cache the new version, and remove the old when it’s no longer needed.

Again, the filenames will always be different since we’re using a bundler that injects hash codes into our file names, but Workbox supports dev environments which don’t do that.

Since the time writing, the VitePWA plugin has been updated and no longer injects these revision hashes. If you’re attempting to follow along with the steps in this article, this specific step might be slightly different from your actual experience. See this GitHub issue for more context.

If we update our settings.js file, then Vite will create a new file in our build, with a new hash code, which Workbox will treat as a new file. Let’s see this in action. After changing the file and re-running the Vite build, our pre-cache manifest looks like this:
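The screenshot is missing here, but the relevant change is that the settings entry now points at a new file with a new revision (hashes invented again):

// before
{ url: "assets/settings.ccb080c2.js", revision: "9a0364b9e99bb480dd25e1f0284c8555" }
// after
{ url: "assets/settings.3096f24e.js", revision: "6512bd43d9caa6e02c990b0a82652dca" }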

Now, when we refresh the page, the prior service worker is still running and loading the prior file. Then, the new service worker, with the new pre-cache manifest is downloaded and pre-cached.

A DevTools screenshot showing a table of pre-cached assets processed by the VitePWA plugin and Workbox.
The new pre-cached manifest is displayed in the list of cached assets. Notice that both versions of our settings file are there (and both versions of a few other assets were affected as well): the old version, since that’s what’s still being run, and the new version, since the new service worker has pre-cached it.

Note the corollary here: our old content is still being served to the user since the old service worker is still running. The user is unable to see the change we just made, even if they refresh, because the service worker, by default, guarantees any and all tabs with this web app are running the same version. If you want the browser to show the updated version, close your tab (and any other tabs with the site), and re-open it.

The same DevTools screenshot of pre-cached assets, but now only displaying new assets instead of duplicates.
The cache should now only contain the new assets.

Workbox did all the legwork of making this all come out right! We did very little to get this going.

A better way to update content

It’s unlikely that you can get away with serving stale content to your users until they happen to close all their browser tabs. Fortunately, the VitePWA plugin offers a better way. The registerSW function accepts an object with an onNeedRefresh method. This method is called whenever there’s a new service worker waiting to take over. registerSW also returns a function that you can call to reload the page, activating the new service worker in the process.

That’s a lot, so let’s see some code:

if ("serviceWorker" in navigator) {
  // && !/localhost/.test(window.location) && !/lvh.me/.test(window.location)) {
  const updateSW = registerSW({
    onNeedRefresh() {
      Toastify({
        text: `<h4 style='display: inline'>An update is available!</h4>
               <br><br>
               <a class='do-sw-update'>Click to update and reload</a>  `,
        escapeMarkup: false,
        gravity: "bottom",
        onClick() {
          updateSW(true);
        }
      }).showToast();
    }
  });
}

I’m using the toastify-js library to show a toast UI component to let users know when a new version of the service worker is available and waiting. If the user clicks the toast, I call the function VitePWA gives me to reload the page, with the new service worker running.

A toast component screenshot with white text and a slight background gradient that goes from light blue on the left to bright blue on the right. It reads: an update is available! Click to update and reload.
Now when we have pending updates, a nice toast component pops up on the front end. Clicking it reloads the page with the new content in there.

One thing to remember here is that, after you deploy the code to show the toast, the toast component won’t show up the next time you load your site. That’s because the old service worker (the one before we added the toast component) is still running. That requires manually closing all tabs and re-opening the web app for the new service worker to take over. Then, the next time you update some code, the service worker should show the toast, prompting you to update.

Why doesn’t the service worker update when the page is refreshed? I mentioned earlier that refreshing the page does not update or activate the waiting service worker, so why does this work? Calling this method doesn’t only refresh the page, but it calls some low-level Service Worker APIs (in particular skipWaiting) as well, giving us the outcome we want.
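For the curious, the underlying pattern (which the plugin performs for us, so treat this as an illustrative sketch rather than VitePWA’s actual source) looks something like this:

// In the app: ask the waiting service worker to take over
navigator.serviceWorker.getRegistration().then((reg) => {
  if (reg && reg.waiting) {
    reg.waiting.postMessage({ type: "SKIP_WAITING" });
  }
});

// Reload once the new service worker has taken control
navigator.serviceWorker.addEventListener("controllerchange", () => {
  window.location.reload();
});

// In the service worker: act on that message
self.addEventListener("message", (event) => {
  if (event.data && event.data.type === "SKIP_WAITING") {
    self.skipWaiting();
  }
});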

Runtime caching

We’ve seen the bundle pre-caching we get for free with VitePWA for our build assets. What about caching any other content we might request at runtime? Workbox supports this via its runtimeCaching feature.

Here’s how. The VitePWA plugin can take an object, one property of which is workbox, which takes Workbox properties.

const getCache = ({ name, pattern }: any) => ({
  urlPattern: pattern,
  handler: "CacheFirst" as const,
  options: {
    cacheName: name,
    expiration: {
      maxEntries: 500,
      maxAgeSeconds: 60 * 60 * 24 * 365 * 2 // 2 years
    },
    cacheableResponse: {
      statuses: [200]
    }
  }
});
// ...

  plugins: [
    VitePWA({
      workbox: {
        runtimeCaching: [
          getCache({ 
            pattern: /^https:\/\/s3.amazonaws.com\/my-library-cover-uploads/, 
            name: "local-images1" 
          }),
          getCache({ 
            pattern: /^https:\/\/my-library-cover-uploads.s3.amazonaws.com/, 
            name: "local-images2" 
          })
        ]
      }
    })
  ],
// ...

I know, that’s a lot of code. But all it’s really doing is telling Workbox to cache anything it sees matching those URL patterns. The docs provide much more info if you want to get deep into specifics.

Now, after that update takes effect, we can see those resources being served by our service worker.

DevTools screenshot showing the resources that are loaded by the browser. There are four jpeg images.

And we can see the corresponding cache that was created.

DevTools screenshot showing the new cache instance that is stored in Cache Storage. It includes all of the cached images.

Adding your own service worker content

Let’s say you want to get advanced with your service worker. You want to add some code to sync data with IndexedDB, add fetch handlers, and respond with IndexedDB data when the user is offline (again, my prior post walks through the ins and outs of IndexedDB). But how do you put your own code into the service worker that Vite creates for us?

There’s another Workbox option we can use for this: importScripts.

VitePWA({
  workbox: {
    importScripts: ["sw-code.js"]
  }
})

Here, the service worker will request sw-code.js at runtime. In that case, make sure there’s an sw-code.js file that can be served by your application. The easiest way to achieve that is to put it in the public folder (see the Vite docs for detailed instructions).
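As a taste of what could go in there, a hypothetical sw-code.js might add its own fetch handling alongside the code Workbox generates:

// sw-code.js — pulled into the generated service worker via importScripts
self.addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);

  // Hypothetical: serve a fallback for API requests when the network fails
  if (url.pathname.startsWith("/api/")) {
    event.respondWith(
      fetch(event.request).catch(() =>
        // You'd read from IndexedDB here instead of returning a stub
        new Response(JSON.stringify({ offline: true }), {
          headers: { "Content-Type": "application/json" }
        })
      )
    );
  }
});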

If this file starts to grow to a size such that you need to break things up with JavaScript imports, make sure you bundle it to prevent your service worker from trying to execute import statements (which it may or may not be able to do). You can create a separate Vite build instead.

Wrapping up

At the end of 2021, CSS-Tricks asked a bunch of front-end folks what one thing someone can do to make their website better. Chris Ferdinandi suggested a service worker. Well, that’s exactly what we accomplished in this article, and it was relatively simple, wasn’t it? That’s thanks to the VitePWA plugin, with hat tips to Workbox and the Cache API.

Service workers that leverage the Cache API are capable of greatly improving the perf of your web app. And while it might seem a little scary or confusing at first, it’s nice to know we have tools like the VitePWA plugin to simplify things a great deal. Install the plugin and let it do the heavy lifting. Sure, there are more advanced things that a service worker can do, and VitePWA can be used for more complex functionality, but an offline site is a fantastic starting point!


Add a Service Worker to Your Site
One of the best things you can do for your website in 2022 is add a service worker, if you don’t have one in place already. Service workers give your website super powers. Today, I want to show you some of the amazing things that they can do, and give you a paint-by-numbers boilerplate that you can use to start using them on your site right away.

What are service workers?

A service worker is a special type of JavaScript file that acts like middleware for your site. Any request that comes from the site, and any response it gets back, first goes through the service worker file. Service workers also have access to a special cache where they can save responses and assets locally.

Together, these features allow you to…

  • Serve frequently accessed assets from your local cache instead of the network, reducing data usage and improving performance.
  • Provide access to critical information (or even your entire site or app) when the visitor goes offline.
  • Prefetch important assets and API responses so they’re ready when the user needs them.
  • Provide fallback assets in response to HTTP errors.

In short, service workers allow you to build faster and more resilient web experiences.

Unlike regular JavaScript files, service workers do not have access to the DOM. They also run on their own thread, and as a result, don’t block other JavaScript from running. Service workers are designed to be fully asynchronous.

Security

Because service workers intercept every request and response for your site or app, they have some important security limitations.

Service workers follow a same-origin policy.

You can’t run your service worker from a CDN or third party. It has to be hosted at the same domain as where it will be run.

Service workers only work on sites with an installed SSL certificate.

Many web hosts provide SSL certificates at no cost or for a small fee. If you’re comfortable with the command line, you can also install one for free using Let’s Encrypt.

There is an exception to the SSL certificate requirement for localhost testing, but you can’t run your service worker from the file:// protocol. You need to have a local server running.

Adding a service worker to your site or web app

To use a service worker, the first thing we need to do is register it with the browser. You can register a service worker using the navigator.serviceWorker.register() method. Pass in the path to the service worker file as an argument.

navigator.serviceWorker.register('sw.js');

You can run this in an external JavaScript file, but I prefer to run it directly in a script element inline in my HTML so that it runs as soon as possible.

Unlike other types of JavaScript files, service workers only work for the directory in which they exist (and any of its sub-directories). A service worker file located at /js/sw.js would only work for files in the /js directory. As a result, you should place your service worker file inside the root directory of your site.

While service workers have fantastic browser support, it’s a good idea to make sure the browser supports them before running your registration script.

if (navigator && navigator.serviceWorker) {
  navigator.serviceWorker.register('sw.js');
}

After the service worker installs, the browser can activate it. Typically, this only happens when…

  • there is no service worker currently active, or
  • all open tabs of the site are closed and the page is visited again (a refresh alone won’t release an old service worker).

The service worker won’t run or intercept requests until it’s activated.
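By default, a freshly activated service worker controls pages loaded after activation. If you want it to take over already-open, uncontrolled pages right away, a common (optional) pattern is to claim them in the activate event:

// Listen for the activate event
self.addEventListener('activate', function (event) {
  // Optional: take control of open pages without waiting for a refresh
  event.waitUntil(self.clients.claim());
});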

Listening for requests in a service worker

Once the service worker is active, it can start intercepting requests and running other tasks. We can listen for requests with self.addEventListener() and the fetch event.

// Listen for request events
self.addEventListener('fetch', function (event) {
  // Do stuff...
});

Inside the event listener, the event.request property is the request object itself. For ease, we can save it to the request variable.

Certain versions of the Chromium browser have a bug that throws an error if the page is opened in a new tab. Fortunately, there’s a simple fix from Paul Irish that I include in all of my service workers, just in case:

// Listen for request events
self.addEventListener('fetch', function (event) {

  // Get the request
  let request = event.request;

  // Bug fix
  // https://stackoverflow.com/a/49719964
  if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') return;

});

Once your service worker is active, every single request is sent through it, and will be intercepted with the fetch event.

Service worker strategies

Once your service worker is installed and activated, you can intercept requests and responses, and handle them in various ways. There are two primary strategies you can use in your service worker:

  1. Network-first. With a network-first approach, you pass along requests to the network. If the request isn’t found, or there’s no network connectivity, you then look for the request in the service worker cache.
  2. Offline-first. With an offline-first approach, you check for a requested asset in the service worker cache first. If it’s not found, you send the request to the network.

Network-first and offline-first approaches work in tandem. You will likely mix-and-match approaches depending on the type of asset being requested.

Offline-first is great for large assets that don’t change very often: CSS, JavaScript, images, and fonts. Network-first is a better fit for frequently updated assets like HTML and API requests.

Strategies for caching assets

How do you get assets into your browser’s cache? You’ll typically use two different approaches, depending on the types of assets.

  1. Pre-cache on install. Every site and web app has a set of core assets that are used on almost every page: CSS, JavaScript, a logo, favicon, and fonts. You can pre-cache these during the install event, and serve them using an offline-first approach whenever they’re requested.
  2. Cache as you browse. Your site or app likely has assets that won’t be accessed on every visit or by every visitor; things like blog posts and images that go with articles. For these assets, you may want to cache them in real-time as the visitor accesses them.

You can then serve those cached assets, either by default or as a fallback, depending on your approach.

Implementing network-first and offline-first strategies in your service worker

Inside a fetch event in your service worker, the request.headers.get('Accept') method returns the value of the request’s Accept header, which tells us the MIME type the browser expects back. We can use that to determine what type of file the request is for. MDN has a list of common files and their MIME types. For example, HTML files have a MIME type of text/html.

We can pass the type of file we’re looking for into the String.includes() method as an argument, and use if statements to respond in different ways based on the file type.

// Listen for request events
self.addEventListener('fetch', function (event) {

  // Get the request
  let request = event.request;

  // Bug fix
  // https://stackoverflow.com/a/49719964
  if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') return;

  // HTML files
  // Network-first
  if (request.headers.get('Accept').includes('text/html')) {
    // Handle HTML files...
    return;
  }

  // CSS & JavaScript
  // Offline-first
  if (request.headers.get('Accept').includes('text/css') || request.headers.get('Accept').includes('text/javascript')) {
    // Handle CSS and JavaScript files...
    return;
  }

  // Images
  // Offline-first
  if (request.headers.get('Accept').includes('image')) {
    // Handle images...
  }

});

Network-first

Inside each if statement, we use the event.respondWith() method to modify the response that’s sent back to the browser.

For assets that use a network-first approach, we use the fetch() method, passing in the request, to pass through the request for the HTML file. If it returns successfully, we’ll return the response in our callback function. This is the same behavior as not having a service worker at all.

If there’s an error, we can use Promise.catch() to modify the response instead of showing the default browser error message. We can use the caches.match() method to look for that page, and return it instead of the network response.

// Send the request to the network first
// If it's not found, look in the cache
event.respondWith(
  fetch(request).then(function (response) {
    return response;
  }).catch(function (error) {
    return caches.match(request).then(function (response) {
      return response;
    });
  })
);

Offline-first

For assets that use an offline-first approach, we’ll first check inside the browser cache using the caches.match() method. If a match is found, we’ll return it. Otherwise, we’ll use the fetch() method to pass the request along to the network.

// Check the cache first
// If it's not found, send the request to the network
event.respondWith(
  caches.match(request).then(function (response) {
    return response || fetch(request).then(function (response) {
      return response;
    });
  })
);

Pre-caching core assets

Inside an install event listener in the service worker, we can use the caches.open() method to open a service worker cache. We pass in the name we want to use for the cache, app, as an argument.

The cache is scoped and restricted to your domain. Other sites can’t access it, and if they have a cache with the same name the contents are kept entirely separate.

The caches.open() method returns a Promise. If a cache already exists with this name, the Promise will resolve with it. If not, it will create the cache first, then resolve.

// Listen for the install event
self.addEventListener('install', function (event) {
  event.waitUntil(caches.open('app'));
});

Next, we can chain a then() method to our caches.open() method with a callback function.

In order to add files to the cache, we need to request them, which we can do with the new Request() constructor. We can use the cache.add() method to add the file to the service worker cache. Then, we return the cache object.

We want the install event to wait until we’ve cached our file before completing, so let’s wrap our code in the event.waitUntil() method:

// Listen for the install event
self.addEventListener('install', function (event) {

  // Cache the offline.html page
  event.waitUntil(caches.open('app').then(function (cache) {
    cache.add(new Request('offline.html'));
    return cache;
  }));

});

I find it helpful to create an array with the paths to all of my core files. Then, inside the install event listener, after I open my cache, I can loop through each item and add it.

let coreAssets = [
  '/css/main.css',
  '/js/main.js',
  '/img/logo.svg',
  '/img/favicon.ico'
];

// On install, cache some stuff
self.addEventListener('install', function (event) {

  // Cache core assets
  event.waitUntil(caches.open('app').then(function (cache) {
    for (let asset of coreAssets) {
      cache.add(new Request(asset));
    }
    return cache;
  }));

});

Cache as you browse

Your site or app likely has assets that won’t be accessed on every visit or by every visitor; things like blog posts and images that go with articles. For these assets, you may want to cache them in real-time as the visitor accesses them. On subsequent visits, you can load them directly from cache (with an offline-first approach) or serve them as a fallback if the network fails (using a network-first approach).

When a fetch() method returns a successful response, we can use the Response.clone() method to create a copy of it.

Next, we can use the caches.open() method to open our cache. Then, we’ll use the cache.put() method to save the copied response to the cache, passing in the request and copy of the response as arguments. Because this is an asynchronous function, we’ll wrap our code in the event.waitUntil() method. This prevents the event from ending before we’ve saved our copy to cache. Once the copy is saved, we can return the response as normal.

Note: We use cache.put() instead of cache.add() because we already have a response. Using cache.add() would make another network call.

// HTML files
// Network-first
if (request.headers.get('Accept').includes('text/html')) {
  event.respondWith(
    fetch(request).then(function (response) {

      // Create a copy of the response and save it to the cache
      let copy = response.clone();
      event.waitUntil(caches.open('app').then(function (cache) {
        return cache.put(request, copy);
      }));

      // Return the response
      return response;

    }).catch(function (error) {
      return caches.match(request).then(function (response) {
        return response;
      });
    })
  );
}

Putting it all together

I’ve put together a copy-paste boilerplate for you on GitHub. Add your core assets to the coreAssets array, and register it on your site to get started.

If you do nothing else, this will be a huge boost to your site in 2022.

But there’s so much more you can do with service workers. There are advanced caching strategies for APIs. You can provide an offline page with critical information if a visitor loses their network connection. You can clean up bloated caches as the user browses.
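As a sketch of that last idea (cleaning up old caches), you might version your cache name and delete anything else during the activate event. The app-v2 name here is just an example:

let cacheName = 'app-v2';

// On activate, remove any caches that don't match the current version
self.addEventListener('activate', function (event) {
  event.waitUntil(
    caches.keys().then(function (keys) {
      return Promise.all(
        keys.filter(function (key) {
          return key !== cacheName;
        }).map(function (key) {
          return caches.delete(key);
        })
      );
    })
  );
});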

Jeremy Keith’s book, Going Offline, is a great primer on service workers. If you want to take things to the next level and dig into progressive web apps, Jason Grigsby’s book dives into the various strategies you can use.

And for a pragmatic deep dive you can complete in about an hour, I also have a course and ebook on service workers with lots of code examples and a project you can work on.


Creating Scheduled Push Notifications
Scheduled is the key word there — that’s a fairly new thing! When a push notification is scheduled (i.e. “Take your pill” or “You’ve got a flight in 3 hours”) that means it can be shown to the user even if they’ve gone offline. That’s an improvement from the past, where push notifications required the user to be online.

So how do scheduled push notifications work? There are four key parts we’re going to look at:

  • Registering a Service Worker
  • Adding and removing scheduled push notifications
  • Enhancing push notifications with action buttons
  • Handling push notifications in the Service Worker

First, a little background

Push notifications are a great way to inform site users that something important has happened and that they might want to open our (web) app again. With the Notifications API — in combination with the Push API and the HTTP Web Push Protocol — the web became an easy way to send a push notification from a server to an application and display it on a device.

You may have already seen this sort of thing evolve. For example, how often do you see some sort of alert to accept notifications from a website? While browser vendors are already working on solutions to make that less annoying (both Firefox and Chrome have outlined plans), Chrome 80 just started an origin trial for the new Notification Trigger API, which lets us create notifications triggered by different events rather than a server push alone. For now, however, time-based triggers are the only supported events we have. But other events, like geolocation-based triggers, are already planned.

Scheduling an event in JavaScript is pretty easy, but there is one problem. In our push notification scenario, we can’t be sure the application is running at the exact moment we want to show the notification. This means that we can’t just schedule it on an application layer. Instead, we’ll need to do it on a Service Worker level. That’s where the new API comes into play.

The Notification Trigger API is in an early feedback phase. You need to enable the #enable-experimental-web-platform-features flag in Chrome or you should register your application for an origin trial.

Also, the Service Worker API requires a secure connection over HTTPS. So, if you try it out on your machine, you’ll need to ensure that it’s served over HTTPS.

Setting things up

I created a very basic setup. We have one application.js file, one index.html file, and one service-worker.js file, as well as a couple of image assets.

/project-folder
├── index.html
├── application.js
├── service-worker.js
└── assets
    ├── badge.png
    └── icon.png

You can find the full example of a basic Notification Trigger API demo on GitHub.

Registering a Service Worker

First, we need to register a Service Worker. For now, it will do nothing but log that the registration was successful.

// service-worker.js
// listen to the install event
self.addEventListener('install', event => console.log('ServiceWorker installed'));

<!-- index.html -->
<script>
  if ('serviceWorker' in navigator) {
    navigator.serviceWorker.register('/service-worker.js');
  }
</script>

Setting up the push notification

Inside our application, we need to ask for the user’s permission to show notifications. From there, we’ll get our Service Worker registration and register a new notification for this scope. So far, nothing new.

The cool part is the new showTrigger property. This lets us define the conditions for displaying a notification. For now, we want to add a new TimestampTrigger, which accepts a timestamp. And since everything happens directly on the device, it also works offline.

// application.js
document.querySelector('#notification-button').onclick = async () => {
  const reg = await navigator.serviceWorker.getRegistration();
  Notification.requestPermission().then(permission => {
    if (permission !== 'granted') {
      alert('you need to allow push notifications');
    } else {
      const timestamp = new Date().getTime() + 5 * 1000; // now plus 5000ms
      reg.showNotification(
        'Demo Push Notification',
        {
          tag: timestamp, // a unique ID
          body: 'Hello World', // content of the push notification
          showTrigger: new TimestampTrigger(timestamp), // set the time for the push notification
          data: {
            url: window.location.href, // pass the current url to the notification
          },
          badge: './assets/badge.png',
          icon: './assets/icon.png',
        }
      );
    }
  });
};

Handling the notification

Right now, the notification should show up at the specified timestamp. But now we need a way to interact with it, and that’s where we need the Service Worker notificationclick and notificationclose events.

Both events listen to the relevant interactions and both can use the full potential of the Service Worker. For example, we could open a new window:

// service-worker.js
self.addEventListener('notificationclick', event => {
  event.waitUntil(self.clients.openWindow('/'));
});

That’s a pretty straightforward example. But with the power of the Service Worker, we can do a lot more. Let’s check if the required window is already open and only open a new one if it isn’t.

// service-worker.js
self.addEventListener('notificationclick', event => {
  event.waitUntil(self.clients.matchAll().then(clients => {
    if (clients.length){ // check if at least one tab is already open
      clients[0].focus();
    } else {
      self.clients.openWindow('/');
    }
  }));
});

Notification actions

Another great way to facilitate interaction with users is to add predefined actions to the notifications. We could, for example, let them choose if they want to dismiss the notification or open the app.

// application.js
reg.showNotification(
  'Demo Push Notification',
  {
    tag: timestamp, // a unique ID
    body: 'Hello World', // content of the push notification
    showTrigger: new TimestampTrigger(timestamp), // set the time for the push notification
    data: {
      url: window.location.href, // pass the current url to the notification
    },
    badge: './assets/badge.png',
    icon: './assets/icon.png',
    actions: [
      {
        action: 'open',
        title: 'Open app',
      },
      {
        action: 'close',
        title: 'Close notification',
      }
    ]
  }
);

Now we handle those actions inside the Service Worker.

// service-worker.js
self.addEventListener('notificationclick', event => {
  if (event.action === 'close') {
    event.notification.close();
  } else {
    self.clients.openWindow('/');
  }
});

Cancelling push notifications

It’s also possible to cancel pending notifications. In this case, we need to get all pending notifications from the Service Worker and then close them before they are sent to the device.

// application.js
document.querySelector('#notification-cancel').onclick = async () => {
  const reg = await navigator.serviceWorker.getRegistration();
  const notifications = await reg.getNotifications({
    includeTriggered: true
  });
  notifications.forEach(notification => notification.close());
  alert(`${notifications.length} notification(s) cancelled`);
};

Communication

The last step is to set up the communication between the app and the Service Worker using the postMessage method on the Service Worker clients. Let’s say we want to notify the tab that’s already active that a push notification click happened.

// service-worker.js
self.addEventListener('notificationclick', event => {
  event.waitUntil(self.clients.matchAll().then(clients => {
    if(clients.length){ // check if at least one tab is already open
      clients[0].focus();
      clients[0].postMessage('Push notification clicked!');
    } else {
      self.clients.openWindow('/');
    }
  }));
});
// application.js
navigator.serviceWorker.addEventListener('message', event => console.log(event.data));

Summary

The Notification API is a very powerful feature for enhancing the mobile experience of web applications. Thanks to the arrival of the Notification Trigger API, it just got a very important improvement. The API is still under development, so now is the perfect time to play around with it and give feedback to the developers.

If you are working with Vue or React, I’d recommend you take a look at my own Progressive Web App demo. It includes a documented example using the Notification Trigger API for both frameworks.


Client-Side Image Editing on Mobile
Michael Scharnagl:

Ever wanted to easily convert an image to a grayscale image on your phone? I do sometimes, and that’s why I built a demo using the Web Share Target API to achieve exactly that.

For this I used the Service Worker way to handle the data. Once the data is received on the client, I use drawImage from canvas to draw the image in canvas, use the grayscale filter to convert it to a grayscale image and output the final image.
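In canvas terms, that grayscale step looks roughly like this (a sketch assuming an already-loaded img element, not Michael’s actual code):

const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
canvas.width = img.width;
canvas.height = img.height;

// Route the drawing through a grayscale filter
ctx.filter = 'grayscale(1)';
ctx.drawImage(img, 0, 0);

// Export the edited image
const grayscaleUrl = canvas.toDataURL('image/jpeg');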

So you “install” the little microsite like a PWA, then you natively “share” an image to it and it comes back edited. Clever. Chrome on Android only at the moment.

Reminds me of this “Browser Functions” idea in reverse. That was a server that did things a browser can do, this is a browser doing things a server normally does.

Going Buildless
I’m in a long distance relationship. That means I’m on a plane to England every few weeks, and every time I’m on that plane, I think about how nice it would be to read some Reddit posts. What I could do is find a Reddit app that lets me cache posts for offline (I’m sure there is one out there), or I could take the opportunity to write something myself and have fun using the latest and greatest technologies and web standards out there!

On top of that, there has been a lot of discussion around what I like to call going buildless, which I think is a really fascinating development in which production projects are created without using a build process (like a bundler).

This post is also a homage to a couple of awesome people in the web community who are making some great things possible. I’ll be linking to all that stuff as we move along. Do note that this won’t be a step-by-step tutorial, but if you want to check out the code, you can find the finished project on GitHub.

Our end result is a small PWA that caches Reddit posts for offline reading.

Let’s dive in and install a few dependencies

npm i @babel/core babel-loader @babel/preset-env @babel/preset-react webpack webpack-cli react react-dom redux react-redux html-webpack-plugin are-you-tired-yet html-loader webpack-dev-server

I’m kidding.

We’re not gonna use any of that.

We’re going to try and avoid as much tooling and dependencies as we can to keep the entry barrier low. What we will be using is:

  • LitElement – LitElement is our component model. It’s easy to use, lightweight, close to the metal, and leverages web components.
  • @vaadin/router – This is a really small (< 7kb) router that has an awesome developer experience that I cannot recommend enough.
  • @pika/web – This will help us get our modules together for easy development.
  • es-dev-server – This is a simple dev server for modern web development workflows, made by us at open-wc. Although any HTTP server will do, feel free to bring your own.

That’s it! We’ll also be using a few browser standards, namely: es modules, web components, import-maps, kv-storage and service-worker.

Let’s go ahead and install our dependencies:

npm i -S lit-element @vaadin/router
npm i -D @pika/web es-dev-server

We’ll also add a postinstall hook to our package.json that’s going to run Pika for us:

"scripts": {
  "start": "es-dev-server",
  "postinstall": "pika-web"
}

🐭 Pika

Pika is a project by Fred K. Schott that aims to bring that nostalgic 2014 simplicity to 2019 web development. Fred is up to all sorts of awesome stuff. For one, he made pika.dev, which lets you easily search for modern JavaScript packages on npm. He also recently gave his talk Reimagining the Registry at DinosaurJS 2019, which I highly recommend you watch.

Pika takes things even one step further. If we run pika-web, it’ll install our dependencies as single JavaScript files to a new web_modules/ directory. If your dependency exports an ES “module” entrypoint in its package.json manifest, Pika supports it. If you have any transitive dependencies, Pika will create separate chunks for any shared code among your dependencies.

What this means is that, in our case, our output will look something like this:

└─ web_modules/
    ├─ lit-element.js
    └─ @vaadin
        └─ router.js

Sweet! That’s it. We have our dependencies ready to go as single JavaScript module files, and this is going to make things really convenient for us later on in this post, so stay tuned!

📥 Import maps

Alright! Now that we’ve got our dependencies sorted out, let’s get to work. We’ll make an index.html that’ll look something like this:

<html>
  <!-- head, etc. -->
  <body>
    <reddit-pwa-app></reddit-pwa-app>
    <script src="./src/reddit-pwa-app.js" type="module"></script>
  </body>
</html>

And reddit-pwa-app.js:

import { LitElement, html } from 'lit-element';

class RedditPwaApp extends LitElement {

  // ...

  render() {
    return html`
      <h1>Hello world!</h1>
    `;
  }
}

customElements.define('reddit-pwa-app', RedditPwaApp);

We’re off to a great start. Let’s see how this looks in the browser so far, so let’s start our server, open the browser and… What’s this? An error?

Oh boy.

And we’ve barely even started. Alright, let’s take a look. The problem here is that our module specifiers are bare. They are bare module specifiers. What this means is that there are no paths specified and no file extensions; they’re just… pretty bare. Our browser has no idea what to do with this, so it’ll throw an error.

import { LitElement, html } from 'lit-element'; // <-- bare module specifier
import { Router } from '@vaadin/router'; // <-- bare module specifier

import { foo } from './bar.js'; // <-- not bare!
import { html } from 'https://unpkg.com/lit-html'; // <-- not bare!

Naturally, we could use some tools for this, like webpack, or rollup, or a dev server that rewrites the bare module specifiers to something meaningful to browsers, so we can load our imports. But that means we have to bring in a bunch of tooling, dive into configuration, and we’re trying to stay minimal here. We just want to write code! In order to solve this, we’re going to take a look at import maps.

Import maps are a new proposal that lets you control the behavior of JavaScript imports. Using an import map, we can control what URLs get fetched by JavaScript import statements and import() expressions, and reuse this mapping in non-import contexts. This is great for several reasons:

  • It allows our bare module specifiers to work.
  • It provides a fallback resolution so that import $ from "jquery"; can try to go to a CDN first, but fall back to a local version if the CDN server is down.
  • It enables polyfilling of (or other control over) built-in modules. (More on that later, hang on tight!)
  • Solves the nested dependency problem. (Go read that blog!)

Sounds pretty sweet, no? Import maps are currently available in Chrome 75+ behind a flag, and with that knowledge in mind, let’s go to our index.html, and add an import map to our <head>:

<head>
  <script type="importmap">
    {
      "imports": {
        "@vaadin/router": "/web_modules/@vaadin/router.js",
        "lit-element": "/web_modules/lit-element.js"
      }
    }
  </script>
</head>

If we go back to our browser, and refresh our page, we’ll have no more errors, and we should see our <h1>Hello world!</h1> on our screen.

Import maps are an incredibly interesting new standard, and definitely something you should keep your eyes on. If you’re interested in experimenting with them and generating your own import map based on a yarn.lock file, you can try our open-wc import-maps-generate package and play around. I’m really excited to see what people will develop in combination with import maps.

📡 Service Worker

Alright, we’re going to skip ahead in time a little bit. We’ve got our dependencies working, we have our router set up, and we’ve done some API calls to get the data from Reddit and display it on our screen. Going over all of the code is a bit out of scope for this post, but remember that you can find all the code in the GitHub repo if you want to read the implementation details.

Since we’re making this app so we can read Reddit threads on the airplane, it would be great if our application worked offline, and if we could somehow save some posts to read.

Service workers are a kind of JavaScript Worker that runs in the background. You can visualize it as sitting in between the web page, and the network. Whenever your web page makes a request, it goes through the service worker first. This means that we can intercept the request, and do stuff with it! For example, we can let the request go through to the network to get a response, and cache it when it returns so we can use that cached data later when we might be offline. We can also use a service worker to precache our assets. What this means is that we can precache any critical assets our application may need in order to work offline. If we have no network connection, we can simply fall back to the assets we cached, and still have a working (albeit offline) application.
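In code, that interception lives in the service worker’s fetch event. Here’s a minimal “network first, fall back to cache” sketch (the cache name is made up, and this isn’t the project’s exact code):

// sw.js: a minimal network-first sketch
self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        // A response body can only be read once, so clone it before caching
        const copy = response.clone();
        caches.open('reddit-pwa').then((cache) => cache.put(event.request, copy));
        return response;
      })
      // Offline? Fall back to whatever we cached earlier
      .catch(() => caches.match(event.request))
  );
});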

If you’re interested in learning more about Progressive Web Apps and service worker, I highly recommend you read The Offline Cookbook, by Jake Archibald, as well as this video tutorial series by Jad Joubran.

Let’s go ahead and implement a service worker. In our index.html, we’ll add the following snippet:

<script>
  if ('serviceWorker' in navigator) {
    window.addEventListener('load', () => {
      navigator.serviceWorker.register('./sw.js').then(() => {
        console.log('ServiceWorker registered!');
      }, (err) => {
        console.log('ServiceWorker registration failed: ', err);
      });
    });
  }
</script>

We’ll also add a sw.js file to the root of our project. We’re about to precache the assets of our app, and this is where Pika just made life really easy for us. Take a look at the install handler in the service worker file:

const CACHENAME = 'reddit-pwa-v1'; // an assumed name; the real project defines this elsewhere

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHENAME).then((cache) => {
      return cache.addAll([
        '/',
        './web_modules/lit-element.js',
        './web_modules/@vaadin/router.js',
        './src/reddit-pwa-app.js',
        './src/reddit-pwa-comment.js',
        './src/reddit-pwa-search.js',
        './src/reddit-pwa-subreddit.js',
        './src/reddit-pwa-thread.js',
        './src/utils.js',
      ]);
    })
  );
});

You’ll find that we’re totally in control of our assets, and we have a nice, clean list of files we need in order to work offline.
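To round that out (this part is my assumption, not code from the project), a matching activate handler can clean up stale caches whenever we bump CACHENAME:

self.addEventListener('activate', (event) => {
  event.waitUntil(
    caches.keys().then((keys) =>
      // Delete every cache that isn't the current CACHENAME
      Promise.all(keys.filter((key) => key !== CACHENAME).map((key) => caches.delete(key)))
    )
  );
});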

📴 Going offline

Right. Now that we’ve cached our assets to work offline, it would be excellent if we could actually save some posts to read while offline. Many roads lead to Rome, but since we’re living on the edge a little bit, we’re going to go with kv-storage!

📦 Built-in Modules

There are a few things to talk about here. Kv-storage is a built-in module. Built-in modules are very similar to regular JavaScript modules, except they ship with the browser. It’s good to note that while built-in modules ship with the browser, they are not exposed on the global scope, and are namespaced with std: (yes, really). This has a few advantages: they won’t add any overhead to starting up a new JavaScript runtime context (e.g. a new tab, worker, or service worker), they won’t consume any memory or CPU unless they’re actually imported, and they avoid naming collisions with existing code.

Other interesting, if somewhat controversial, built-in module proposals are the std-toast and std-switch elements.

🗃 Kv-storage

Alright, with that out of the way, let’s talk about kv-storage. Kv-storage (or “key value storage”) is layered on top of IndexedDB and fairly similar to localStorage, except for a few major differences.

The motivation for kv-storage is that localStorage is synchronous, which can lead to bad performance and syncing issues. It’s also limited exclusively to String key/value pairs. The alternative, IndexedDB, is… hard to use. The reason it’s so hard to use is that it predates promises, and this leads to a, well, pretty bad developer experience. Not fun. Kv-storage, however, is a lot of fun, asynchronous, and easy to use! Consider the following example:

import { storage, /* StorageArea */ } from "std:kv-storage";

(async () => {
  await storage.set("mycat", "Tom");
  console.log(await storage.get("mycat")); // Tom
})();

Notice how we’re importing from std:kv-storage? This import specifier is bare as well, but in this case it’s okay because it actually ships with the browser.

Pretty neat. This is perfect for adding a “save for offline” button: we simply store the JSON data for a Reddit thread, and get it back when we need it.

// reddit-pwa-thread.js:52:
const savedPosts = new StorageArea("saved-posts");

// ...

async saveForOffline() {
  await savedPosts.set(this.location.params.id, this.thread); // id of the post + thread as json
  this.isPostSaved = true;
}

So now, if we click the “save for offline” button and go to the DevTools “Application” tab, we can see a kv-storage:saved-posts store that holds the JSON data for this post:

And if we go back to our search page, we’ll have a list of saved posts with the post we just saved:

🔮 Polyfilling

Excellent. However, we’re about to run into another problem here. Living on the edge is fun, but also dangerous. The problem that we’re hitting here is that, at the time of writing, kv-storage is only implemented in Chrome behind a flag. That’s not great. Fortunately, there’s a polyfill available, and at the same time we get to show off yet another really useful feature of import maps: polyfilling!

First things first, let’s install the kv-storage-polyfill:

npm i -S kv-storage-polyfill

Note that our postinstall hook will run Pika for us again.

Let’s also add the following to our import map in our index.html:

<script type="importmap">
  {
    "imports": {
      "@vaadin/router": "/web_modules/@vaadin/router.js",
      "lit-element": "/web_modules/lit-element.js",
      "/web_modules/kv-storage-polyfill.js": [
        "std:kv-storage",
        "/web_modules/kv-storage-polyfill.js"
      ]
    }
  }
</script>

What happens here is that whenever /web_modules/kv-storage-polyfill.js is requested or imported, the browser will first try to see if std:kv-storage is available; however, if that fails, it’ll load /web_modules/kv-storage-polyfill.js instead.

So in code, if we import:

import { StorageArea } from '/web_modules/kv-storage-polyfill.js';

This is what will happen:

"/web_modules/kv-storage-polyfill.js": [                 // when I'm requested
    "std:kv-storage",                      // try me first!
  "/web_modules/kv-storage-polyfill.js"    // or fallback to me
]

🎉 Conclusion

And we should now have a simple, functioning PWA with minimal dependencies. There are a few nitpicks to this project that we could complain about, and they’d all likely be fair. For example, we probably could’ve gone without using Pika, but it does make life really easy for us. You could have made the same argument about adding a webpack configuration, but you’d have missed the point. The point here is to make a fun application, while using some of the latest features, drop some buzzwords, and have a low barrier for entry. As Fred Schott would say: “In 2019, you should use a bundler because you want to, not because you need to.”

If you’re interested in nitpicking, however, you can read this great discussion about using webpack vs. Pika vs. buildless, and you’ll get some great insights from Sean Larkin of the webpack core team himself, as well as Fred K. Schott, creator of Pika.

I hope you enjoyed this blog post, and I hope you learned something, or discovered some new interesting people to follow. There are lots of exciting developments happening in this space right now, and I hope I got you as excited about them as I am. If you have any questions, comments, feedback, or nitpicks, feel free to reach out to me on Twitter at @passle_ or @openwc, and don’t forget to check out open-wc.org 😉.

Honorable Mentions

I’d like to give a few shout-outs to some very interesting people who are doing great stuff that you may want to keep an eye on.


Going Buildless originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/going-buildless/feed/ 4 294820
Inline SVG… Cached https://css-tricks.com/inline-svg-cached/ https://css-tricks.com/inline-svg-cached/#comments Fri, 12 Apr 2019 14:28:38 +0000 http://css-tricks.com/?p=285846 I wrote that using inline <svg> icons makes for the best icon system. I still think that’s true. It’s the easiest possible way to drop an icon onto a page. No network request, perfectly styleable.

But inlining code has …


Inline SVG… Cached originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
I wrote that using inline <svg> icons makes for the best icon system. I still think that’s true. It’s the easiest possible way to drop an icon onto a page. No network request, perfectly styleable.

But inlining code has some drawbacks, one of which is that it doesn’t take advantage of caching. You’re making the browser read and process the same code over and over as you browse around. Not that big of a deal. There are much bigger performance fish to fry, right? But it’s still fun to think about more efficient patterns.

Scott Jehl wrote that just because you inline something doesn’t mean you can’t cache it. Let’s see if Scott’s idea can extend to SVG icons.

Starting with inline SVG

Like this…

<!DOCTYPE html>
<html lang="en">

<head>
  <title>Inline SVG</title>
  <link rel="stylesheet" href="/styles/style.css">
</head>

<body>

  ...
 
  <svg width="24" height="24" viewBox="0 0 24 24" class="icon icon-alarm" xmlns="http://www.w3.org/2000/svg">
    <path id="icon-alarm" d="M11.5,22C11.64,22 11.77,22 11.9,21.96C12.55,21.82 13.09,21.38 13.34,20.78C13.44,20.54 13.5,20.27 13.5,20H9.5A2,2 0 0,0 11.5,22M18,10.5C18,7.43 15.86,4.86 13,4.18V3.5A1.5,1.5 0 0,0 11.5,2A1.5,1.5 0 0,0 10,3.5V4.18C7.13,4.86 5,7.43 5,10.5V16L3,18V19H20V18L18,16M19.97,10H21.97C21.82,6.79 20.24,3.97 17.85,2.15L16.42,3.58C18.46,5 19.82,7.35 19.97,10M6.58,3.58L5.15,2.15C2.76,3.97 1.18,6.79 1,10H3C3.18,7.35 4.54,5 6.58,3.58Z"></path>
  </svg>

It’s weirdly easy to toss text into browser cache as a file

In the above HTML, the selector .icon-alarm will fetch us the entire chunk of SVG for that icon.

const iconHTML = document.querySelector(".icon-alarm").outerHTML;

Then we can plunk it into the browser’s cache like this:

if ("caches" in window) {
  caches.open('static').then(function(cache) {
    cache.put("/icons/icon-wheelchair.svg", new Response(
      iconHTML,
      { headers: {'Content-Type': 'image/svg+xml'} }
    ));
  }
}

See the file path /icons/icon-alarm.svg? That’s kinda just made up. But it really will be put in the cache at that location.
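If you want to prove that to yourself, you can ask the Cache API for it right back:

caches.match("/icons/icon-alarm.svg").then((response) => {
  if (response) {
    response.text().then((svg) => console.log(svg)); // the markup we stored
  }
});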

Let’s make sure the browser grabs that file out of the cache when it’s requested

We’ll register a Service Worker on our pages:

if (navigator.serviceWorker) {   
  navigator.serviceWorker.register('/sw.js', {
    scope: '/'
  });
}

The service worker itself will be quite small, just a cache matcher:

self.addEventListener("fetch", event => {
  let request = event.request;

  event.respondWith(
    caches.match(request).then(response => {
      return response || fetch(request);
    })
  );
});

But… we never request that file, because our icons are inline.

True. But what if other pages benefitted from that cache? For example, an SVG icon could be placed on the page like this:

<svg class="icon">
  <use xlink:href="/icons/icon-alarm.svg#icon-alarm" /> 
</svg>

Since /icons/icon-alarm.svg is sitting there ready in cache, the browser will happily pluck it out of cache and display it.

(I was kind of amazed this works. Edge doesn’t like <use> elements that link to files, but that’ll be over soon enough. Update: it’s over, Edge went Chromium.)

And even if the file isn’t in the cache, this still works, assuming we actually chuck this file on the file system as well, likely the result of some kind of “include” (I used Nunjucks on the demo).

But… <use> and inline SVG aren’t quite the same

True. What I like about the above is that it’s making use of the cache and the icons should render close to immediately. And there are some things you can style this way — for example, setting the fill on the parent icon should go through the shadow DOM that the <use> creates and colorize the SVG elements within.

Still, it’s not the same. The shadow DOM is a big barrier compared to inline SVG.

So, enhance them! We could asynchronously load a script that finds each SVG icon, Ajaxes for the SVG it needs, and replaces the <use> stuff…

const icons = document.querySelectorAll("svg.icon");

icons.forEach(icon => {
  const url = icon.querySelector("use").getAttribute("xlink:href"); // Might wanna look for href also
  fetch(url)
    .then(response => response.text())
    .then(data => {
      // This is probably a bit layout-thrashy. Someone smarter than me could probably fix that up.

      // Replace the <svg><use></svg> with inline SVG
      const newEl = document.createElement("span");
      newEl.innerHTML = data;
      icon.parentNode.replaceChild(newEl, icon);

      // Remove the <span>s
      const parent = newEl.parentNode;
      while (newEl.firstChild) parent.insertBefore(newEl.firstChild, newEl);
      parent.removeChild(newEl);
    });
});

Now, assuming this JavaScript executes correctly, this page has inline SVG available just like the original page did.

Demo & Repo


Inline SVG… Cached originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/inline-svg-cached/feed/ 12 285846
Service Worker Cookbook https://css-tricks.com/service-worker-cookbook/ Fri, 25 May 2018 20:54:50 +0000 http://css-tricks.com/?p=271633 I stumbled upon this site the other day from Mozilla that’s a collection of recipes to get started with a Service Worker — from caching strategies and notifications to providing an offline fallback to your users, this little cookbook has …


Service Worker Cookbook originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
I stumbled upon this site the other day from Mozilla that’s a collection of recipes to get started with a Service Worker — from caching strategies and notifications to providing an offline fallback to your users, this little cookbook has it all.

You can also check out our guide to making a simple site work offline and the offline site that resulted from it.



Service Worker Cookbook originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
271633
Going Offline https://css-tricks.com/going-offline/ Wed, 11 Apr 2018 13:55:41 +0000 http://css-tricks.com/?p=269325 Jeremy Keith has written a new book all about service workers and offline functionality that releases at the end of the month. The first chapter is posted on A List Apart. Now that the latest versions of iOS and …


Going Offline originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
Jeremy Keith has written a new book all about service workers and offline functionality that releases at the end of the month. The first chapter is posted on A List Apart. Now that the latest versions of iOS and macOS Safari support service workers, I can’t think of a better time to learn about how progressive web apps work under the hood. In fact, here’s an example of a simple offline site and a short series on making web apps work offline.

News of Jeremy’s book had me going back through his previous book, Resilient Web Design, where I half-remembered this super interesting quote from Chapter 4:

If you build something using web technologies, and someone visits with a web browser, you can’t be sure how many of the web technologies will be supported. It probably won’t be 100%. But it’s also unlikely to be 0%. Some people will visit with iOS devices. Others will visit with Android devices. Some people will get 80% or 90% of what you’ve designed. Others will get just 20%, 30%, or 50%. The web isn’t a platform. It’s a continuum.

I love this idea of the web as a continuum that’s constantly improving and growing over time and so I’m sure Jeremy’s latest book will be just as fun and interesting.



Going Offline originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
269325
Offline *Only* Viewing https://css-tricks.com/offline-only-viewing/ https://css-tricks.com/offline-only-viewing/#comments Thu, 08 Feb 2018 19:42:46 +0000 http://css-tricks.com/?p=266339 It made the rounds a while back that Chris Bolin built a page of his personal website that could only be viewed while you are offline. Now he has a whole magazine around this same concept called The Disconnect!


Offline *Only* Viewing originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
It made the rounds a while back that Chris Bolin built a page of his personal website that could only be viewed while you are offline.

This page itself is an experiment in that vein: What if certain content required us to disconnect? What if readers had access to that glorious focus that makes devouring a novel for hours at a time so satisfying? What if creators could pair that with the power of modern devices? Our phones and laptops are amazing platforms for inventive content—if only we could harness our own attention.

Now Bolin has a whole magazine around this same concept called The Disconnect!

The Disconnect is an offline-only, digital magazine of commentary, fiction, and poetry. Each issue forces you to disconnect from the internet, giving you a break from constant distractions and relentless advertisements.

I believe it’s some Service Worker trickery to serve different files depending on the state of the network. Usually, Service Workers are meant to serve cached files when the network is off or slow, so that the website continues to work. This flips that logic on its head, preventing files from being served until the network is off.
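I haven’t seen the magazine’s source, but the flipped logic could be as small as this sketch (the file names are hypothetical), where the real content is only served when the fetch fails:

// sw.js: hypothetical "offline-only" serving
self.addEventListener('fetch', (event) => {
  if (event.request.mode !== 'navigate') return;
  event.respondWith(
    fetch(event.request)
      // The network is up: nag the reader to disconnect instead
      .then(() => caches.match('/please-disconnect.html'))
      // The network is down: now the precached issue gets through
      .catch(() => caches.match('/issue.html'))
  );
});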


Offline *Only* Viewing originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/offline-only-viewing/feed/ 4 266339
Making your web app work offline, Part 2: The Implementation https://css-tricks.com/making-web-app-work-offline-part-2-implementation/ https://css-tricks.com/making-web-app-work-offline-part-2-implementation/#comments Thu, 07 Dec 2017 14:33:39 +0000 http://css-tricks.com/?p=263437 This two-part series is a gentle, high-level introduction to offline web development. In Part 1 we got a basic service worker running, which caches our application resources. Now let’s extend it to support offline.

Article Series:

  1. The Setup
  2. The Implementation


Making your web app work offline, Part 2: The Implementation originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
This two-part series is a gentle, high-level introduction to offline web development. In Part 1 we got a basic service worker running, which caches our application resources. Now let’s extend it to support offline.

Article Series:

  1. The Setup
  2. The Implementation (you are here!)

Making an `offline.htm` file

Next, let’s add some code to detect when the application is offline, and if so, redirect our users to a (cached) `offline.htm`.

But wait, if the service worker file is generated automatically, how do we go about adding in our own code, manually? Well, we can add an entry for importScripts, which tells our service worker to import the scripts we specify. It does this through the service worker’s native importScripts function, which is well-named. And we’ll also add our `offline.htm` file to our statically cached list of files. The new entries are shown below:

new SWPrecacheWebpackPlugin({
    mergeStaticsConfig: true,
    filename: "service-worker.js",
    importScripts: ["../sw-manual.js"], 
    staticFileGlobs: [
      //...
      "offline.htm"
    ],
    // the rest of the config is unchanged
  })

Now, let’s go in our `sw-manual.js` file, and add code to load the cached `offline.htm` file when the user is offline.

toolbox.router.get(/books$/, handleMain);
toolbox.router.get(/subjects$/, handleMain);
toolbox.router.get(/localhost:3000\/$/, handleMain);
toolbox.router.get(/mylibrary.io$/, handleMain);

function handleMain(request) {
  return fetch(request).catch(() => {
    return caches.match("react-redux/offline.htm", { ignoreSearch: true });
  });
}

We’ll use the toolbox.router object we saw before to catch all our top-level routes, and if the main page doesn’t load from the network, send back the (hopefully cached) `offline.htm` file.

This is one of the few times in this post you’ll see promises being used directly, instead of with the async syntax, mainly because in this case it’s actually easier to just tack on a .catch(), rather than set up a try{} catch{} block.

The `offline.htm` file will be pretty basic, just some HTML that reads cached books from IndexedDB and displays them in a rudimentary table. But before showing that, let’s walk through how to actually use IndexedDB (if you want to just see it now, it’s here).

Hello World, IndexedDB

IndexedDB is an in-browser database. It’s ideal for enabling offline functionality since it can be accessed without network connectivity, but it’s by no means limited to that.

The API pre-dates Promises, so it’s callback based. We’ll go through everything with the native API, but in practice, you’ll likely want to wrap and simplify it, either with your own helper methods which wrap the functionality with Promises, or with a third-party utility.
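As a taste of what such a wrapper might look like (my own helper, not part of IndexedDB):

// Wrap any IDBRequest in a promise, so it can be awaited
function promisify(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// usage: const book = await promisify(booksStore.get(id));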

Let me repeat: the API for IndexedDB is awful. Here’s Jake Archibald saying he wouldn’t even teach it directly:

We’ll still go over it because I really want you to see everything as it is, but please don’t let it scare you away. There are plenty of simplifying abstractions out there, for example dexie and idb.

Setting up our database

Let’s add code to sw-manual that subscribes to the service worker’s activate event and checks to see if we already have an IndexedDB set up; if not, we’ll create one, and then fill it with data.

First, the creating bit.

self.addEventListener("activate", () => {
  //1 is the version of IDB we're opening
  let open = indexedDB.open("books", 1);

  //should only be called the first time, when version 1 does not exist
  open.onupgradeneeded = evt => {
    let db = open.result;
    //this callback should only ever be called upon creation of our IDB, when an upgrade is needed
    //for version 1, but to be doubly safe, and also to demonstrate this, we'll check to see
    //if the stores exist
    if (!db.objectStoreNames.contains("books") || !db.objectStoreNames.contains("syncInfo")) {
      if (!db.objectStoreNames.contains("books")) {
        let bookStore = db.createObjectStore("books", { keyPath: "_id" });
        bookStore.createIndex("imgSync", "imgSync", { unique: false });
      }
      if (!db.objectStoreNames.contains("syncInfo")) {
        db.createObjectStore("syncInfo", { keyPath: "id" });
        evt.target.transaction
          .objectStore("syncInfo")
          .add({ id: 1, lastImgSync: null, lastImgSyncStarted: null, lastLoadStarted: +new Date(), lastLoad: null });
      }
      evt.target.transaction.oncomplete = fullSync;
    }
  };
});

The code’s messy and manual; as I said, you’ll likely want to add some abstractions in practice. Some of the key points: we check for the objectStores (tables) we’ll be using, and create them as needed. Note that we can even create indexes, which we can see on the books store, with the imgSync index. We also create a syncInfo store (table) which we’ll use to store information on when we last synced our data, so we don’t pester our servers too frequently, asking for updates.

When the transaction has completed, at the very bottom, we call the fullSync method, which loads all our data. Let’s see what that looks like.

Performing an initial sync

Below is the relevant portion of the syncing code, which makes repeated calls to our endpoint to load our books, page by page, adding each result to IDB along the way. Again, this is using zero abstractions, so expect a lot of bloat.

See this GitHub gist for the full code, which includes some additional error handling, and code which runs when the last page is finished.

function fullSyncPage(db, page) {
  let pageSize = 50;
  doFetch("/book/offlineSync", { page, pageSize })
    .then(resp => resp.json())
    .then(resp => {
      if (!resp.books) return;
      let books = resp.books;
      let i = 0;
      putNext();

      function putNext() { //callback for an insertion, with indicators it hasn't had images cached yet
        if (i < pageSize) {
          let book = books[i++];
          let transaction = db.transaction("books", "readwrite");
          let booksStore = transaction.objectStore("books");
          //extend the book with the imgSync indicated, add it, and on success, do this for the next book
          booksStore.add(Object.assign(book, { imgSync: 0 })).onsuccess = putNext;
        } else {
          //either load the next page, or call loadDone()
        }
      }
    });
}

The putNext() function is where the real work is done. It serves as the callback for each successful insertion. In real life we’d hopefully have a nice method that adds each book, wrapped in a promise, so we could do a simple for...of loop and await each insertion. But this is the “vanilla” solution, or at least one of them.
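For the curious, that nicer version might look something like this (addBookAsync is a hypothetical helper, not something in the repo):

// A promise-wrapped insert, so the sync loop can await each book
function addBookAsync(db, book) {
  return new Promise((resolve, reject) => {
    const store = db.transaction("books", "readwrite").objectStore("books");
    const request = store.add(Object.assign(book, { imgSync: 0 }));
    request.onsuccess = () => resolve();
    request.onerror = () => reject(request.error);
  });
}

// ...which would let the paging code read top to bottom:
// for (const book of books) {
//   await addBookAsync(db, book);
// }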

We modify each book before inserting it, to set the imgSync property to 0, to indicate that this book has not had its image cached, yet.

And after we’ve exhausted the last page, and there are no more results, we call loadDone(), to set some metadata indicating the last time we did a full data sync.

In real life, this would be a good time to sync all those images, but let’s instead do it on-demand by the web app itself, in order to demonstrate another feature of service workers.

Communicating between the web app and service worker

Let’s just pretend it would be a good idea to have the books’ covers load the next time the user visits our page when the service worker is running. Let’s have our web app send a message to the service worker, and we’ll have the service worker receive it, and then sync the book covers.

From our app code, we attempt to send a message to a running service worker, instructing it to sync images.

In the web app:

if ("serviceWorker" in navigator) {
  try {
    navigator.serviceWorker.controller.postMessage({ command: "sync-images" });
  } catch (er) {}
}

In `sw-manual.js`:

self.addEventListener("message", evt => {
  if (evt.data && evt.data.command == "sync-images") {
    let open = indexedDB.open("books", 1);

    open.onsuccess = evt => {
      let db = open.result;
      if (db.objectStoreNames.contains("books")) {
        syncImages(db);
      }
    };
  }
});

In sw-manual we have code to catch that message, and call the syncImages() method. Let’s look at that, next.

function syncImages(db) {
  let tran = db.transaction("books");
  let booksStore = tran.objectStore("books");
  let idx = booksStore.index("imgSync");
  let booksCursor = idx.openCursor(0);
  let booksToUpdate = [];

  //a cursor's onsuccess callback will fire for EACH item that's read from it
  booksCursor.onsuccess = evt => {
    let cursor = evt.target.result;
    //if (!cursor) means the cursor has been exhausted; there are no more results
    if (!cursor) return runIt();

    let book = cursor.value;
    booksToUpdate.push({ _id: book._id, smallImage: book.smallImage });
    //read the next item from the cursor
    cursor.continue();
  };

  async function runIt() {
    if (!booksToUpdate.length) return;

    for (let book of booksToUpdate) {
      try {
        //fetch, and cache the book's image 
        await preCacheBookImage(book);
        let tran = db.transaction("books", "readwrite");
        let booksStore = tran.objectStore("books");
        //now save the updated book - we'll wrap the IDB callback-based operation in
        //a manual promise, so we can await it
        await new Promise(res => {
          let req = booksStore.get(book._id);
          req.onsuccess = ({ target: { result: bookToUpdate } }) => {
            bookToUpdate.imgSync = 1;
            booksStore.put(bookToUpdate);
            res();
          };
          req.onerror = () => res();
        });
      } catch (er) {
        console.log("ERROR", er);
      }
    }
  }
}

We’re cracking open the imgSync index from before, and reading all books that have a zero, which means they haven’t had their images sync’d yet. The booksCursor.onsuccess will be called over and over again, until there are no books left; I’m using this to put them all into an array, at which point I call the runIt() method, which runs through them, calling preCacheBookImage() for each. This method will cache the image, and if there are no unforeseen errors, update the book in IDB to indicate that imgSync is now 1.

If you’re wondering why in the world I’m going through the trouble to save all the books from the cursor into an array, before calling runIt(), rather than just walking through the results of the cursor, and caching and updating as I go, well — it turns out transactions in IndexedDB are a bit weird. They complete when you yield to the event loop unless you yield to the event loop in a method provided by the transaction. So if we leave the event loop to go do other things, like make a network request to pull down an image, then the cursor’s transaction will complete, and we’ll get an error if we try to continue reading from it later.

Manually updating the cache

Let’s wrap this up and look at the preCacheBookImage method, which actually pulls down a cover image and adds it to the relevant cache (but only if it’s not there already).

async function preCacheBookImage(book) {
  let smallImage = book.smallImage;
  if (!smallImage) return;

  let cachedImage = await caches.match(smallImage);
  if (cachedImage) return;

  if (/https:\/\/s3.amazonaws.com\/my-library-cover-uploads/.test(smallImage)) {
    let cache = await caches.open("local-images1");
    let img = await fetch(smallImage, { mode: "no-cors" });
    await cache.put(smallImage, img);
  }
}

If the book has no image, we’re done. Next, we check if it’s cached already — if so, we’re done. Lastly, we inspect the URL, and figure out which cache it belongs in.

The local-images1 cache name is the same from before, which we set up in our dynamic cache. If the image in question isn’t already there, we fetch it, and add it to cache. Each cache operation returns a promise, so the async/await syntax simplifies things nicely.

Testing it out

The way it’s set up, if we clear our service worker either in dev tools or by just opening a fresh incognito window…

…then the first time we view our app, all our books will get saved to IndexedDB.

When we refresh, the image sync will happen. So if we start on a page that’s already pulling down these images, we’ll see our normal service worker saving them to cache (ahem, assuming we delay the ajax call to give our Service Worker a chance to install), which is what these events are in our network tab.

Then, if we navigate elsewhere and refresh, we won’t see any network requests for those images, since our sync method is already finding everything in cache.

If we clear our service workers again, and start on this same page, which is not otherwise pulling these images down, then refresh, we’ll see the network requests to pull down, and sync these images to cache.

Then if we navigate back to the page that uses these images, we won’t see the calls to cache these images, since they’re already cached; moreover, we’ll see these images being retrieved from cache by the service worker.

Both our runtimeCaching provided by sw-toolbox, and our own manual code are working together, off of the same cache.

It works!

As promised, here’s the `offline.htm` page:

<div style="padding: 15px">
  <h1>Offline</h1>
  <table class="table table-condescend table-striped">
    <thead>
      <tr>
        <th></th>
        <th>Title</th>
        <th>Author</th>
      </tr>
    </thead>
    <tbody id="booksTarget">
      <!--insertion will happen here-->
    </tbody>
  </table>
</div>
<script>
let open = indexedDB.open("books");
open.onsuccess = evt => {
  let db = open.result;
  let transaction = db.transaction("books", "readonly");
  let booksStore = transaction.objectStore("books");
  var request = booksStore.openCursor();
  let rows = ``;
  request.onsuccess = function(event) {
    var cursor = event.target.result;
    if(cursor) {
      let book = cursor.value;
      rows += `
        <tr>
          <td><img src="${book.smallImage}" /></td>
          <td>${book.title}</td>
          <td>${Array.isArray(book.authors) ? book.authors.join("<br/>") : book.authors}</td>
        </tr>`;
      cursor.continue();
    } else {
      document.getElementById("booksTarget").innerHTML = rows;
    }
  };
};
</script>

Now let’s tell Chrome to pretend to be offline, and test it out:

Cool!

Where to, from here?

We’re barely scratching the surface. Your users can update these data from multiple devices, and each one will need to keep in sync somehow. You could either periodically wipe your IDB tables and re-sync; have the user manually trigger a re-sync when they want; or you could get really ambitious and try to log all your mutations on your server, and have each service worker on each device request all changes that happened since the last time it ran, in order to sync up.

The most interesting solution here is PouchDB, which does this syncing for you; the catch is it’s designed to work with CouchDB, which you may or may not be using.

Syncing local changes

For one last piece of code, let’s consider an easier problem to solve: syncing your IndexedDB with changes that are made right this minute, by your user who’s using your web app. We can already intercept fetch requests in the service worker, so it should be easy to listen for the right mutation endpoint, run it, then peek at the results and update IndexedDB accordingly. Let’s take a look.

toolbox.router.post(/graphql/, request => {
  //just run the request as is
  return fetch(request).then(response => {
    //clone it by necessity 
    let respClone = response.clone();
    //do this later - get the response back to our user NOW
    setTimeout(() => {
      respClone.json().then(resp => {
        //this graphQL endpoint is for lots of things - inspect the data response to see
        //which operation we just ran
        if (resp && resp.data && resp.data.updateBook && resp.data.updateBook.Book) {
          syncBook(resp.data.updateBook.Book);
        }
      });
    }, 5);
    //return the response to our user NOW, before the IDB syncing
    return response;
  });
});

function syncBook(book) {
  let open = indexedDB.open("books", 1);

  open.onsuccess = evt => {
    let db = open.result;
    if (db.objectStoreNames.contains("books")) {
      let tran = db.transaction("books", "readwrite");
      let booksStore = tran.objectStore("books");
      booksStore.get(book._id).onsuccess = ({ target: { result: bookToUpdate } }) => {
        //update the book with the new values
        ["title", "authors", "isbn"].forEach(prop => (bookToUpdate[prop] = book[prop]));
        //and save it
        booksStore.put(bookToUpdate);
      };
    }
  };
}

This may seem a bit more involved than you were hoping. We can only read the fetch response once, and our application thread will also need to read it, so we’ll first clone the response. Then, we’ll run a setTimeout() so we can return the original response to the web application/user as quickly as possible, and do what we need thereafter. Don’t just rely on the promise in respClone.json() to do this, since promises use microtasks. I’ll let Jake Archibald explain what exactly that means, but the short of it is that they can starve the main event loop. I’m not quite smart enough to be certain whether that applies here, so I just went with the safe approach of setTimeout.

Since I’m using GraphQL, the responses are in a predictable format, and it’s easy to see if I just performed the operation I’m interested in, and if so I can re-sync the affected data.

Further reading

Literally everything here is explained in wonderful depth in this book by Tal Ater. If you’re interested in learning more, you can’t beat that as a learning resource.

For some more immediate, quick resources, here’s an MDN article on IndexedDB, as well as a service workers introduction and the offline cookbook, both from Google.

Parting thoughts

Giving your user useful things to do with your web app when they don’t even have network connectivity is an amazing new ability web developers have. As you’ve seen though, it’s no easy task. Hopefully this post has given you a realistic idea of what to expect, and a decent introduction to the things you’ll need to do to accomplish this.

Article Series:

  1. The Setup
  2. The Implementation (you are here!)

Making your web app work offline, Part 2: The Implementation originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/making-web-app-work-offline-part-2-implementation/feed/ 2 263437
Making your web app work offline, Part 1: The Setup https://css-tricks.com/making-your-web-app-work-offline-part-1/ https://css-tricks.com/making-your-web-app-work-offline-part-1/#comments Wed, 06 Dec 2017 14:57:27 +0000 http://css-tricks.com/?p=263322 This two-part series is a gentle introduction to offline web development. Getting a web application to do something while offline is surprisingly tricky, requiring a lot of things to be in place and functioning correctly. We’re going to cover all …


Making your web app work offline, Part 1: The Setup originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
This two-part series is a gentle introduction to offline web development. Getting a web application to do something while offline is surprisingly tricky, requiring a lot of things to be in place and functioning correctly. We’re going to cover all of these pieces from a high level, with working examples. This post is an overview, but there are plenty of more-detailed resources listed throughout.

Article Series:

  1. The Setup (you are here!)
  2. The Implementation

Basic approach

I’ll be making heavy use of JavaScript’s async/await syntax. It’s supported in all major browsers and Node, and greatly simplifies Promise-based code. The link above explains async well, but in a nutshell, it allows you to resolve a promise and access its value directly in code with await, rather than calling .then and accessing the value in the callback, which often leads to the dreaded “rightward drift.”
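A quick before-and-after (loadBooks() and syncToIndexedDB() are made-up stand-ins that return promises):

// .then() style: each step drifts rightward
loadBooks().then(books => {
  syncToIndexedDB(books).then(() => {
    console.log("done");
  });
});

// async/await: the same logic reads top to bottom
async function sync() {
  const books = await loadBooks();
  await syncToIndexedDB(books);
  console.log("done");
}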

What are we building?

We’ll be extending an existing booklist project to sync the current user’s books to IndexedDB, and create a simplified offline page that’ll show even when the user has no network connectivity.

Starting with a service worker

The one non-negotiable thing you need for offline development is a service worker. A service worker is a background process that can, among other things, intercept network requests; redirect them; short circuit them by returning cached responses; or execute them as normal and do custom things with the response, like caching.

Basic caching

Probably the first, most basic, yet high-impact thing you’ll do with a service worker is have it cache your application’s resources. Service workers and the caches they use are extremely low-level primitives; everything is manual. In order to properly cache your resources, you’ll need to fetch them and add them to a cache, but then you’ll also need to track changes to those resources: when one changes, you remove the prior version, then fetch and cache the new one.

In practice, this means your service worker code will need to be generated as part of a build step, which hashes your files, and generates a file that’s smart enough to record these changes between versions, and update caches as needed.
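Just to make the tedium concrete, here’s a hand-rolled sketch of the precache-and-update dance that the tooling below generates for you (the version string and file list are illustrative):

// Everything below must be regenerated whenever any asset changes
const VERSION = "v2"; // bump on every deploy

self.addEventListener("install", event => {
  event.waitUntil(
    caches.open(VERSION).then(cache =>
      cache.addAll(["/", "/static/bootstrap/css/bootstrap-booklist-build.css"])
    )
  );
});

self.addEventListener("activate", event => {
  // ...and remember to delete every cache from prior deploys
  event.waitUntil(
    caches.keys().then(keys =>
      Promise.all(keys.filter(k => k !== VERSION).map(k => caches.delete(k)))
    )
  );
});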

Abstractions to the rescue

This is extremely tedious and error-prone code that you’d likely never want to write yourself. Luckily some smart people have written abstractions to help, namely sw-precache and sw-toolbox, by the great people at Google. Note, Google has since deprecated these tools in favor of the newer Workbox. I’ve yet to move my code over since sw-* works so well, but in any event the ideas are the same, and I’m told the conversion is easy. And it’s worth mentioning that sw-precache currently has about 30,000 downloads per day, so it’s still widely used.

Hello World, sw-precache

Let’s jump right in. We’re using webpack, and as webpack goes, there’s a plugin, so let’s check that out first.

// inside your webpack config
new SWPrecacheWebpackPlugin({
  mergeStaticsConfig: true,
  filename: "service-worker.js",
  staticFileGlobs: [ //static resources to cache
    "static/bootstrap/css/bootstrap-booklist-build.css",
    ...
  ],
  ignoreUrlParametersMatching: /./,
  stripPrefixMulti: { //any paths that need adjusting
    "static/": "react-redux/static/", 
    ...
  },
  ...
})

By default ALL of the bundles webpack makes will be precached. We’re also manually providing some paths to static resources I want cached in the staticFileGlobs property, and I’m adjusting some paths in stripPrefixMulti.

// inside your webpack config
const getCache = ({ name, pattern, expires, maxEntries }) => ({
  urlPattern: pattern,
  handler: "cacheFirst",
  options: {
    cache: {
      maxEntries: maxEntries || 500,
      name: name,
      maxAgeSeconds: expires || 60 * 60 * 24 * 365 * 2 //2 years
    },
    successResponses: /0|[123].*/
  }
});

new SWPrecacheWebpackPlugin({
  ...
  runtimeCaching: [ //pulls in sw-toolbox and caches dynamically based on a pattern
    getCache({ pattern: /^https:\/\/images-na.ssl-images-amazon.com/, name: "amazon-images1" }),
    getCache({ pattern: /book\/searchBooks/, name: "book-search", expires: 60 * 7 }), //7 minutes
    ...
  ]
})

Adding the runtimeCaching section to our SWPrecacheWebpackPlugin pulls in sw-toolbox and lets us cache urls matching a certain pattern, dynamically, as needed—with getCache helping keep the boilerplate to a minimum.

Hello World, sw-toolbox

The entire service worker file that’s generated is pretty big, but let’s just look at a small piece, namely one of the dynamic caches from above:

toolbox.router.get(/^https:\/\/images-na.ssl-images-amazon.com/, toolbox.cacheFirst, {
  cache: { maxEntries: 500, name: "amazon-images1", maxAgeSeconds: 63072000 },
  successResponses: /0|[123].*/
});

sw-toolbox has provided us with a nice, high-level router object we can use to hook into various URL requests, MVC-style. We’ll use this to setup offline shortly.

Don’t forget to register the service worker

And, of course, the existence of the service worker file that’s generated above is of no use by itself; it needs to be registered. The code looks like this, but be sure to either have it inside an onload listener, or some other place that’ll be guaranteed to run after the page has loaded.

if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/service-worker.js");
}

There we have it! We got a basic service worker running, which caches our application resources. Tune in tomorrow when we extend it to support offline.

Article Series:

  1. The Setup (you are here!)
  2. The Implementation

Making your web app work offline, Part 1: The Setup originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/making-your-web-app-work-offline-part-1/feed/ 4 263322
Implementing Push Notifications: The Back End https://css-tricks.com/implementing-push-notifications-back-end/ https://css-tricks.com/implementing-push-notifications-back-end/#comments Wed, 23 Aug 2017 12:36:44 +0000 http://css-tricks.com/?p=259352 In the first part of this series we set up the front end with a Service Worker, a `manifest.json` file, and initialized Firebase. Now we need to create our database and watcher functions.

Article Series:

  1. Setting Up & Firebase
  2. The


Implementing Push Notifications: The Back End originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
In the first part of this series we set up the front end with a Service Worker, a `manifest.json` file, and initialized Firebase. Now we need to create our database and watcher functions.

Article Series:

  1. Setting Up & Firebase
  2. The Back End (You are here)

Creating a Database

Log into Firebase and click on Database in the navigation. Under Data you can manually add database references and see changes happen in real-time.

Make sure to adjust the rule set under Rules so you don’t have to fiddle with authentication during testing.

{
  "rules": {
    ".read": true,
    ".write": true
  }
}

Watching Database Changes with Cloud Functions

Remember the purpose of all this is to send a push notification whenever you publish a new blog post. So we need a way to watch for database changes in those data branches where the posts are being saved to.

With Firebase Cloud Functions we can automatically run backend code in response to events triggered by Firebase features.

Set up and initialize Firebase SDK for Cloud Functions

To start creating these functions we need to install the Firebase CLI. It requires Node v6.11.1 or later.

npm i firebase-tools -g

To initialize a project:

  1. Run firebase login
  2. Authenticate yourself
  3. Go to your project directory
  4. Run firebase init functions

A new folder called `functions` has been created. In there we have an `index.js` file in which we define our new functions.

Import the required Modules

We need to import the Cloud Functions and Admin SDK modules in `index.js` and initialize them.

const admin     = require('firebase-admin'),
      functions = require('firebase-functions')

admin.initializeApp(functions.config().firebase)

The Firebase CLI will automatically install these dependencies. If you wish to add your own, modify the `package.json`, run npm install, and require them as you normally would.

Set up the Watcher

We target the database and create a reference we want to watch. In our case, we save to a posts branch which holds post IDs. Whenever a new post ID is added or deleted, we can react to that.

exports.sendPostNotification = functions.database.ref('/posts/{postID}').onWrite(event => {
  // react to changes    
})

The name of the export, sendPostNotification, is for distinguishing all your functions in the Firebase backend.

All other code examples will happen inside the onWrite function.

Check for Post Deletion

If a post is deleted, we probably shouldn’t send a push notification. So we log a message and exit the function. The logs can be found in the Firebase Console under Functions → Logs.

First, we get the post ID and check if a title is present. If it is not, the post has been deleted.

const postID    = event.params.postID,
      postTitle = event.data.val()

if (!postTitle) return console.log(`Post ${postID} deleted.`)

Get Devices to show Notifications to

In the last article we saved a device token in the updateSubscriptionOnServer function to the database in a branch called device_ids. Now we need to retrieve these tokens to be able to send messages to them. We receive so-called snapshots, which are basically data references containing the tokens.

If no snapshots, and therefore no device tokens, can be retrieved, we log a message and exit the function, since we don’t have anybody to send a push notification to.

const getDeviceTokensPromise = admin.database()
  .ref('device_ids')
  .once('value')
  .then(snapshots => {

      if (!snapshots) return console.log('No devices to send to.')

      // work with snapshots  
})

Create the Notification Message

If snapshots are available, we need to loop over them and run a function for each of them which finally sends the notification. But first, we need to populate it with a title, body, and an icon.

const payload = {
  notification: {
    title: `New Article: ${postTitle}`,
    body: 'Click to read article.',
    icon: 'https://mydomain.com/push-icon.png'
  }
}

snapshots.forEach(childSnapshot => {
  const token = childSnapshot.val()

  admin.messaging().sendToDevice(token, payload).then(response => {
    // handle response
  })
})

Handle Send Response

In case we fail to send, or a token is invalid, we can remove it and log a message.

response.results.forEach(result => {
  const error = result.error

  if (error) {
    console.error('Failed delivery to', token, error)

    if (error.code === 'messaging/invalid-registration-token' ||
        error.code === 'messaging/registration-token-not-registered') {
      childSnapshot.ref.remove()
      console.info('Was removed:', token)
    }
  } else {
    console.info('Notification sent to', token)
  }
})

Deploy Firebase Functions

To upload our `index.js` to the cloud, we run the following command:

firebase deploy --only functions

Conclusion

Now when you add a new post, the subscribed users will receive a push notification to lead them back to your blog.

GitHub Repo Demo Site

Article Series:

  1. Setting Up & Firebase
  2. The Back End (You are here)

Implementing Push Notifications: The Back End originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
https://css-tricks.com/implementing-push-notifications-back-end/feed/ 1 259352
We built a PWA from scratch – This is what we learned https://css-tricks.com/built-pwa-scratch-learned/ Thu, 09 Feb 2017 14:15:18 +0000 http://css-tricks.com/?p=251328 I hadn’t considered the fact that if you’re fingerprinting your assets (e.g. style.987987090897.css) to take advantage of browser cache, you’ll need to update your Service Worker every time you do that. But I guess you’ve got a build step anyway, …


We built a PWA from scratch – This is what we learned originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
I hadn’t considered the fact that if you’re fingerprinting your assets (e.g. style.987987090897.css) to take advantage of browser cache, you’ll need to update your Service Worker every time you do that. But I guess you’ve got a build step anyway, so it can be updated in both places:

… we used a NodeJS module called Stacify to automatically create new version numbers in all the places when a file is changed.
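In other words, the build step that fingerprints style.css also has to rewrite the asset list inside the generated service worker. A minimal sketch of what that output might contain (the hash and file name are made up):

// service-worker.js, as emitted by the build step
const BUILD = "987987090897"; // injected by the build on every deploy

self.addEventListener("install", event => {
  event.waitUntil(
    caches.open(`static-${BUILD}`).then(cache =>
      cache.addAll([`/style.${BUILD}.css`])
    )
  );
});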



We built a PWA from scratch – This is what we learned originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
251328
Implementing “Save For Offline” with Service Workers https://css-tricks.com/implementing-save-offline-service-workers/ Tue, 31 Jan 2017 12:46:08 +0000 http://css-tricks.com/?p=250728 A straightforward tutorial by Una Kravets on caching assets and individually requested articles with Service Workers for offline reading.

I’m curious what the best practice will become. It’s possible that asking users to click something is it. Also possible: passively …


Implementing “Save For Offline” with Service Workers originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
A straightforward tutorial by Una Kravets on caching assets and individually requested articles with Service Workers for offline reading.

I’m curious what the best practice will become. It’s possible that asking users to click something is it. Also possible: passively caching articles based on recently published, currently viewing, or related to currently viewing.
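For what it’s worth, the passive approach could be just a few lines in the service worker. A sketch, with a made-up URL pattern:

// Cache articles passively as the reader visits them
self.addEventListener("fetch", event => {
  const url = new URL(event.request.url);
  if (!url.pathname.startsWith("/articles/")) return; // made-up pattern

  event.respondWith(
    caches.open("read-articles").then(cache =>
      fetch(event.request)
        .then(response => {
          cache.put(event.request, response.clone()); // save a copy for offline
          return response;
        })
        .catch(() => cache.match(event.request)) // offline: serve the saved copy
    )
  );
});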



Implementing “Save For Offline” with Service Workers originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

]]>
250728