fetch – CSS-Tricks

Reliably Send an HTTP Request as a User Leaves a Page

On several occasions, I’ve needed to send off an HTTP request with some data to log when a user does something like navigate to a different page or submit a form. Consider this contrived example of sending some information to an external service when a link is clicked:

<a href="/some-other-page" id="link">Go to Page</a>

<script>
document.getElementById('link').addEventListener('click', (e) => {
  fetch("/log", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    }, 
    body: JSON.stringify({
      some: "data"
    })
  });
});
</script>

There’s nothing terribly complicated going on here. The link is permitted to behave as it normally would (I’m not using e.preventDefault()), but before that behavior occurs, a POST request is triggered on click. There’s no need to wait for any sort of response. I just want it to be sent to whatever service I’m hitting.

At first glance, you might expect the dispatch of that request to be synchronous, after which we'd continue navigating away from the page while some other server successfully handles the request. But as it turns out, that's not always what happens.

Browsers don’t guarantee to preserve open HTTP requests

When something occurs to terminate a page in the browser, there’s no guarantee that an in-process HTTP request will be successful (see more about the “terminated” and other states of a page’s lifecycle). The reliability of those requests may depend on several things — network connection, application performance, and even the configuration of the external service itself.

As a result, sending data at those moments can be anything but reliable, which presents a potentially significant problem if you’re relying on those logs to make data-sensitive business decisions.

To help illustrate this unreliability, I set up a small Express application with a page using the code included above. When the link is clicked, the browser navigates to /other, but before that happens, a POST request is fired off.

While everything happens, I have the browser’s Network tab open, and I’m using a “Slow 3G” connection speed. Once the page loads and I’ve cleared the log out, things look pretty quiet:

Viewing HTTP request in the network tab

But as soon as the link is clicked, things go awry. When navigation occurs, the request is cancelled.

Viewing HTTP request fail in the network tab

And that leaves us with little confidence that the external service was actually able to process the request. To verify this behavior, note that the same cancellation occurs when we navigate programmatically with window.location:

document.getElementById('link').addEventListener('click', (e) => {
+ e.preventDefault();

  // Request is queued, but cancelled as soon as navigation occurs. 
  fetch("/log", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    }, 
    body: JSON.stringify({
      some: 'data'
    }),
  });

+ window.location = e.target.href;
});

Regardless of how or when navigation occurs and the active page is terminated, those unfinished requests are at risk of being abandoned.

But why are they cancelled?

The root of the issue is that, by default, XHR requests (via fetch or XMLHttpRequest) are asynchronous and non-blocking. As soon as the request is queued, the actual work of the request is handed off to a browser-level API behind the scenes.

As it relates to performance, this is good — you don’t want requests hogging the main thread. But it also means there’s a risk of them being deserted when a page enters into that “terminated” state, leaving no guarantee that any of that behind-the-scenes work reaches completion. Here’s how Google summarizes that specific lifecycle state:

A page is in the terminated state once it has started being unloaded and cleared from memory by the browser. No new tasks can start in this state, and in-progress tasks may be killed if they run too long.

In short, the browser is designed with the assumption that when a page is dismissed, there’s no need to continue to process any background processes queued by it.

So, what are our options?

Perhaps the most obvious approach to avoid this problem is, as much as possible, to delay the user action until the request returns a response. In the past, this has been done the wrong way by use of the synchronous flag supported within XMLHttpRequest. But using it completely blocks the main thread, causing a host of performance issues — I’ve written about some of this in the past — so the idea shouldn’t even be entertained. In fact, it’s on its way out of the platform (Chrome v80+ has already removed it).
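For illustration, that deprecated pattern looked something like this (shown only as what not to do):

// The old, blocking approach. Don't do this:
const xhr = new XMLHttpRequest();
xhr.open('POST', '/log', false); // `false` makes the request synchronous
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.send(JSON.stringify({ some: 'data' })); // blocks the main thread until a response arrives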

Instead, if you’re going to take this type of approach, it’s better to wait for a Promise to resolve as a response is returned. After it’s back, you can safely perform the behavior. Using our snippet from earlier, that might look something like this:

document.getElementById('link').addEventListener('click', async (e) => {
  e.preventDefault();

  // Wait for response to come back...
  await fetch("/log", {
    method: "POST",
    headers: {
      "Content-Type": "application/json"
    }, 
    body: JSON.stringify({
      some: 'data'
    }),
  });

  // ...and THEN navigate away.
   window.location = e.target.href;
});

That gets the job done, but there are some non-trivial drawbacks.

First, it compromises the user’s experience by delaying the desired behavior. Collecting analytics data certainly benefits the business (and hopefully future users), but it’s less than ideal to make your present users pay the cost to realize those benefits. Not to mention, as an external dependency, any latency or other performance issues within the service itself will be surfaced to the user. If a timeout in your analytics service prevents a customer from completing a high-value action, everyone loses.

Second, this approach isn’t as reliable as it initially sounds, since some termination behaviors can’t be programmatically delayed. For example, e.preventDefault() is useless in delaying someone from closing a browser tab. So, at best, it’ll cover collecting data for some user actions, but not enough to be able to trust it comprehensively.

Instructing the browser to preserve outstanding requests

Thankfully, there are options to preserve outstanding HTTP requests that are built into the vast majority of browsers, and that don’t require user experience to be compromised.

Using Fetch’s keepalive flag

If the keepalive flag is set to true when using fetch(), the corresponding request will remain open, even if the page that initiated that request is terminated. Using our initial example, that’d make for an implementation that looks like this:

<a href="/some-other-page" id="link">Go to Page</a>

<script>
  document.getElementById('link').addEventListener('click', (e) => {
    fetch("/log", {
      method: "POST",
      headers: {
        "Content-Type": "application/json"
      }, 
      body: JSON.stringify({
        some: "data"
      }), 
      keepalive: true
    });
  });
</script>

When that link is clicked and page navigation occurs, no request cancellation occurs:

Viewing HTTP request succeed in the network tab

Instead, we’re left with an (unknown) status, simply because the active page never waited around to receive any sort of response.

A one-liner like this is an easy fix, especially when it’s part of a commonly used browser API. But if you’re looking for a more focused option with a simpler interface, there’s another way with virtually the same browser support.

Using Navigator.sendBeacon()

The Navigator.sendBeacon() function is specifically intended for sending one-way requests (beacons). A basic implementation looks like this, sending a POST with stringified JSON and a “text/plain” Content-Type:

navigator.sendBeacon('/log', JSON.stringify({
  some: "data"
}));

But this API doesn’t permit you to send custom headers. So, in order for us to send our data as “application/json”, we’ll need to make a small tweak and use a Blob:

<a href="/some-other-page" id="link">Go to Page</a>

<script>
  document.getElementById('link').addEventListener('click', (e) => {
    const blob = new Blob([JSON.stringify({ some: "data" })], { type: 'application/json; charset=UTF-8' });
    navigator.sendBeacon('/log', blob);
  });
</script>

In the end, we get the same result — a request that’s allowed to complete even after page navigation. But there’s something more going on that may give it an edge over fetch(): beacons are sent with a low priority.

To demonstrate, here’s what’s shown in the Network tab when both fetch() with keepalive and sendBeacon() are used at the same time:

Viewing HTTP request in the network tab

By default, fetch() gets a “High” priority, while the beacon (noted as the “ping” type above) has the “Lowest” priority. For requests that aren’t critical to the functionality of the page, this is a good thing. Taken straight from the Beacon specification:

This specification defines an interface that […] minimizes resource contention with other time-critical operations, while ensuring that such requests are still processed and delivered to destination.

Put another way, sendBeacon() ensures its requests stay out of the way of those that really matter for your application and your user’s experience.
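Because beacons survive page termination, they also pair nicely with lifecycle events that e.preventDefault() can’t intercept, like a tab being hidden or closed. Here’s a minimal sketch, assuming the same /log endpoint from earlier, that fires a beacon whenever the page becomes hidden:

document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    // Same Blob trick as above to send JSON instead of text/plain
    const blob = new Blob([JSON.stringify({ some: 'data' })], { type: 'application/json; charset=UTF-8' });
    navigator.sendBeacon('/log', blob);
  }
});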

An honorable mention for the ping attribute

It’s worth mentioning that a growing number of browsers support the ping attribute. When attached to links, it’ll fire off a small POST request:

<a href="http://localhost:3000/other" ping="http://localhost:3000/log">
  Go to Other Page
</a>

And those request headers will contain the page on which the link was clicked (ping-from), as well as the href value of that link (ping-to):

headers: {
  'ping-from': 'http://localhost:3000/',
  'ping-to': 'http://localhost:3000/other',
  'content-type': 'text/ping',
  // ...other headers
},

It’s technically similar to sending a beacon, but has a few notable limitations:

  1. It’s strictly limited for use on links, which makes it a non-starter if you need to track data associated with other interactions, like button clicks or form submissions.
  2. Browser support is good, but not great. At the time of this writing, Firefox specifically doesn’t have it enabled by default.
  3. You’re unable to send any custom data along with the request. As mentioned, the most you’ll get is a couple of ping-* headers, along with whatever other headers are along for the ride.

All things considered, ping is a good tool if you’re fine with sending simple requests and don’t want to write any custom JavaScript. But if you need to send anything of more substance, it might not be the best thing to reach for.

So, which one should I reach for?

There are definitely tradeoffs to using either fetch with keepalive or sendBeacon() to send your last-second requests. To help discern which is the most appropriate for different circumstances, here are some things to consider:

You might go with fetch() + keepalive if:

  • You need to easily pass custom headers with the request.
  • You want to make a GET request to a service, rather than a POST.
  • You’re supporting older browsers (like IE) and already have a fetch polyfill being loaded.

But sendBeacon() might be a better choice if:

  • You’re making simple service requests that don’t need much customization.
  • You prefer the cleaner, more elegant API.
  • You want to guarantee that your requests don’t compete with other high-priority requests being sent in the application.
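
If you’d rather not make that choice at every call site, one option is a small wrapper that prefers sendBeacon() and falls back to fetch() with keepalive. Here’s a sketch under those assumptions (the sendLog name and the /log endpoint are just placeholders):

function sendLog(url, data) {
  const body = JSON.stringify(data);

  // Prefer the low-priority, fire-and-forget beacon when it's available
  if (navigator.sendBeacon) {
    const blob = new Blob([body], { type: 'application/json; charset=UTF-8' });
    return navigator.sendBeacon(url, blob);
  }

  // Otherwise, ask the browser to keep the fetch alive past page termination
  fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body,
    keepalive: true
  });
  return true;
}

sendLog('/log', { some: 'data' });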

Avoid repeating my mistakes

There’s a reason I chose to do a deep dive into the nature of how browsers handle in-process requests as a page is terminated. A while back, my team saw a sudden change in the frequency of a particular type of analytics log after we began firing the request just as a form was being submitted. The change was abrupt and significant — a ~30% drop from what we had been seeing historically.

Digging into the reasons this problem arose, as well as the tools that are available to avoid it again, saved the day. So, if anything, I’m hoping that understanding the nuances of these challenges helps someone avoid some of the pain we ran into. Happy logging!


How to Cancel Pending API Requests to Show Correct Data

I recently had to create a widget in React that fetches data from multiple API endpoints. As the user clicks around, new data is fetched and marshalled into the UI. But it caused some problems.

One problem quickly became evident: if the user clicked around fast enough, as previous network requests got resolved, the UI was updated with incorrect, outdated data for a brief period of time.

We can debounce our UI interactions, but that fundamentally does not solve our problem. Outdated network fetches will resolve and update our UI with wrong data up until the final network request finishes and updates our UI with the final correct state. The problem becomes more evident on slower connections. Furthermore, we’re left with useless network requests that waste the user’s data.
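(For reference, debouncing here just means waiting for a pause in the user’s input before firing the fetch. A minimal sketch, where updatePriceLimit is a hypothetical handler:)

// Run fn only after `delay` milliseconds have passed without a new call
function debounce(fn, delay = 300) {
  let handle;
  return (...args) => {
    clearTimeout(handle);
    handle = setTimeout(() => fn(...args), delay);
  };
}

const onPriceInput = debounce(updatePriceLimit); // updatePriceLimit is hypothetical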

Here is an example I built to illustrate the problem. It grabs game deals from Steam via the cool Cheap Shark API using the modern fetch() method. Try rapidly updating the price limit and you will see how the UI flashes with wrong data until it finally settles.

The solution

Turns out there is a way to abort pending DOM asynchronous requests using an AbortController. You can use it to cancel not only HTTP requests, but event listeners as well.

The AbortController interface represents a controller object that allows you to abort one or more Web requests as and when desired.

Mozilla Developer Network

The AbortController API is simple: it exposes an AbortSignal that we insert into our fetch() calls, like so:

const abortController = new AbortController()
const signal = abortController.signal
fetch(url, { signal })

From here on, we can call abortController.abort() to make sure our pending fetch is aborted.
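
One detail worth knowing: an aborted fetch() rejects with a DOMException named AbortError, so you’ll probably want to swallow that specific error instead of treating it as a failure. A sketch, continuing from the snippet above:

fetch(url, { signal })
  .then(res => res.json())
  .then(data => { /* update the UI */ })
  .catch(err => {
    if (err.name === 'AbortError') return; // expected when we cancel on purpose
    throw err; // anything else is a real failure
  });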

Let’s rewrite our example to make sure we are canceling any pending fetches and marshalling only the latest data received from the API into our app:

The code is mostly the same, with a few key distinctions:

  1. It creates a new cached variable, abortController, in a useRef in the <App /> component.
  2. For each new fetch, it initializes that fetch with a new AbortController and obtains its corresponding AbortSignal.
  3. It passes the obtained AbortSignal to the fetch() call.
  4. It aborts itself on the next fetch.
const App = () => {
 // Same as before, local variable and state declaration
 // ...

 // Create a new cached variable abortController in a useRef() hook
 const abortController = React.useRef()

 React.useEffect(() => {
  // If there is a pending fetch request with associated AbortController, abort
  if (abortController.current) {
    abortController.current.abort()
  }
  // Assign a new AbortController for the latest fetch to our useRef variable
  abortController.current = new AbortController()
  const { signal } = abortController.current

  // Same as before
  fetch(url, { signal }).then(res => {
    // Rest of our fetching logic, same as before
  })
 }, [
  abortController,
  sortByString,
  upperPrice,
  lowerPrice,
 ])
}

Conclusion

That’s it! We now have the best of both worlds: we debounce our UI interactions and we manually cancel outdated pending network fetches. This way, we are sure that our UI is updated once and only with the latest data from our API.


Using AbortController as an Alternative for Removing Event Listeners

The idea of an “abortable” fetch came to life in 2017 when AbortController was released. That gives us a way to bail on an API request initiated by fetch() — even multiple calls — whenever we want.

Here’s a super simple example using AbortController to cancel a fetch() request:

const controller = new AbortController();
const res = fetch('/', { signal: controller.signal });
controller.abort();
console.log(res); // => Promise(rejected): "DOMException: The user aborted a request"

You can really see its value when it’s used to put a modern, abortable interface on setTimeout. That way, making a fetch time out after, say, 10 seconds is pretty straightforward:

function timeout(duration, signal) {
  return new Promise((resolve, reject) => {
    const handle = setTimeout(resolve, duration);
    signal?.addEventListener('abort', e => {
      clearTimeout(handle);
      reject(new Error('aborted'));
    });
  });
}

// Usage
const controller = new AbortController();
const promise = timeout(10000, controller.signal);
controller.abort();
console.log(promise); // => Promise(rejected): "Error: aborted"
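
The helper above gives you a cancelable timer; to actually cut off a slow request, you can share one controller between a plain timer and the fetch. A minimal sketch (the /data endpoint is hypothetical):

const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), 10000); // give up after 10 seconds

fetch('/data', { signal: controller.signal })
  .then(res => res.json())
  .catch(err => {
    if (err.name === 'AbortError') console.log('Request timed out');
  })
  .finally(() => clearTimeout(timer)); // tidy up if the response won the race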

But the big news is that addEventListener now accepts an AbortSignal as of Chrome 88. What’s cool about that? It can be used as an alternative to removeEventListener:

const controller = new AbortController();
eventTarget.addEventListener('event-type', handler, { signal: controller.signal });
controller.abort();

What’s even cooler than that? Well, because AbortController is capable of aborting multiple cancelable requests at once, it streamlines the process of removing multiple listeners in one fell swoop. I’ve already found it particularly useful for drag and drop.

Here’s how I would have written a drag and drop script without AbortController, relying on two removeEventListener instances to wipe out two different events:

// With removeEventListener
el.addEventListener('mousedown', e => {
  if (e.buttons !== 1) return;

  const onMousemove = e => {
    if (e.buttons !== 1) return;
    /* work */
  }

  const onMouseup = e => {
    if (e.buttons & 1) return;
    window.removeEventListener('mousemove', onMousemove);
    window.removeEventListener('mouseup', onMouseup);
  }

  window.addEventListener('mousemove', onMousemove);
  window.addEventListener('mouseup', onMouseup); // Can’t use `once: true` here because we want to remove the event only when primary button is up
});

With the latest update, addEventListener accepts the signal property in its options argument, allowing us to call abort() once to stop all event listeners when they’re no longer needed:

// With AbortController
el.addEventListener('mousedown', e => {
  if (e.buttons !== 1) return;

  const controller = new AbortController();

  window.addEventListener('mousemove', e => {
    if (e.buttons !== 1) return;
    /* work */
  }, { signal: controller.signal });

  window.addEventListener('mouseup', e => {
    if (e.buttons & 1) return;
    controller.abort();
  }, { signal: controller.signal });
});

Again, Chrome 88 is currently the only place where addEventListener officially accepts an AbortSignal. While other major browsers, including Firefox and Safari, support AbortController, integrating its signal with addEventListener is a no go at the moment… and there are no signals (pun sorta intended) that they plan to work on it. That said, a polyfill is available.


WordPress-Powered Landing Pages on a Totally Different Site via Cloudflare Workers

What if you have some content on one site and want to display that content on another site? We can do this in the browser no problem. We can fetch it, and plunk it onto the page.

Ajax, right? Ugh. Now we’re in client-side rendered site territory, which isn’t great for performance, speed, or resiliency.

What if we could fetch that content and stitch it into the main page on the server side? Server side isn’t the right word for it though. What if we could do it at the global CDN level? Do it at the edge, as they say. That’s what we’ve been doing at CodePen, so we can build pages with the lovely WordPress block editor but serve them on our main site.

Hey, let’s create a functional calendar app with the JAMstack

I’ve always wondered how dynamic scheduling worked so I decided to do extensive research, learn new things, and write about the technical part of the journey. It’s only fair to warn you: everything I cover here is three weeks of research condensed into a single article. Even though it’s beginner-friendly, it’s a healthy amount of reading. So, please, pull up a chair, sit down and let’s have an adventure.

My plan was to build something that looked like Google Calendar but only demonstrate three core features:

  1. List all existing events on a calendar
  2. Create new events
  3. Schedule an email notification based on the date chosen during creation. The schedule should run some code to email the user when the time is right.

Pretty, right? Make it to the end of the article, because this is what we’ll make.

A calendar month view with a pop-up form for creating a new event as an overlay.

The only knowledge I had about asking my code to run at a later or deferred time was CRON jobs. The easiest way to use a CRON job is to statically define a job in your code. This is ad hoc — statically means that I cannot simply schedule an event like Google Calendar and easily have it update my CRON code. If you are experienced with writing CRON triggers, you feel my pain. If you’re not, you’re lucky; you might never have to use CRON this way.

To elaborate more on my frustration, I needed to trigger a schedule based on the payload of an HTTP request. The dates and information about the schedule would be passed in through the HTTP request. This means there’s no way to know things like the scheduled date beforehand.
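For a concrete picture, the body of such a request might look something like this (the field names match what the activity function destructures later in this article; the values are purely illustrative):

{
  "email": "user@example.com",
  "title": "Team standup",
  "startAt": "2019-07-10T09:00:00.000Z",
  "description": "Daily sync with the team"
}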

We (my colleagues and I) figured out a way to make this work and — with the help of Sarah Drasner’s article on Durable Functions — I understood what I needed to learn (and unlearn for that matter). You will learn about everything I worked on in this article, from event creation to email scheduling to calendar listings. Here is a video of the app in action:

You might notice the subtle delay. This has nothing to do with the execution timing of the schedule or running the code. I am testing with a free SendGrid account, which I suspect has some form of latency. You can confirm this by testing the serverless function responsible, without sending emails. You would notice that the code runs at exactly the scheduled time.

Tools and architecture

Here are the three fundamental units of this project:

  1. React Frontend: Calendar UI, including the UI to create, update or delete events.
  2. 8Base GraphQL: A back-end database layer for the app. This is where we will store, read and update our data. The fun part is you won’t write any code for this back end.
  3. Durable Functions: Durable Functions are a kind of serverless function with the power to remember their state from previous executions. This is what replaces CRON jobs and solves the ad hoc problem we described earlier.

See the Pen durable-func1 by Chris Nwamba (@codebeast) on CodePen.

The rest of this post will have three major sections based on the three units we saw above. We will take them one after the other, build them out, test them, and even deploy the work. Before we get on with that, let’s get set up using a starter project I made.

Project Repo

Getting Started

You can set up this project in different ways — either as a full-stack project with the three units in one project or as a standalone project with each unit living in its own root. Well, I went with the first because it’s more concise, easier to teach, and manageable since it’s one project.

The app will be a create-react-app project and I made a starter for us to lower the barrier to set up. It comes with supplementary code and logic that we don’t need to explain since they are out of the scope of the article. The following are set up for us:

  1. Calendar component
  2. Modal and popover components for presenting event forms
  3. Event form component
  4. Some GraphQL logic to query and mutate data
  5. A Durable Serverless Function scaffold where we will write the schedulers

Tip: Each existing file that we care about has a comment block at the top of the document. The comment block tells you what is currently happening in the code file, plus a to-do section that describes what we are required to do next.

Start by cloning the starter from GitHub:

git clone -b starter --single-branch https://github.com/christiannwamba/calendar-app.git

Install the npm dependencies described in the root package.json file as well as the serverless package.json:

npm install
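
Since the serverless scaffold keeps its own package.json, you’ll want to repeat the install inside that folder as well (assuming the serverless folder name from the starter):

cd serverless
npm install
cd ..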

Orchestrated Durable Functions for scheduling

There are two words we need to get out of the way first before we can understand what this term is — orchestration and durable.

Orchestration was originally used to describe an assembly of well-coordinated events, actions, etc. It is heavily borrowed in computing to describe a smooth coordination of computer systems. The key word is coordinate. We need to put two or more units of a system together in a coordinated way.

Durable is used to describe anything that has the outstanding feature of lasting longer.

Put system coordination and long-lasting together, and you get Durable Functions. This is the most powerful feature of Azure’s Serverless Functions. Based on what we now know, Durable Functions have these two features:

  1. They can be used to assemble the execution of two or more functions and coordinate them so race conditions do not occur (orchestration).
  2. Durable Functions remember things. This is what makes it so powerful. It breaks the number one rule of HTTP: stateless. Durable functions keep their state intact no matter how long they have to wait. Create a schedule for 1,000,000 years into the future and a durable function will execute after one million years while remembering the parameters that were passed to it on the day of trigger. That means Durable Functions are stateful.

These durability features unlock a new realm of opportunities for serverless functions and that is why we are exploring one of those features today. I highly recommend Sarah’s article one more time for a visualized version of some of the possible use cases of Durable Functions.

I also made a visual representation of the behavior of the Durable Functions we will be writing today. Take this as an animated architectural diagram:

Shows the touch-points of a serverless system.

A data mutation from an external system (8Base) triggers the orchestration by calling the HTTP Trigger. The trigger then calls the orchestration function which schedules an event. When the time for execution is due, the orchestration function is called again but this time skips the orchestration and calls the activity function. The activity function is the action performer. This is the actual thing that happens e.g. “send email notification”.

Create orchestrated Durable Functions

Let me walk you through creating functions using VS Code. You need two things:

  1. An Azure account
  2. VS Code

Once you have both set up, you need to tie them together. You can do this using a VS Code extension and a Node CLI tool. Start with installing the CLI tool:


npm install -g azure-functions-core-tools

# OR

brew tap azure/functions
brew install azure-functions-core-tools

Next, install the Azure Function extension to have VS Code tied to Functions on Azure. You can read more about setting up Azure Functions from my previous article.


Now that you have all the setup done, let’s get into creating these functions. The functions we will be creating will map to the following folders.

Folder                  Function
schedule                Durable HTTP Trigger
scheduleOrchestrator    Durable Orchestration
sendEmail               Durable Activity

Start with the trigger.

  1. Click on the Azure extension icon and follow the image below to create the schedule function
    Shows the interface steps going from Browse to JavaScript to Durable Functions HTTP start to naming the function schedule.
  2. Since this is the first function, we chose the folder icon to create a function project. The icon after that creates a single function (not a project).
  3. Click Browse and create a serverless folder inside the project. Select the new serverless folder.
  4. Select JavaScript as the language. If TypeScript (or any other language) is your jam, please feel free.
  5. Select Durable Functions HTTP starter. This is the trigger.
  6. Name the first function as schedule

Next, create the orchestrator. This time, create a single function rather than a function project.

  1. Click on the function icon:
  2. Select Durable Functions orchestrator.
  3. Give it a name, scheduleOrchestrator and hit Enter.
  4. You will be asked to select a storage account. Orchestrator uses storage to preserve the state of a function-in-process.
  5. Select a subscription in your Azure account. In my case, I chose the free trial subscription.
  6. Follow the few remaining steps to create a storage account.

Finally, repeat the previous step to create an Activity. This time, the following should be different:

  • Select Durable Functions activity.
  • Name it sendEmail.
  • No storage account will be needed.

Scheduling with a durable HTTP trigger

The code in serverless/schedule/index.js does not need to be touched. This is what it looks like originally when the function is scaffolded using VS Code or the CLI tool.

const df = require("durable-functions");
module.exports = async function (context, req) {
  const client = df.getClient(context);
  const instanceId = await client.startNew(req.params.functionName, undefined, req.body);
  context.log(`Started orchestration with ID = '${instanceId}'.`);
  return client.createCheckStatusResponse(context.bindingData.req, instanceId);
};

What is happening here?

  1. We’re creating a durable orchestration client based on the context of the request.
  2. We’re calling the orchestrator using the client’s startNew() function. The orchestrator function name is passed as the first argument to startNew() via the params object. A req.body is also passed to startNew() as the third argument, which is forwarded to the orchestrator.
  3. Finally, we return a set of data that can be used to check the status of the orchestrator function, or even cancel the process before it’s complete.

The URL to call the above function would look like this:

http://localhost:7071/api/orchestrators/{functionName}

Where functionName is the name passed to startNew. In our case, it should be:

http://localhost:7071/api/orchestrators/scheduleOrchestrator

It’s also good to know that you can change how this URL looks.
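For instance, the api prefix comes from the function app’s host.json and can be changed or removed there. A sketch for Functions v2+ (this is my assumption; the article doesn’t cover it):

{
  "version": "2.0",
  "extensions": {
    "http": {
      "routePrefix": ""
    }
  }
}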

Orchestrating with a Durable Orchestrator

The HTTP trigger startNew call calls a function based on the name we pass to it. That name corresponds to the name of the function and folder that holds the orchestration logic. The serverless/scheduleOrchestrator/index.js file exports a Durable Function. Replace the content with the following:

const df = require("durable-functions");
module.exports = df.orchestrator(function* (context) {
  const input = context.df.getInput()
  // TODO -- 1
  
  // TODO -- 2
});

The orchestrator function retrieves the request body from the HTTP trigger using context.df.getInput().

Replace TODO -- 1 with the following line of code which might happen to be the most significant thing in this entire demo:

yield context.df.createTimer(new Date(input.startAt))

What this line does is use the Durable Functions API to create a timer based on the date passed in from the request body via the HTTP trigger.

When this function executes and gets here, it will trigger the timer and bail temporarily. When the schedule is due, it will come back, skip this line and call the following line which you should use in place of TODO -- 2.

return yield context.df.callActivity('sendEmail', input);

The function would call the activity function to send an email. We are also passing a payload as the second argument.

This is what the completed function would look like:

const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
  const input = context.df.getInput()
    
  yield context.df.createTimer(new Date(input.startAt))
    
  return yield context.df.callActivity('sendEmail', input);
});

Sending email with a durable activity

When a schedule is due, the orchestrator comes back to call the activity. The activity file lives in serverless/sendEmail/index.js. Replace what’s in there with the following:

const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env['SENDGRID_API_KEY']);

module.exports = async function(context) {
  // TODO -- 1
  const msg = {}
  // TODO -- 2
  return msg;
};

It currently imports SendGrid’s mailer and sets the API key. You can get an API Key by following these instructions.

I am setting the key in an environmental variable to keep my credentials safe. You can safely store yours the same way by creating a SENDGRID_API_KEY key in serverless/local.settings.json with your SendGrid key as the value:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<<AzureWebJobsStorage>",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "SENDGRID_API_KEY": "<<SENDGRID_API_KEY>"
  }
}

Replace TODO -- 1 with the following line:

const { email, title, startAt, description } = context.bindings.payload;

This pulls the event information out of the input that came from the orchestrator function. The input is attached to context.bindings. The binding name payload can be anything you like, so go to serverless/sendEmail/function.json and change the name value to payload:

{
  "bindings": [
    {
      "name": "payload",
      "type": "activityTrigger",
      "direction": "in"
    }
  ]
}

Next, update TODO -- 2 with the following block to send an email:

const msg = {
  to: email,
  from: { email: 'chris@codebeast.dev', name: 'Codebeast Calendar' },
  subject: `Event: ${title}`,
  html: `<h4>${title} @ ${startAt}</h4> <p>${description}</p>`
};
sgMail.send(msg);

return msg;

Here is the complete version:

const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env['SENDGRID_API_KEY']);

module.exports = async function(context) {
  const { email, title, startAt, description } = context.bindings.payload;
  const msg = {
    to: email,
    from: { email: 'chris@codebeast.dev', name: 'Codebeast Calendar' },
    subject: `Event: ${title}`,
    html: `<h4>${title} @ ${startAt}</h4> <p>${description}</p>`
  };
  sgMail.send(msg);

  return msg;
};

Deploying functions to Azure

Deploying functions to Azure is easy. It’s merely a click away from the VS Code editor. Click on the circled icon to deploy and get a deploy URL:

Still with me this far in? You’re making great progress! It’s totally OK to take a break here, nap, stretch or get some rest. I definitely did while writing this post.

Data and GraphQL layer with 8Base

My easiest description and understanding of 8Base is “Firebase for GraphQL.” 8Base is a database layer for any kind of app you can think of and the most interesting aspect of it is that it’s based on GraphQL.

The best way to describe where 8Base fits in your stack is to paint a picture of a scenario.

Imagine you are a freelance developer with a small-to-medium scale contract to build an e-commerce store for a client. Your core skills are on the web, so you are not very comfortable on the back end, though you can write a bit of Node.

Unfortunately, e-commerce requires managing inventories, order management, managing purchases, managing authentication and identity, etc. “Manage” at a fundamental level just means data CRUD and data access.

Instead of the redundant and boring process of creating, reading, updating, deleting, and managing access for entities in our back-end code, what if we could describe these business requirements in a UI? What if we could create tables that allow us to configure CRUD operations, auth and access? What if we had such help and could focus only on building front-end code and writing queries? Everything we just described is tackled by 8Base.

Here is an architecture of a back-end-less app that relies on 8Base as its data layer:

Create an 8Base table for events storage and retrieval

The first thing we need to do before creating a table is to create an account. Once you have an account, create a workspace that holds all the tables and logic for a given project.

Next, create a table, name the table Events and fill out the table fields.

We need to configure access levels. Right now, there’s nothing to hide from each user, so we can just turn on all access to the Events table we created:

Setting up Auth is super simple with 8base because it integrates with Auth0. If you have entities that need to be protected or want to extend our example to use auth, please go wild.

Finally, grab your endpoint URL for use in the React app:

Testing GraphQL queries and mutations in the playground

Just to be sure that we are ready to take the URL to the wild and start building the client, let’s first test the API with a GraphQL playground and see if the setup is fine. Click on the explorer.

Paste the following query in the editor.

query {
  eventsList {
    count
    items {
      id
      title
      startAt
      endAt
      description
      allDay
      email
    }
  }
}

I created some test data through the 8base UI, and I get the result back when I run the query:
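
The shape of the result mirrors the query. It looks something like this (the values here are illustrative, not real workspace data):

{
  "data": {
    "eventsList": {
      "count": 1,
      "items": [
        {
          "id": "event-1",
          "title": "Team standup",
          "startAt": "2019-07-10T09:00:00.000Z",
          "endAt": "2019-07-10T09:15:00.000Z",
          "description": "Daily sync with the team",
          "allDay": false,
          "email": "user@example.com"
        }
      ]
    }
  }
}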

You can explore the entire database using the schema document on the right end of the explore page.

Calendar and event form interface

The third (and last) unit of our project is the React App which builds the user interfaces. There are four major components making up the UI and they include:

  1. Calendar: A calendar UI that lists all the existing events
  2. Event Modal: A React modal that renders the EventForm component to create an event
  3. Event Popover: Popover UI to read a single event, update it using EventForm, or delete it
  4. Event Form: An HTML form for creating a new event

Before we dive right into the calendar component, we need to set up the React Apollo client. The React Apollo provider empowers you with tools to query a GraphQL data source using React patterns. The original provider allows you to use higher-order components or render props to query and mutate data. We will be using a wrapper around the original provider that allows you to query and mutate using React Hooks.

In src/index.js, import the React Apollo Hooks and the 8base client in TODO -- 1:

import { ApolloProvider } from 'react-apollo-hooks';
import { EightBaseApolloClient } from '@8base/apollo-client';

At TODO -- 2, configure the client with the endpoint URL we got in the 8base setup stage:

const URI = 'https://api.8base.com/cjvuk51i0000701s0hvvcbnxg';

const apolloClient = new EightBaseApolloClient({
  uri: URI,
  withAuth: false
});

Use this client to wrap the entire App tree with the provider on TODO -- 3:

ReactDOM.render(
  <ApolloProvider client={apolloClient}>
    <App />
  </ApolloProvider>,
  document.getElementById('root')
);

Showing events on the calendar

The Calendar component is rendered inside the App component, and it imports the BigCalendar component from npm. Then:

  1. We render Calendar with a list of events.
  2. We give Calendar a custom popover (EventPopover) component that will be used to edit events.
  3. We render a modal (EventModal) that will be used to create new events.

The only thing we need to update is the list of events. Instead of using a static array of events, we want to query 8base for all stored events.

Replace TODO -- 1 with the following line:

const { data, error, loading } = useQuery(EVENTS_QUERY);

Import the useQuery library from npm and the EVENTS_QUERY at the beginning of the file:

import { useQuery } from 'react-apollo-hooks';
import { EVENTS_QUERY } from '../../queries';

EVENTS_QUERY is exactly the same query we tested in 8base explorer. It lives in src/queries and looks like this:

export const EVENTS_QUERY = gql`
  query {
    eventsList {
      count
      items {
        id
        ...
      }
    }
  }
`;

Let’s add a simple error and loading handler on TODO -- 2:

if (error) return console.log(error);
if (loading)
  return (
    <div className="calendar">
      <p>Loading...</p>
    </div>
  );

Notice that the Calendar component uses the EventPopover component to render a custom event. You can also observe that the Calendar component file renders EventModal as well. Both components have been set up for you, and their only responsibility is to render EventForm.

Create, update and delete events with the event form component

The component in src/components/Event/EventForm.js renders a form. The form is used to create, edit or delete an event. At TODO -- 1, import useCreateUpdateMutation and useDeleteMutation:

import {useCreateUpdateMutation, useDeleteMutation} from './eventMutationHooks'
  • useCreateUpdateMutation: This mutation either creates or updates an event depending on whether the event already existed.
  • useDeleteMutation: This mutation deletes an existing event.

A call to any of these functions returns another function. The returned function can then serve as an event handler.

Now, go ahead and replace TODO -- 2 with a call to both functions:

const createUpdateEvent = useCreateUpdateMutation(
  payload,
  event,
  eventExists,
  () => closeModal()
);
const deleteEvent = useDeleteMutation(event, () => closeModal());

These are custom hooks I wrote to wrap the useMutation hook exposed by React Apollo Hooks. Each hook creates a mutation and passes the mutation variables to useMutation. The blocks that look like the following in src/components/Event/eventMutationHooks.js are the most important parts:

useMutation(mutationType, {
  variables: {
    data
  },
  update: (cache, { data }) => {
    const { eventsList } = cache.readQuery({
      query: EVENTS_QUERY
    });
    cache.writeQuery({
      query: EVENTS_QUERY,
      data: {
        eventsList: transformCacheUpdateData(eventsList, data)
      }
    });
    //..
  }
});

Call the Durable Function HTTP trigger from 8Base

We have spent quite some time building the serverless structure, data storage and UI layers of our calendar app. To recap: the UI sends data to 8base for storage, 8base saves the data and triggers the Durable Function HTTP trigger, the HTTP trigger kicks off the orchestration, and the rest is history. Currently, we are saving data with a mutation, but we are not calling the serverless function anywhere in 8base.

8base allows you to write custom logic, which is what makes it very powerful and extensible. Custom logic consists of simple functions that are called based on actions performed on the 8base database. For example, we can set up a function to be called every time a mutation occurs on a table. Let’s create one that is called when an event is created.

Start by installing the 8base CLI:

npm install -g 8base

On the calendar app project run the following command to create a starter logic:

8base init 8base

The 8base init command creates a new 8base logic project. You can pass it a directory name; in this case, we are naming the logic folder 8base — don’t get it twisted.

Trigger scheduling logic

Delete everything in 8base/src and create a triggerSchedule.js file in the src folder. Once you have done that, drop in the following into the file:

const fetch = require('node-fetch');

module.exports = async event => {
  const res = await fetch('<HTTP Trigger URL>', {
    method: 'POST',
    body: JSON.stringify(event.data),
    headers: { 'Content-Type': 'application/json' }
  })
  const json = await res.json();
  console.log(event, json)
  return json;
};

The information about the GraphQL mutation is available on the event object as data.

Replace <HTTP Trigger URL> with the URL you got after deploying your function. You can get the URL by going to the function in your Azure account and clicking “Copy URL.”

You also need to install the node-fetch module, which will grab the data from the API:

npm install --save node-fetch

8base logic configuration

The next thing to do is tell 8base which exact mutation or query needs to trigger this logic. In our case, a create mutation on the Events table. You can describe this information in the 8base.yml file:

functions:
  triggerSchedule:
    handler:
      code: src/triggerSchedule.js
    type: trigger.after
    operation: Events.create

In a sense, this is saying, when a create mutation happens on the Events table, please call src/triggerSchedule.js after the mutation has occurred.

We want to deploy all the things

Before anything can be deployed, we need to log in to the 8Base account, which we can do via the command line:

8base login

Then, let’s run the deploy command to send and set up the app logic in your workspace instance.

8base deploy

Testing the entire flow

To see the app in all its glory, click on one of the days of the calendar. You should get the event modal containing the form. Fill that out and put a future start date so we trigger a notification. Try a date more than 2-5 mins from the current time because I haven’t been able to trigger a notification any faster than that.

Yay, go check your email! The email should have arrived thanks to SendGrid. Now we have an app that allows us to create events and get notified with the details of the event submission.


Earth day, API’s and sunshine.

Cassie Evans showcases some really nifty web design ideas and explores the API provided by the company that her team over at Clearleft recently hired to cover their building’s roof with solar panels. Cassie outlines her journey designing a webpage that uses the API to populate some light data visualizations about the energy the building uses now that the solar panels are installed.

Here at Clearleft we’ve been taking small steps to reduce our environmental impact. In December 2018 we covered the roof of our home with solar panels.

With the first of the glorious summer sun starting to shine down on us, we started to ponder about what environmental impact they’d had over the last 5 months.

Luckily for us, our solar panels have an API, so we can not only find out that information, we can request it from SolarEdge and display it in our very own interface.

The post is a great practical look into using the Fetch API which is also something Zell Liew wrote up in thorough detail, covering the history of using APIs with JavaScript, handling errors and other weird things that might happen when working with it.

But, equally interesting and useful is reading through Cassie’s thought process as she sketches a wireframe for the page, researches how to use the Fetch API, and integrates animation into a lovely SVG illustration. It’s an exercise in both design and development that many of us can relate to but also learn from.

Oh, and while we’re on the topic of data visualizations, Dan Englishby posted something just today on the many ways of getting data into charts. Working with real-time APIs is covered there as well, and it’s a nice segue from Cassie’s post.

Using data in React with the Fetch API and axios

If you are new to React, and perhaps have only played with building to-do and counter apps, you may not yet have run across a need to pull in data for your app. There will likely come a time when you’ll need to do this, as React apps are most well suited for situations where you’re handling both data and state.

The first set of data you may need to handle might be hard-coded into your React application, like we did for this demo from our Error Boundary tutorial:

See the Pen error boundary 0 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

What if you want to handle data from an API? That’s the purpose of this tutorial. Specifically, we’ll make use of the Fetch API and axios as examples for how to request and use data.

The Fetch API

The Fetch API provides an interface for fetching resources. We’ll use it to fetch data from a third-party API and see how to use it when fetching data from an API built in-house.

Using Fetch with a third-party API

See the Pen React Fetch API Pen 1 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

We will be fetching random users from JSONPlaceholder, a fake online REST API for testing. Let’s start by creating our component and declaring some default state.

class App extends React.Component {
  state = {
    isLoading: true,
    users: [],
    error: null
  }

  render() {
    return (
      <React.Fragment>
      </React.Fragment>
    );
  }
}

There is bound to be a delay when data is being requested by the network. It could be a few seconds or maybe a few milliseconds. Either way, during this delay, it’s good practice to let users know that something is happening while the request is processing.

To do that we’ll make use of isLoading to either display the loading message or the requested data. The data will be displayed when isLoading is false, else a loading message will be shown on the screen. So the render() method will look like this:

render() {
  const { isLoading, users, error } = this.state;
  return (
    <React.Fragment>
      <h1>Random User</h1>
      {/* Display a message if we encounter an error */}
      {error ? <p>{error.message}</p> : null}
      {/* Here's our data check */}
      {!isLoading ? (
        users.map(user => {
          const { username, name, email } = user;
          return (
            <div key={username}>
              <p>Name: {name}</p>
              <p>Email Address: {email}</p>
              <hr />
            </div>
          );
        })
      // If there is a delay in data, let's let the user know it's loading
      ) : (
        <h3>Loading...</h3>
      )}
    </React.Fragment>
  );
}

The code is basically doing this:

  1. De-structures isLoading, users and error from the application state so we don’t have to keep typing this.state.
  2. Prints a message if the application encounters an error establishing a connection
  3. Checks to see if data is loading
  4. If loading is not happening, then we must have the data, so we display it
  5. If loading is happening, then we must still be working on it and display “Loading…” while the app is working

For Steps 3-5 to work, we need to make the request to fetch data from an API. This is where the JSONplaceholder API will come in handy for our example.

fetchUsers() {
  // Where we're fetching data from
  fetch(`https://jsonplaceholder.typicode.com/users`)
    // We get the API response and receive data in JSON format...
    .then(response => response.json())
    // ...then we update the users state
    .then(data =>
      this.setState({
        users: data,
        isLoading: false,
      })
    )
    // Catch any errors we hit and update the app
    .catch(error => this.setState({ error, isLoading: false }));
}

We create a method called fetchUsers() and use it to do exactly what you might think: request user data from the API endpoint and fetch it for our app. Fetch is a promise-based API which returns a response object. So, we make use of the json() method to parse the response; the parsed data is then used to update the state of users in our application. We also need to change the state of isLoading to false so that our application knows that loading has completed and all is clear to render the data.

The fact that Fetch is promise-based means we can also catch errors using the .catch() method. Any error encountered is used as the value to update our error state. Handy!

The first time the application renders, the data won’t have been received — it can take seconds. We want to trigger the fetch as soon as the component has mounted, when the application state can be accessed for an update and the application re-rendered. React’s componentDidMount() is the best place for this, so we’ll call the fetchUsers() method in it.

componentDidMount() {
  this.fetchUsers();
}

Using Fetch With Self-Owned API

So far, we’ve looked at how to put someone else’s data to use in an application. But what if we’re working with our own data in our own API? That’s what we’re going to cover right now.

See the Pen React Fetch API Pen 2 by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

I built an API which is available on GitHub. The JSON response you get has been placed on AWS — that’s what we will use for this tutorial.

As we did before, let’s create our component and set up some default state.

class App extends React.Component {
  state = {
    isLoading: true,
    posts: [],
    error: null
  }

  render() {
    return (
      <React.Fragment>
      </React.Fragment>
    );
  }
}

Our method for looping through the data will be different from the one we used before but only because of the data’s structure, which is going to be different. You can see the difference between our data structure here and the one we obtained from JSONPlaceholder.

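For reference, here’s a rough sketch of the shape that JSON is assumed to take: a single posts key holding an array of post objects, each carrying the _id, title and content fields we render later.

// posts.json (assumed shape, abridged)
{
  "posts": [
    { "_id": "1", "title": "First post", "content": "Hello there!" },
    { "_id": "2", "title": "Second post", "content": "More words." }
  ]
}
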
Here is what the render() method will look like for our API:

render() {
  const { isLoading, posts, error } = this.state;
  return (
    <React.Fragment>
      <h1>React Fetch - Blog</h1>
      <hr />
      {!isLoading ? Object.keys(posts).map(key => <Post key={key} body={posts[key]} />) : <h3>Loading...</h3>}
    </React.Fragment>
  );
}

Let’s break down the logic

{
  !isLoading ? 
  Object.keys(posts).map(key => <Post key={key} body={posts[key]} />) 
  : <h3>Loading...</h3>
}

When isLoading is not true, we map through the keys of the posts object and pass the data under each key to the Post component as props. Otherwise, we display a “Loading…” message while the application is at work. Very similar to before.

The method to fetch posts will look like the one used in the first part.

fetchPosts() {
  // The API where we're fetching data from
  fetch(`https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/posts.json`)
    // We get a response and receive the data in JSON format...
    .then(response => response.json())
    // ...then we update the state of our application
    .then(
      data =>
        this.setState({
          posts: data,
          isLoading: false,
        })
    )
    // If we catch errors instead of a response, let's update the app
    .catch(error => this.setState({ error, isLoading: false }));
}

Now we can call the fetchPosts() method inside componentDidMount():

componentDidMount() {
  this.fetchPosts();
}

In the Post component, we map through the props we received and render the title and content for each post:

const Post = ({ body }) => {
  return (
    <div>
      {body.map(post => {
        const { _id, title, content } = post;
        return (
          <div key={_id}>
            <h2>{title}</h2>
            <p>{content}</p>
            <hr />
          </div>
        );
      })}
    </div>
  );
};

There we have it! Now we know how to use the Fetch API to request data from different sources and put it to use in an application. High fives. ✋

axios

OK, so we’ve spent a good amount of time looking at the Fetch API and now we’re going to turn our attention to axios.

Like the Fetch API, axios is a way we can make a request for data to use in our application. Where axios shines is how it allows you to send an asynchronous request to REST endpoints. This comes in handy when working with the REST API in a React project, say a headless WordPress CMS.

There’s ongoing debate about whether Fetch is better than axios and vice versa. We’re not going to dive into that here because, well, you can pick the right tool for the right job. If you’re curious about the points from each side, you can read here and here.

Using axios with a third-party API

See the Pen React Axios 1 Pen by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

Like we did with the Fetch API, let’s start by requesting data from an API. For this one, we’ll fetch random users from the Random User API.

First, we create the App component like we’ve done it each time before:

class App extends React.Component {
  state = {
    users: [],
    isLoading: true,
    errors: null
  };

  render() {
    return (
      <React.Fragment>
      </React.Fragment>
    );
  }
}

The idea is still the same: check to see if loading is in process and either render the data we get back or let the user know things are still loading.

To make the request to the API, we’ll need to create a function. We’ll call the function getUsers(). Inside it, we’ll make the request to the API using axios. Let’s see what that looks like before explaining further.

getUsers() {
  // We're using axios instead of Fetch
  axios
    // The API we're requesting data from
    .get("https://randomuser.me/api/?results=5")
    // Once we get a response, we'll map the API endpoints to our props
    .then(response =>
      response.data.results.map(user => ({
        name: `${user.name.first} ${user.name.last}`,
        username: `${user.login.username}`,
        email: `${user.email}`,
        image: `${user.picture.thumbnail}`
      }))
    )
    // Let's make sure to change the loading state to display the data
    .then(users => {
      this.setState({
        users,
        isLoading: false
      });
    })
    // We can still use the `.catch()` method since axios is promise-based
    .catch(error => this.setState({ error, isLoading: false }));
}

Quite different from the Fetch examples, right? The basic structure is actually pretty similar, but now we’re in the business of mapping the response data to just the properties we need.

The API URL is passed as a parameter to the GET request. The response we get from the API contains an object called data, and that contains other objects. The information we want is available in data.results, which is an array of objects containing the data of individual users.

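To make that mapping easier to follow, here’s a heavily abridged sketch of what response.data is assumed to look like (only the fields we actually read):

// Abridged sketch of response.data from the Random User API
{
  results: [
    {
      name: { first: "Jane", last: "Doe" },
      login: { username: "janedoe" },
      email: "jane.doe@example.com",
      picture: { thumbnail: "https://randomuser.me/api/portraits/thumb/women/1.jpg" }
    }
    // ...four more users like this one
  ]
}
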
Here we go again with calling our method inside of the componentDidMount() method:

componentDidMount() {
  this.getUsers();
}

Alternatively, you can do this instead and basically combine these first two steps:

componentDidMount() {
  axios
    .get("https://randomuser.me/api/?results=5")
    .then(response =>
      response.data.results.map(user => ({
        name: `${user.name.first} ${user.name.last}`,
        username: `${user.login.username}`,
        email: `${user.email}`,
        image: `${user.picture.thumbnail}`
      }))
    )
    .then(users => {
      this.setState({
        users,
        isLoading: false
      });
    })
    .catch(error => this.setState({ error, isLoading: false }));
}

If you are coding locally from your machine, you can temporarily edit the getUsers() function to look like this:

getUsers() {
  axios
    .get("https://randomuser.me/api/?results=5")
    .then(response => console.log(response))
    .catch(error => this.setState({ error, isLoading: false }));
}

Your console should log something similar to this:

We map through the results array to obtain the information we need for each user. The array of users is then used to set a new value for our users state. With that done, we can then change the value of isLoading.

By default, isLoading is set to true. When the state of users is updated, we want to change the value of isLoading to false since this is the cue our app is looking for to make the switch from “Loading…” to rendered data.

render() {
  const { isLoading, users } = this.state;
  return (
    <React.Fragment>
      <h2>Random User</h2>
      <div>
        {!isLoading ? (
          users.map(user => {
            const { username, name, email, image } = user;
            return (
              <div key={username}>
                <p>{name}</p>
                <div>
                  <img src={image} alt={name} />
                </div>
                <p>{email}</p>
                <hr />
              </div>
            );
          })
        ) : (
          <p>Loading...</p>
        )}
      </div>
    </React.Fragment>
  );
}

If you log the users state to the console, you will see that it is an array of objects:

The empty array shows the value before the data was obtained. The returned data contains only the name, username, email address and image of individual users because those are the properties we mapped out. There is a lot more data available from the API, of course, but we’d have to add those to our getUsers() method.
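
For example (a hypothetical tweak, assuming the API exposes a phone field alongside the others), pulling in each user’s phone number would just mean adding one more property to the mapped object:

response.data.results.map(user => ({
  name: `${user.name.first} ${user.name.last}`,
  username: `${user.login.username}`,
  email: `${user.email}`,
  image: `${user.picture.thumbnail}`,
  // Hypothetical extra field from the same response
  phone: `${user.phone}`
}))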

Using axios with your own API

See the Pen React Axios 2 Pen by Kingsley Silas Chijioke (@kinsomicrote) on CodePen.

You have seen how to use axios with a third-party API, but we can look at what it’s like to request data from our own API, just like we did with the Fetch API. In fact, let’s use the same JSON file we used for Fetch so we can see the difference between the two approaches.

Here is everything put together:

class App extends React.Component {
  // State will apply to the posts object which is set to loading by default
  state = {
    posts: [],
    isLoading: true,
    errors: null
  };
  // Now we're going to make a request for data using axios
  getPosts() {
    axios
      // This is where the data is hosted
      .get("https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/posts.json")
      // Once we get a response and store data, let's change the loading state
      .then(response => {
        this.setState({
          posts: response.data.posts,
          isLoading: false
        });
      })
      // If we catch any errors connecting, let's update accordingly
      .catch(error => this.setState({ error, isLoading: false }));
  }
  // Kick off the request for data once the component mounts
  componentDidMount() {
    this.getPosts();
  }
  // Putting that data to use
  render() {
    const { isLoading, posts } = this.state;
    return (
      <React.Fragment>
        <h2>Random Post</h2>
        <div>
          {!isLoading ? (
            posts.map(post => {
              const { _id, title, content } = post;
              return (
                <div key={_id}>
                  <h2>{title}</h2>
                  <p>{content}</p>
                  <hr />
                </div>
              );
            })
          ) : (
            <p>Loading...</p>
          )}
        </div>
      </React.Fragment>
    );
  }
}

The main difference between this method and using axios to fetch from a third-party API is how the data is formatted. We’re using the response JSON pretty much as-is rather than mapping it to new objects.

The posts data we get from the API is used to update the value of the component’s posts state. With this, we can map through the array of posts in render(). We then obtain the _id, title and content of each post using ES6 destructuring, which is then rendered to the user.

Like we did before, what is displayed depends on the value of isLoading. When we set a new state for posts using the data obtained from the API, we had to set a new state for isLoading, too. Then we can finally let the user know data is loading or render the data we’ve received.

async and await

Another thing the promise-based nature of axios allows us to do is take advantage of async and await. Using this, the getPosts() function will look like this:

async getPosts() {
  try {
    // Await the request inside try so a failed request lands in catch
    const response = await axios.get("https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/posts.json");
    this.setState({
      posts: response.data.posts,
      isLoading: false
    });
  } catch (error) {
    this.setState({ error, isLoading: false });
  }
}

Base instance

With axios, it’s possible to create a base instance where we drop in the URL for our API like so:

const api = axios.create({
  baseURL: "https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/posts.json"
});

…then make use of it like this:

async getPosts() {
  try {
    const response = await api.get();
    this.setState({
      posts: response.data.posts,
      isLoading: false
    });
  } catch (error) {
    this.setState({ error, isLoading: false });
  }
}

Simply a nice way of abstracting the API URL.
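
A common variation (hypothetical here, since the demo only ever fetches one file) is to point baseURL at the directory instead, and pass the file’s path per request inside the same async getPosts():

const api = axios.create({
  baseURL: "https://s3-us-west-2.amazonaws.com/s.cdpn.io/3/"
});

// Each call now only needs the path relative to the base
const response = await api.get("posts.json");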

Now, data all the things!

As you build React applications, you will run into lots of scenarios where you want to handle data from an API. Hopefully you now feel armed and ready to roll with data from a variety of sources, with options for how to request it.

Want to play with more data? Sarah recently wrote up the steps for creating your own serverless API from a list of public APIs.


Using data in React with the Fetch API and axios originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Using Fetch
https://css-tricks.com/using-fetch/
Tue, 02 May 2017 12:54:46 +0000

Whenever we send or retrieve information with JavaScript, we initiate a thing known as an Ajax call. Ajax is a technique to send and retrieve information behind the scenes without needing to refresh the page. It allows browsers to send and retrieve information, then do things with what it gets back, like add or change HTML on the page.

Let’s take a look at the history of that and then bring ourselves up-to-date.

Just looking for the basic fetch snippet? Here you go.

fetch(URL)
  .then(response => response.json())
  .then(data => {
    console.log(data)
  });

Another note here, we’re going to be using ES6 syntax for all the demos in this article.

A few years ago, the easiest way to initiate an Ajax call was through the use of jQuery’s ajax method:

$.ajax('some-url', {
  success: (data) => { /* do something with the data */ },
  error: (err) => { /* do something when an error happens */}
});

We could do Ajax without jQuery, but we had to write an XMLHttpRequest, which is pretty complicated.
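
For comparison, here’s roughly what the bare XMLHttpRequest version of a simple GET looks like (minus any error-handling niceties):

var request = new XMLHttpRequest();
request.open('GET', 'some-url');
request.onload = function () {
  if (request.status === 200) {
    // Do something with request.responseText
  }
};
request.onerror = function () {
  // Do something when an error happens
};
request.send();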

Thankfully, browsers nowadays have improved so much that they support the Fetch API, which is a modern way to do Ajax without helper libraries like jQuery or Axios. In this article, I’ll show you how to use Fetch to handle both successes and errors.

Support for Fetch

Let’s get support out of the way first.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop

Chrome: 42
Firefox: 39
IE: No
Edge: 14
Safari: 10.1

Mobile / Tablet

Android Chrome: 108
Android Firefox: 107
Android: 108
iOS Safari: 10.3

Support for Fetch is pretty good! All major browsers (with the exception of Opera Mini and old IE) support it natively, which means you can safely use it in your projects. If you need support anywhere it isn’t natively supported, you can always depend on this handy polyfill.

Getting data with Fetch

Getting data with Fetch is easy. You just need to provide Fetch with the resource you’re trying to fetch (so meta!).

Let’s say we’re trying to get a list of Chris’ repositories on GitHub. According to GitHub’s API, we need to make a GET request to api.github.com/users/chriscoyier/repos.

This would be the fetch request:

fetch('https://api.github.com/users/chriscoyier/repos');

So simple! What’s next?

Fetch returns a Promise, which is a way to handle asynchronous operations without the need for a callback.

To do something after the resource is fetched, you write it in a .then call:

fetch('https://api.github.com/users/chriscoyier/repos')
  .then(response => {/* do something */})

If this is your first encounter with Fetch, you’ll likely be surprised by the response Fetch returns. If you console.log the response, you’ll get the following information:

{
  body: ReadableStream
  bodyUsed: false
  headers: Headers
  ok : true
  redirected : false
  status : 200
  statusText : "OK"
  type : "cors"
  url : "http://some-website.com/some-url"
  __proto__ : Response
}

Here, you can see that Fetch returns a response that tells you the status of the request. We can see that the request is successful (ok is true and status is 200), but a list of Chris’ repos isn’t present anywhere!

Turns out, what we requested from Github is hidden in body as a readable stream. We need to call an appropriate method to convert this readable stream into data we can consume.

Since we’re working with GitHub, we know the response is JSON. We can call response.json to convert the data.

There are other methods to deal with different types of response. If you’re requesting an XML file, then you should call response.text. If you’re requesting an image, you call response.blob.
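
For instance, here’s a quick sketch of fetching an image (the URL is a placeholder) and dropping it into the page via response.blob:

fetch('/path/to/image.png')
  .then(response => response.blob())
  .then(blob => {
    // createObjectURL gives us a temporary URL pointing at the blob
    const img = document.createElement('img')
    img.src = URL.createObjectURL(blob)
    document.body.appendChild(img)
  })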

All these conversion methods (response.json et al.) return another Promise, so we can get the data we wanted with yet another .then call.

fetch('https://api.github.com/users/chriscoyier/repos')
  .then(response => response.json())
  .then(data => {
    // Here's a list of repos!
    console.log(data)
  });

Phew! That’s all you need to do to get data with Fetch! Short and simple, isn’t it? :)

Next, let’s take a look at sending some data with Fetch.

Sending data with Fetch

Sending data with Fetch is pretty simple as well. You just need to configure your fetch request with three options.

fetch('some-url', options);

The first option you need to set is your request method to post, put or delete. Fetch automatically sets the method to get if you leave it out, which is why getting a resource takes fewer steps.

The second option is to set your headers. Since we’re primarily sending JSON data in this day and age, we need to set Content-Type to be application/json.

The third option is to set a body that contains JSON content. Since JSON content is required, you often need to call JSON.stringify when you set the body.

In practice, a post request with these three options looks like:

let content = {some: 'content'};

// The actual fetch request
fetch('some-url', {
  method: 'post',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(content)
})
// .then()...

For the sharp-eyed, you’ll notice there’s some boilerplate code for every post, put or delete request. Ideally, we could reuse our headers and call JSON.stringify on the content before sending, since we already know we’re sending JSON data.

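A minimal sketch of that idea (the sendJSON helper name is made up for illustration):

function sendJSON (url, method, content) {
  return fetch(url, {
    method: method,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(content)
  })
}

// Usage: the same post request as above, minus the boilerplate
sendJSON('some-url', 'post', { some: 'content' })
// .then()...
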
But even with the boilerplate code, Fetch is still pretty nice for sending any request.

Handling errors with Fetch, however, isn’t as straightforward as handling success messages. You’ll see why in a moment.

Handling errors with Fetch

Although we always hope for Ajax requests to be successful, they can fail. There are many reasons why requests may fail, including but not limited to the following:

  1. You tried to fetch a non-existent resource.
  2. You’re unauthorized to fetch the resource.
  3. You entered some arguments wrongly
  4. The server throws an error.
  5. The server timed out.
  6. The server crashed.
  7. The API changed.

Things aren’t going to be pretty if your request fails. Just imagine a scenario where you try to buy something online. An error occurs, but it remains unhandled by the people who coded the website. As a result, after clicking buy, nothing moves. The page just hangs there… You have no idea if anything happened. Did your card go through? 😱

Now, let’s try to fetch a non-existent resource and learn how to handle errors with Fetch. For this example, let’s say we misspelled chriscoyier as chrissycoyier:

// Fetching chrissycoyier's repos instead of chriscoyier's repos
fetch('https://api.github.com/users/chrissycoyier/repos')

We already know we should get an error since there’s no chrissycoyier on Github. To handle errors in promises, we use a catch call.

Given what we know now, you’ll probably come up with this code:

fetch('https://api.github.com/users/chrissycoyier/repos')
  .then(response => response.json())
  .then(data => console.log('data is', data))
  .catch(error => console.log('error is', error));

Fire your fetch request. This is what you’ll get:

Fetch failed, but the code that gets executed is the second `.then` instead of `.catch`

Why did our second .then call execute? Aren’t promises supposed to handle errors with .catch? Horrible! 😱😱😱

If you console.log the response now, you’ll see slightly different values:

{
  body: ReadableStream
  bodyUsed: true
  headers: Headers
  ok: false // Response is not ok
  redirected: false
  status: 404 // HTTP status is 404.
  statusText: "Not Found" // Request not found
  type: "cors"
  url: "https://api.github.com/users/chrissycoyier/repos"
}

Most of the response remains the same, except for ok, status and statusText. As expected, we didn’t find chrissycoyier on GitHub.

This response tells us Fetch doesn’t care whether your AJAX request succeeded. It only cares about sending a request and receiving a response from the server, which means we need to throw an error if the request failed.

Hence, the initial then call needs to be rewritten such that it only calls response.json if the request succeeded. The easiest way to do so is to check if the response is ok.

fetch('some-url')
  .then(response => {
    if (response.ok) {
      return response.json()
    } else {
      // Find some way to get to execute .catch()
    }
  });

Once we know the request is unsuccessful, we can either throw an Error or reject a Promise to activate the catch call.

// throwing an Error
else {
  throw new Error('something went wrong!')
}

// rejecting a Promise
else {
  return Promise.reject('something went wrong!')
}

Choose either one, because they both activate the .catch call.

Here, I choose to use Promise.reject because it’s easier to implement. Errors are cool too, but they’re harder to implement, and the only benefit of an Error is a stack trace, which would be non-existent in a Fetch request anyway.

So, the code looks like this so far:

fetch('https://api.github.com/users/chrissycoyier/repos')
  .then(response => {
    if (response.ok) {
      return response.json()
    } else {
      return Promise.reject('something went wrong!')
    }
  })
  .then(data => console.log('data is', data))
  .catch(error => console.log('error is', error));

Failed request, but error gets passed into catch correctly

This is great. We’re getting somewhere since we now have a way to handle errors.

But rejecting the promise (or throwing an Error) with a generic message isn’t good enough. We won’t be able to know what went wrong. I’m pretty sure you don’t want to be on the receiving end of an error like this…

Yeah… I get it that something went wrong… but what exactly? 🙁

What went wrong? Did the server time out? Was my connection cut? There’s no way for me to know! What we need is a way to tell what’s wrong with the request so we can handle it appropriately.

Let’s take a look at the response again and see what we can do:

{
  body: ReadableStream
  bodyUsed: true
  headers: Headers
  ok: false // Response is not ok
  redirected: false
  status: 404 // HTTP status is 404.
  statusText: "Not Found" // Request not found
  type: "cors"
  url: "https://api.github.com/users/chrissycoyier/repos"
}

Okay, great. In this case, we know the resource is non-existent. If we pass along the 404 status or the Not Found status text, we’ll know what to do with it.

To get status and statusText into the .catch call, we can reject a JavaScript object:

fetch('some-url')
  .then(response => {
    if (response.ok) {
      return response.json()
    } else {
      return Promise.reject({
        status: response.status,
        statusText: response.statusText
      })
    }
  })
  .catch(error => {
    if (error.status === 404) {
      // do something about 404
    }
  })

Now we’re getting somewhere again! Yay! 😄.

Let’s make this better! 😏.

The above error handling method is good enough for certain HTTP statuses which don’t require further explanation, like:

  • 401: Unauthorized
  • 404: Not found
  • 408: Connection timeout

But it’s not good enough for this particular badass:

  • 400: Bad request.

What constitutes a bad request? It can be a whole slew of things! For example, Stripe returns 400 if the request is missing a required parameter.

Stripe’s docs explain that it returns a 400 error if the request is missing a required field

It’s not enough to just tell our .catch statement there’s a bad request. We need more information to tell what’s missing. Did your user forget their first name? Email? Or maybe their credit card information? We won’t know!

Ideally, in such cases, your server would return an object, telling you what happened together with the failed request. If you use Node and Express, such a response can look like this:

res.status(400).send({
  err: 'no first name'
})

Here, we can’t reject a Promise in the initial .then call because the error object from the server can only be read after response.json resolves.

The solution is to return a promise that contains two then calls. This way, we can first read what’s in response.json, then decide what to do with it.

Here’s what the code looks like:

fetch('some-error')
  .then(handleResponse)

function handleResponse(response) {
  return response.json()
    .then(json => {
      if (response.ok) {
        return json
      } else {
        return Promise.reject(json)
      }
    })
}

Let’s break the code down. First, we call response.json to read the JSON data the server sent. Since response.json returns a Promise, we can immediately call .then to read what’s in it.

We want to call this second .then within the first .then because we still need to access response.ok to determine if the response was successful.

If you want to send the status and statusText along with the json into .catch, you can combine them into one object with Object.assign().

let error = Object.assign({}, json, {
  status: response.status,
  statusText: response.statusText
})
return Promise.reject(error)

With this new handleResponse function, you get to write your code this way, and your data gets passed into .then and .catch automatically:

fetch('some-url')
  .then(handleResponse)
  .then(data => console.log(data))
  .catch(error => console.log(error))

Unfortunately, we’re not done with handling the response just yet :(

Handling other response types

So far, we’ve only touched on handling JSON responses with Fetch. This already solves 90% of use cases since APIs return JSON nowadays.

What about the other 10%?

Let’s say you received an XML response with the above code. Immediately, you’ll get an error in your catch statement that says:

Parsing an invalid JSON produces a Syntax error

This is because XML isn’t JSON. We simply can’t return response.json. Instead, we need to return response.text. To do so, we need to check for the content type by accessing the response headers:

.then(response => {
  let contentType = response.headers.get('content-type')

  if (contentType.includes('application/json')) {
    return response.json()
    // ...
  }

  else if (contentType.includes('text/html')) {
    return response.text()
    // ...
  }

  else {
    // Handle other responses accordingly...
  }
});

Wondering why you’ll ever get an XML response?

Well, I encountered it when I tried using ExpressJWT to handle authentication on my server. At that time, I didn’t know you can send JSON as a response, so I left it as its default, XML. This is just one of the many unexpected possibilities you’ll encounter. Want another? Try fetching some-url :)

Anyway, here’s the entire code we’ve covered so far:

fetch('some-url')
  .then(handleResponse)
  .then(data => console.log(data))
  .catch(error => console.log(error))

function handleResponse (response) {
  let contentType = response.headers.get('content-type')
  if (contentType.includes('application/json')) {
    return handleJSONResponse(response)
  } else if (contentType.includes('text/html')) {
    return handleTextResponse(response)
  } else {
    // Other response types as necessary. I haven't found a need for them yet though.
    throw new Error(`Sorry, content-type ${contentType} not supported`)
  }
}

function handleJSONResponse (response) {
  return response.json()
    .then(json => {
      if (response.ok) {
        return json
      } else {
        return Promise.reject(Object.assign({}, json, {
          status: response.status,
          statusText: response.statusText
        }))
      }
    })
}
function handleTextResponse (response) {
  return response.text()
    .then(text => {
      if (response.ok) {
        return text
      } else {
        return Promise.reject({
          status: response.status,
          statusText: response.statusText,
          err: text
        })
      }
    })
}

It’s a lot of code to write (or copy and paste) into every project where you use Fetch. Since I use Fetch heavily in my projects, I created a library around Fetch that does exactly what I described in this article (plus a little more).

Introducing zlFetch

zlFetch is a library that abstracts away the handleResponse function so you can skip straight to handling both your data and errors, without worrying about the response.

A typical zlFetch call looks like this:

zlFetch('some-url', options)
  .then(data => console.log(data))
  .catch(error => console.log(error));

To use zlFetch, you first have to install it.

npm install zl-fetch --save

Then, you’ll import it into your code. (Take note of default if you aren’t importing with ES6 imports). If you need a polyfill, make sure you import it before adding zlFetch.

// Polyfills (if needed)
require('isomorphic-fetch') // or whatwg-fetch or node-fetch if you prefer

// ES6 Imports
import zlFetch from 'zl-fetch';

// CommonJS Imports
const zlFetch = require('zl-fetch');

zlFetch does a bit more than removing the need to handle a Fetch response. It also helps you send JSON data without needing to write headers or converting your body to JSON.

The two functions below do the same thing. zlFetch adds a Content-Type header and converts your content into JSON under the hood.

let content = {some: 'content'}

// Post request with fetch
fetch('some-url', {
  method: 'post',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(content)
});

// Post request with zlFetch
zlFetch('some-url', {
  method: 'post',
  body: content
});

zlFetch also makes authentication with JSON Web Tokens easy.

The standard practice for authentication is to add an Authorization key in the headers. The contents of this Authorization key are set to Bearer your-token-here. zlFetch helps to create this field if you add a token option.

So, the following two pieces of code are equivalent.

let token = 'someToken'
zlFetch('some-url', {
  headers: {
    Authorization: `Bearer ${token}`
  }
});

// Authentication with JSON Web Tokens with zlFetch
zlFetch('some-url', {token});

That’s all zlFetch does. It’s just a convenient wrapper function that helps you write less code whenever you use Fetch. Do check out zlFetch if you find it interesting. Otherwise, feel free to roll your own!

Here’s a Pen for playing around with zlFetch:

Wrapping up

Fetch is a piece of amazing technology that makes sending and receiving data a cinch. We no longer need to write XHR requests manually or depend on larger libraries like jQuery.

Although Fetch is awesome, error handling with Fetch isn’t straightforward. Before you can handle errors properly, you need quite a bit of boilerplate code to pass information along to your .catch call.

With zlFetch (and the info presented in this article), there’s no reason why we can’t handle errors properly anymore. Go out there and put some fun into your error messages too :)


By the way, if you liked this post, you may also like other front-end-related articles I write on my blog. Feel free to pop by and ask any questions you have. I’ll get back to you as soon as I can.


Using Fetch originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Need to do Dependency-Free Ajax?
https://css-tricks.com/need-dependency-free-ajax/
Tue, 14 Mar 2017 15:13:05 +0000

One of the big reasons to use jQuery, for a long time, was how easy it made Ajax. It has a super clean, flexible, and cross-browser compatible API for all the Ajax methods. jQuery is still mega popular, but it’s becoming more and more common to ditch it, especially as older browser share drops and new browsers natively offer a lot of the powerful stuff we used to lean on jQuery for. Even just querySelectorAll is often cited as a reason to lose the jQuery dependency.

How’s Ajax doing?

Let’s say we needed to do a GET request to get some HTML from a URL endpoint. We aren’t going to do any error handling to keep this brief.

jQuery would have been like this:

$.ajax({
  type: "GET",
  url: "/url/endpoint/",
}).done(function(data) {
  // We got the `data`!
});

If we wanted to ditch the jQuery and go with browser-native Ajax, we could do it like this:

var httpRequest = new XMLHttpRequest();
httpRequest.onreadystatechange = ajaxDone;
httpRequest.open('GET', '/url/endpoint/');
httpRequest.send();

function ajaxDone() {
  if (httpRequest.readyState === XMLHttpRequest.DONE) {
    if (httpRequest.status === 200) {
      // We got the `httpRequest.responseText`! 
    }
  }
}

The browser support for this is kinda complicated. The basics work as far back as IE 7, but IE 10 is when it really got solid. If you wanna get more robust, but still skip any dependencies, you can even use a window.ActiveXObject fallback and get down to IE 6.

Long story short, it’s certainly possible to do Ajax without any dependencies and get pretty deep browser support. Remember jQuery is just JavaScript, so you can always just do whatever it does under the hood.

But there is another thing jQuery has been doing for quite a while with its Ajax: it’s Promise-based. One of the many cool things about Promises, especially when combined with an asynchronous event like Ajax, is that they allow you to run multiple requests in parallel, which is aces for performance.

The native Ajax stuff I just posted isn’t Promise-based.

If you want a strong and convenient Promise-based Ajax API, with fairly decent cross-browser support (down to IE 8), you could consider Axios. Yes, it’s a dependency just like jQuery, it’s just hyper-focused on Ajax, 11.8 KB before GZip, and doesn’t have any dependencies of its own.

With Axios, the code would look like:

axios({
  method: 'GET',
  url: '/url/endpoint/'
}).then(function(response) {
  // We got the `response.data`!
});

Notice the then statement, which means we’re back in the Promise land. Tiny side note: apparently the requests don’t look identical to jQuery’s on the server side.
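
And since those are real Promises, the parallel-requests perk mentioned earlier comes along for free. Here’s a quick sketch with placeholder endpoints:

Promise.all([
  axios.get('/url/endpoint/one'),
  axios.get('/url/endpoint/two')
]).then(function(responses) {
  // Both requests are done; responses arrive in the order they were requested
  console.log(responses[0].data, responses[1].data);
});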

Browsers aren’t done with us yet though! There is a fairly new Fetch API that does Promise-based Ajax with a nice and clean syntax:

fetch('/url/endpoint/')
  .then(function(response) {
    return response.text();
  })
  .then(function(text) {
    // We got the `text`!
  });

The browser support for this is getting pretty good too! It’s shipping in all stable desktop browsers, including Edge. The danger zone is no IE support at all and only iOS 10.1+.


Need to do Dependency-Free Ajax? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Making a Simple Site Work Offline with ServiceWorker
https://css-tricks.com/serviceworker-for-offline/
Tue, 10 Nov 2015 14:51:13 +0000

I’ve been playing around with ServiceWorker a lot recently, so when Chris asked me to write an article about it I couldn’t have been more thrilled. ServiceWorker is the most impactful modern web technology since Ajax. It’s an API that lives inside the browser and sits between your web pages and your application servers. Once installed and activated, a ServiceWorker can programmatically determine how to respond to requests for resources from your origin, even when the browser is offline. ServiceWorker can be used to power the so-called “Offline First” web.

ServiceWorker is a progressive technology, and in this article, I’ll show you how to take a website and make it available offline for humans who are using a modern browser while leaving humans with unsupported browsers unaffected.

Here’s a silent, 26 second video of a supporting browser (Chrome) going offline and the final demo site still working:

If you would like to just look at the code, there is a Simple Offline Site repo we’ve built for this. You can see the entire thing as a CodePen Project, and it’s even a full on demo website.

Browser Support

Today, ServiceWorker has browser support in Google Chrome, Opera, and in Firefox behind a configuration flag. Microsoft is likely to work on it soon. There’s no official word from Apple’s Safari yet.

Jake Archibald has a page tracking the support of all the ServiceWorker-related technologies.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop

Chrome: 45
Firefox: 44
IE: No
Edge: 17
Safari: 11.1

Mobile / Tablet

Android Chrome: 108
Android Firefox: 107
Android: 108
iOS Safari: 11.3-11.4

Given the fact that you can implement this stuff in a progressive enhancement style (doesn’t affect unsupported browsers), it’s a great opportunity to get ahead of the pack. The ones that are supported are going to greatly appreciate it.

Before getting started, I should point out a couple of things for you to take into account.

Secure Connections Only

You should know that there are a few hard requirements when it comes to ServiceWorker. First and foremost, your site needs to be served over a secure connection. If you’re still serving your site over HTTP, it might be a good excuse to implement HTTPS.

HTTPS is required for ServiceWorker

You could use a CDN proxy like CloudFlare to serve traffic securely. Remember to find and fix mixed content warnings as some browsers may warn your customers about your site being unsafe, otherwise.

While the spec for HTTP/2 doesn’t inherently enforce encrypted connections, browsers intend to implement HTTP/2 and similar technologies only over HTTPS. The ServiceWorker specification, on the other hand, recommends browser implementation over HTTPS. Browsers have also hinted at marking sites served over unencrypted connections as insecure. Search engines penalize unencrypted results.

“HTTPS only” is the browsers’ way of saying “this is important, you should do this”.

A Promise Based API

The future of web browser API implementations is Promise-heavy. The fetch API, for example, sprinkles sweet Promise-based sugar on top of XMLHttpRequest. ServiceWorker makes occasional use of fetch, but there’s also worker registration, caching, and message passing, all of which are Promise-based.

Whether or not you are a fan of promises, they are here to stay, so you better get used to them.

Registering Your First ServiceWorker

I worked together with Chris on the simplest possible practical demonstration of how to use ServiceWorker. He implemented a simple website (static HTML, CSS, JavaScript, and images) and asked me to add offline support. I felt like that would be a great opportunity to display how easy and unobtrusive it is to add offline capabilities to an existing website.

If you’d like to skip to the end, take a look at this commit to the demo site on GitHub.

The first step is to register the ServiceWorker. Instead of blindly attempting the registration, we feature-detect that ServiceWorker is available.

if ('serviceWorker' in navigator) {

}

The following piece of code demonstrates how we would install a ServiceWorker. The JavaScript resource passed to .register will be executed in the context of a ServiceWorker. Note how registration returns a Promise so that you can track whether or not the ServiceWorker registration was successful. I preceded logging statements with CLIENT: to make it visually easier for me to figure out whether a logging statement was coming from a web page or the ServiceWorker script.

// ServiceWorker is a progressive technology. Ignore unsupported browsers
if ('serviceWorker' in navigator) {
  console.log('CLIENT: service worker registration in progress.');
  navigator.serviceWorker.register('/service-worker.js').then(function() {
    console.log('CLIENT: service worker registration complete.');
  }, function() {
    console.log('CLIENT: service worker registration failure.');
  });
} else {
  console.log('CLIENT: service worker is not supported.');
}

The endpoint to the service-worker.js file is quite important. If the script were served from, say, /js/service-worker.js then the ServiceWorker would only be able to intercept requests in the /js/ context, but it’d be blind to resources like /other. This is typically an issue because you usually scope your JavaScript files in a /js/, /public/, /assets/, or similar “directory”, whereas you’ll want to serve the ServiceWorker script from the domain root in most cases.
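
If you ever need to be explicit about it, register also accepts a scope option. Note that a worker can only claim a scope at or below its own location, unless the server sends a Service-Worker-Allowed header:

navigator.serviceWorker.register('/service-worker.js', {
  // Explicitly scope the worker to the whole origin
  scope: '/'
});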

That was, in fact, the only necessary change to your web application code, provided that you had already implemented HTTPS. At this point, supporting browsers will issue a request for /service-worker.js and attempt to install the worker.

How should you structure the service-worker.js file, then?

Putting Together A ServiceWorker

ServiceWorker is event-driven and your code should aim to be stateless. That’s because when a ServiceWorker isn’t being used it’s shut down, losing all state. You have no control over that, so it’s best to avoid any long-term dependence on the in-memory state.

Below, I listed the most notable events you’ll have to handle in a ServiceWorker.

  • The install event fires when a ServiceWorker is first fetched. This is your chance to prime the ServiceWorker cache with the fundamental resources that should be available even while users are offline.
  • The fetch event fires whenever a request originates from your ServiceWorker scope, and you’ll get a chance to intercept the request and respond immediately, without going to the network.
  • The activate event fires after a successful installation. You can use it to phase out older versions of the worker. We’ll look at a basic example where we deleted stale cache entries.

Let’s go over each event and look at examples of how they could be handled.

Installing Your ServiceWorker

A version number is useful when updating the worker logic, allowing you to remove outdated cache entries during the activation step, as we’ll see a bit later. We’ll use the following version number as a prefix when creating cache stores.

var version = 'v1::';

You can use addEventListener to register an event handler for the install event. Using event.waitUntil blocks the installation process on the provided promise. If the promise is rejected because, for instance, one of the resources failed to be downloaded, the service worker won’t be installed. Here, you can leverage the promise returned from opening a cache with caches.open(name) and then mapping that into cache.addAll(resources), which downloads and stores responses for the provided resources.

self.addEventListener("install", function(event) {
  console.log('WORKER: install event in progress.');
  event.waitUntil(
    /* The caches built-in is a promise-based API that helps you cache responses,
       as well as finding and deleting them.
    */
    caches
      /* You can open a cache by name, and this method returns a promise. We use
         a versioned cache name here so that we can remove old cache entries in
         one fell swoop later, when phasing out an older service worker.
      */
      .open(version + 'fundamentals')
      .then(function(cache) {
        /* After the cache is opened, we can fill it with the offline fundamentals.
           The method below will add all resources we've indicated to the cache,
           after making HTTP requests for each of them.
        */
        return cache.addAll([
          '/',
          '/css/global.css',
          '/js/global.js'
        ]);
      })
      .then(function() {
        console.log('WORKER: install completed');
      })
  );
});

Once the install step succeeds, the activate event fires. This helps us phase out an older ServiceWorker, and we’ll look at it later. For now, let’s focus on the fetch event, which is a bit more interesting.

Intercepting Fetch Requests

The fetch event fires whenever a page controlled by this service worker requests a resource. This isn’t limited to fetch or even XMLHttpRequest. Instead, it covers even the request for the HTML page on first load, as well as JS and CSS resources, fonts, any images, etc. Note also that requests made against other origins will be caught by the fetch handler of the ServiceWorker. For instance, requests made against i.imgur.com – the CDN for a popular image hosting site – would also be caught by our service worker, as long as the request originated on one of the clients (e.g. browser tabs) controlled by the worker.

Just like install, we can block the fetch event by passing a promise to event.respondWith(p), and when the promise fulfills the worker will respond with that instead of the default action of going to the network. We can use caches.match to look for cached responses, and return those responses instead of going to the network.

As described in the comments, here we’re using an “eventually fresh” caching pattern where we return whatever is stored on the cache but always try to fetch a resource again from the network regardless, to keep the cache updated. If the response we served to the user is stale, they’ll get a fresh response the next time they request the resource. If the network request fails, it’ll try to recover by attempting to serve a hardcoded Response.

self.addEventListener("fetch", function(event) {
  console.log('WORKER: fetch event in progress.');

  /* We should only cache GET requests, and deal with other methods on the
     client side, by handling failed POST, PUT, PATCH, etc. requests.
  */
  if (event.request.method !== 'GET') {
    /* If we don't block the event as shown below, then the request will go to
       the network as usual.
    */
    console.log('WORKER: fetch event ignored.', event.request.method, event.request.url);
    return;
  }
  /* Similar to event.waitUntil in that it blocks the fetch event on a promise.
     Fulfillment result will be used as the response, and rejection will end in a
     HTTP response indicating failure.
  */
  event.respondWith(
    caches
      /* This method returns a promise that resolves to a cache entry matching
         the request. Once the promise is settled, we can then provide a response
         to the fetch request.
      */
      .match(event.request)
      .then(function(cached) {
        /* Even if the response is in our cache, we go to the network as well.
           This pattern is known for producing "eventually fresh" responses,
           where we return cached responses immediately, and meanwhile pull
           a network response and store that in the cache.
           Read more:
           https://ponyfoo.com/articles/progressive-networking-serviceworker
        */
        var networked = fetch(event.request)
          // We handle the network request with success and failure scenarios.
          .then(fetchedFromNetwork, unableToResolve)
          // We should catch errors on the fetchedFromNetwork handler as well.
          .catch(unableToResolve);

        /* We return the cached response immediately if there is one, and fall
           back to waiting on the network as usual.
        */
        console.log('WORKER: fetch event', cached ? '(cached)' : '(network)', event.request.url);
        return cached || networked;

        function fetchedFromNetwork(response) {
          /* We copy the response before replying to the network request.
             This is the response that will be stored on the ServiceWorker cache.
          */
          var cacheCopy = response.clone();

          console.log('WORKER: fetch response from network.', event.request.url);

          caches
            // We open a cache to store the response for this request.
            .open(version + 'pages')
            .then(function add(cache) {
              /* We store the response for this request. It'll later become
                 available to caches.match(event.request) calls, when looking
                 for cached responses.
              */
              cache.put(event.request, cacheCopy);
            })
            .then(function() {
              console.log('WORKER: fetch response stored in cache.', event.request.url);
            });

          // Return the response so that the promise is settled in fulfillment.
          return response;
        }

        /* When this method is called, it means we were unable to produce a response
           from either the cache or the network. This is our opportunity to produce
           a meaningful response even when all else fails. It's the last chance, so
           you probably want to display a "Service Unavailable" view or a generic
           error response.
        */
        function unableToResolve () {
          /* There's a couple of things we can do here.
             - Test the Accept header and then return one of the `offlineFundamentals`
               e.g: `return caches.match('/some/cached/image.png')`
             - You should also consider the origin. It's easier to decide what
               "unavailable" means for requests against your origins than for requests
               against a third party, such as an ad provider
             - Generate a Response programmatically, as shown below, and return that
          */

          console.log('WORKER: fetch request failed in both cache and network.');

          /* Here we're creating a response programmatically. The first parameter is the
             response body, and the second one defines the options for the response.
          */
          return new Response('<h1>Service Unavailable</h1>', {
            status: 503,
            statusText: 'Service Unavailable',
            headers: new Headers({
              'Content-Type': 'text/html'
            })
          });
        }
      })
  );
});

There are several more strategies, some of which I discuss in an article about ServiceWorker strategies on my blog.
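
For contrast, here’s a minimal sketch of the simplest of those alternatives, a cache-first strategy that only goes to the network when the cache comes up empty:

self.addEventListener("fetch", function(event) {
  event.respondWith(
    caches
      .match(event.request)
      .then(function(cached) {
        // Serve the cached response if we have one; otherwise hit the network
        return cached || fetch(event.request);
      })
  );
});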

As promised, let’s look at the code you can use to phase out older versions of your ServiceWorker script.

Phasing Out Older ServiceWorker Versions

The activate event fires after a service worker has been successfully installed. It is most useful when phasing out an older version of a service worker, as at this point you know that the new worker was installed correctly. In this example, we delete old caches that don’t match the version for the worker we just finished installing.

self.addEventListener("activate", function(event) {
  /* Just like with the install event, event.waitUntil blocks activate on a promise.
     Activation will fail unless the promise is fulfilled.
  */
  console.log('WORKER: activate event in progress.');

  event.waitUntil(
    caches
      /* This method returns a promise which will resolve to an array of available
         cache keys.
      */
      .keys()
      .then(function (keys) {
        // We return a promise that settles when all outdated caches are deleted.
        return Promise.all(
          keys
            .filter(function (key) {
              // Filter by keys that don't start with the latest version prefix.
              return !key.startsWith(version);
            })
            .map(function (key) {
              /* Return a promise that's fulfilled
                 when each outdated cache is deleted.
              */
              return caches.delete(key);
            })
        );
      })
      .then(function() {
        console.log('WORKER: activate completed.');
      })
  );
});

Reminder: there is a Simple Offline Site repo we’ve built for this. You can see the entire thing as a CodePen Project, and it’s even a full on demo website.


Making a Simple Site Work Offline with ServiceWorker originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
