Guided: Offline Applications with Service Workers

In this lab, you'll be introduced to two Web APIs (Service Workers and Cache) and gain practical experience implementing a network proxy to allow a messaging web app to work offline.


Path Info

Level: Intermediate
Duration: 41m
Last updated: Aug 18, 2025


Table of Contents

  1. Challenge

    Introduction

    This lab will guide you through the progressive enhancement of a web app using service worker functionality so that it can work offline. The lab content assumes you're familiar with JavaScript, HTML, CSS, events, promises, async/await, the DOM, and HTTP(S) requests. It also assumes you know how to open your browser's DevTools, and that you're using Google Chrome during the lab. If you are using a different browser, many aspects will be similar, especially in another Chromium-based browser (e.g., Edge or Brave).

    Some notes:

    • You'll do all your coding in /sw.js and/or /index.html. Sometimes you'll leave a file's contents as they were for the previous task, and sometimes you'll build upon it. Each task will give specific instructions about this.
    • Per best practices, it's recommended you keep strict mode enabled by leaving 'use strict' as the first line of any JavaScript file or <script> area.
    • If you get stuck on a task, you can consult the solutions/ folder.
  2. Challenge

    Service Worker Basics

    The Service Worker API enables modern apps to intercept network requests via a special type of web worker (a service worker), and thus to provide functionality even when users are offline. Service workers have their own lifecycle, going through several registration states:

    • installing
    • installed
    • activating
    • activated
    • redundant

    As soon as multiple service worker versions are involved, freshly installed ones normally wait for page navigation events before they start activating. As they activate, they take over from all previous service workers for the same page, which are now redundant (a state which is also applied if installation fails).

    A service worker must have its own JavaScript file — throughout this lab, /sw.js. Service worker files do not default to strict mode.

    As web workers, service workers run in a separate thread and have no access to certain features (like the DOM). Instead, they have access to a special top-level self variable that indirectly inherits from EventTarget (allowing addEventListener), and otherwise exposes parts of the Web Worker API and Service Worker API.

    But service workers do have access to console, for example:

    console.log('Outside of events')
    

    The basic skeleton of a service worker is to make use of the install and activate events:

    self.addEventListener('install', function installEventHandler(event) {
      console.log('Got "install" event', event)
    })
    
    self.addEventListener('activate', function activateEventHandler(event) {
      console.log('Got "activate" event', event)
    })
    

    This service worker isn't practical in production as-is, but it will let you observe its lifecycle. All three of the log statements will normally only be shown on first load. With the service worker file in place, you can now register it from the page. In index.html, you'll need to write and call an async function to register your service worker (and for this lab's checks, it must be called registerServiceWorker):

    registerServiceWorker()
    
    async function registerServiceWorker() {
      // ...
    }
    

    Within this function, start with a feature availability check (if ('serviceWorker' in navigator)). If this fails, do nothing — your app shouldn't depend on service workers for its core functionality, after all.
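
    A minimal sketch of that guard (the registration code shown next goes inside the if):

    async function registerServiceWorker() {
      if ('serviceWorker' in navigator) {
        // ...registration (shown below) goes here...
      }
      // Otherwise do nothing: the app still works, just not offline
    }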

    If it passes, the actual registration goes like this:

    console.log('Before registration')
    try {
      var registration = await navigator.serviceWorker.register('sw.js')
    } catch (error) {
      console.error(`Registration failed with ${error}`)
    }
    

    That main navigator.serviceWorker.register call instructs the browser to fetch sw.js, treat it as a service worker, and install it. If there's some error in the file, though, this call will throw.

    To expose how the lifecycle works, this can go after that call, still within the try block:

    for (let serviceWorker of ['installing', 'waiting', 'active']) {
      if (registration[serviceWorker]) {
        console.log(
          `Service worker found at 'registration.${serviceWorker}'`,
          registration[serviceWorker]
        )
      }
    }
    

    The first time you run this, at the time of logging, the second log statement will show state: 'installing', but the console's object dropdown will show state: 'activated' by the time you get to click it.

    On subsequent runs, they'll match, since the service worker is already activated. Note the slight terminology difference between state: 'activated' and the fact that the worker is found at registration.active (not activated in the latter) — missing these subtleties can be a source of bugs.
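
    To make the distinction concrete, here is a throwaway snippet (assuming the loop above found a worker at registration.active):

    // The service worker object itself lives at registration.active...
    console.log(registration.active)
    // ...while its state property holds the string 'activated'
    console.log(registration.active.state)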

    The back end included in this lab reports the current epoch time at the endpoint /api/time/. This link can go in the <body> for manual testing purposes in future steps, and to give your app something visible to start with:

    <p><a href="/api/time/">Try a dynamic JSON endpoint.</a></p>
    

    Though not required to pass this check, feel free to load up localhost:3000, open the Console tab of DevTools, and examine the difference in output between first and second loads.


    In terms of developer workflow, service workers have some pitfalls.

    Suppose you tweak something trivial in the service worker, like changing the first log to console.log('OUTSIDE OF EVENTS').

    If you reload — assuming you have Preserve log enabled so it doesn't immediately disappear — you'll see the effects of this in the console. This can be misleading because the new service worker gets installed, but isn't activated yet. Subsequent reloads don't help either — the install event gets fired once, and that's it — and now the app reports that it finds service workers both at .active and .waiting. As mentioned earlier, it waits in the installed state for navigation events (closing the tab, navigating away from the page, etc.) until it's the right time to displace older service workers registered on the same page.

    A forced reload (holding Shift) will activate a waiting worker. However, it won't fetch and install an updated one. So the workflow becomes regular reload (to install) followed by forced reload (to activate).

    Thankfully, Chrome's DevTools has an Application tab with handy features just for this. First, you can see which service workers are active and waiting on the page, and tell the waiting ones to skip waiting and take control of the current page. Better yet, there's an Update on reload checkbox which, when enabled, automatically skips the waiting phase, and creates a new service worker on every reload, even if your service worker code remains byte-identical between reloads.

    Though not required to pass this check, feel free to disable Preserve log in the settings of the Console tab of DevTools, and examine the difference in behavior between first and subsequent reloads (and forced reloads) after making the above change. Then try the same with Update on reload enabled in the Application tab.


    Lastly, if you run into a situation that's tricky to debug, there's also an Unregister button that brings your app back to a "first load" state regarding its service worker. If you find yourself relying on this for every change, though, it hints at something that could be done differently in your service worker, particularly its installation and activation handlers.

  3. Challenge

    Static and Dynamic Caching

    So far, your app will still behave like any other page during an outage, giving a browser connection/loading error. For an offline-ready app, your service worker needs to cache your app and any static assets it needs. Currently, the app itself contains only a link — it will need a dependency on a static asset to have something to cache:

    <button>Smile</button>
    
    var button = document.getElementsByTagName('button')[0]
    button.addEventListener('click', async function buttonClickHandler(event) {
      var response = await fetch('/assets/Emoticon_Smile_Purple.png')
      var blob = await response.blob()
      var img = document.createElement('img')
      img.src = URL.createObjectURL(blob)
      button.parentNode.replaceChild(img, button)
    })
    

    The details are beyond the scope of this lab, but this makes the button replace itself with a smile image (a static asset).

    On the service worker side, your cache will need a name:

    var CACHE_NAME = 'v1'
    

    To populate the cache upon installation, call event.waitUntil(addStaticResourcesToCache()) in installEventHandler. Here, waitUntil expects a promise, and won't let the installation be considered complete until it resolves. You can implement addStaticResourcesToCache as async so it returns a promise either way, but it turns out the cache mechanism from the Service Worker API returns one, and this can be passed along directly as the return value:

    async function addStaticResourcesToCache() {
      console.log('Caching static assets', CACHE_NAME)
      var cache = await caches.open(CACHE_NAME)
      return cache.addAll(['./', './assets/Emoticon_Smile_Purple.png'])
    }
    

    Your service worker has access to caches as a global CacheStorage object. The above code uses that to open cache v1 and return a promise to add fetch responses (if they're 200-level) for the app and its static asset to the cache.
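
    For reference, with the waitUntil call described above, installEventHandler might now look like this (a sketch keeping the log line from the earlier skeleton):

    self.addEventListener('install', function installEventHandler(event) {
      console.log('Got "install" event', event)
      event.waitUntil(addStaticResourcesToCache())
    })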

    With your response cache built, you're ready to put it to use by intercepting fetch events with a new handler:

    self.addEventListener('fetch', function fetchEventHandler(event) {
      console.log('Got "fetch" event', event)
      event.respondWith(cacheOtherwiseNetwork(event))
    })
    

    Unlike waitUntil, you can only call respondWith once, and it expects either a direct Response object or a promise of one. Here's where service workers get somewhat low-level, giving you the opportunity to customize cache logic however you like. For this lab's static assets, you can follow the standard approach of "try the cache; if that fails, try the network; if that fails, give up".

    Here's how to try the cache:

    async function cacheOtherwiseNetwork(event) {
      var { request } = event
      var cachedResponse = await caches.match(request)
      if (cachedResponse && cachedResponse.status === 200) {
        console.log('Found cache match', request.url)
        return cachedResponse
      }
      console.log('Could not find good (200) cache match', request.url)
      // ...next, try the network...
    }
    

    Note that you don't need to open a named cache like you did with addAll because caches.match searches all of your app's caches for you. If you're wondering why you need to check status if addAll won't add non-200-level responses, it's because addAll isn't the only way something can end up in your cache, as you'll see next. To try the network:

    try {
      var responseFromNetwork = await fetch(request)
      var cache = await caches.open(CACHE_NAME)
      await cache.put(request, responseFromNetwork.clone())
      console.log('Network fetched (and now cached)', responseFromNetwork.status)
      return responseFromNetwork
    } catch (error) {
      console.error('Network fetch failed', error)
      return new Response('Offline or network error occurred.', {
        status: 503,
        headers: { 'Content-Type': 'text/plain' },
      })
    }
    

    As you may have guessed, cache.put lets you cache any response, even errors. But why the clone() call? Because Response bodies can only be read once, by design. Normally, it doesn't matter whether you cache or return the clone or the original, but for the purpose of this lab's checks, cache the clone.
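
    To see why the clone matters, here's a hypothetical snippet (not part of the lab's code, imagined inside some async function). A Response body is a stream that can only be consumed once:

    var response = await fetch('/api/time/')
    var copy = response.clone()   // clone before either body is consumed
    await copy.text()             // reading the clone's body works...
    await response.text()         // ...and so does reading the original's, once
    // A second read of either body would reject with a TypeError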

    With that, all paths return a Response object directly to respondWith, and your cache-first fetch handler is complete. Though not required to pass this check, feel free to try the Offline checkbox in the Application tab. If you find it doesn't work exactly as expected, you have two options:

    1. Close your app's tab and reopen it.
    2. Go to the Network tab, right-click a request, and pick Block request domain.

    With the latter option, be sure after you're done testing to unblock the domain before moving on to the next task.


    If you've been experimenting with the dynamic link between tasks, you may have noticed that the previous task introduced a bug: Your "dynamic" link is now permanently cached. The normally always-up-to-date time response is frozen in time.

    The solution is to differentiate between your dynamic and static endpoints. Since you have full control over the cache logic, this difference can be specified however you want. In this case, you can split on whether the request URL lives under /api/ instead of always calling event.respondWith(cacheOtherwiseNetwork(event)) in your fetch handler:

    if (event.request.url.startsWith(`${self.location.origin}/api/`)) {
      event.respondWith(networkOtherwiseCache(event))
    } else {
      event.respondWith(cacheOtherwiseNetwork(event))
    }
    

    The name hints at the type of strategy you'll implement for handling dynamic asset fetches: Try the network first (caching a clone of the response upon success); if that's down, try the cache; if neither works, give up. It's largely the same code, differently ordered:

    async function networkOtherwiseCache(event) {
      var { request } = event
      console.log('Dynamic request', request.url)
      try {
        var responseFromNetwork = await fetch(request)
        var cache = await caches.open('dynamic')
        await cache.put(request, responseFromNetwork.clone())
        console.log('Network fetched (and now cached)', responseFromNetwork.status)
        return responseFromNetwork
      } catch (error) {
        console.log('Will try cache since network fetch failed', error)
        var cachedResponse = await caches.match(request)
        if (cachedResponse && cachedResponse.status === 200) {
          console.log('Found cache match', request.url)
          return cachedResponse
        }
        console.error('No cache match found either', request.url)
        return new Response(
          'Offline or network error occurred, and no cached version available.',
          {
            status: 503,
            headers: { 'Content-Type': 'text/plain' },
          }
        )
      }
    }
    

    Here, instead of CACHE_NAME (i.e., v1), the name is just dynamic because what's being cached isn't tied to a particular version of this particular app.

    But other apps' structures may suggest another strategy — as always, the cache logic can be whatever it needs to be. For instance, you might have particular offline placeholders for particular types of dynamic assets. If you wanted a custom offline page for /api/time/, the catch block's post-cache-attempt section is where you'd return await caches.match(fallbackUrl) when that match succeeds.

    At this point, your app is able to cache static and dynamic assets separately, and static assets are cached under CACHE_NAME. As your app evolves and that name shifts several times, it will accumulate caches that likely contain duplicate or stale data, wasting user storage and slowing cache response time. Handling this aspect of service workers is not automatic.

  4. Challenge

    Optimizing the Offline Experience

    Thankfully, it's not overly arduous to optimize resource management by deleting previous caches.

    While addStaticResourcesToCache() makes sense during installation, purging old caches is better done during activation. A typical user won't have Update on reload enabled, so purging a cache during installation would be premature because the user would still be using it. As you may have guessed, you trigger this with an event.waitUntil(deleteOldCaches()) call in activateEventHandler, then implement deleteOldCaches:

    async function deleteOldCaches() {
      var names = await caches.keys()
      return Promise.all(
        names.map(function maybeDelete(name) {
          if (name !== CACHE_NAME && name !== 'dynamic') {
            console.log('Will delete old cache', name)
            return caches.delete(name)
          }
        })
      )
    }
    

    Here, caches.keys works similarly to Object.keys, returning an array of cache names. Promise.all makes the return value wait to resolve until all the promises returned by caches.delete resolve first. Meanwhile the if makes sure you don't purge the static cache you just installed, nor the separate, unversioned dynamic one.

    If you're wondering if checking every name is needlessly thorough (compared to just targeting the previous version, for example), it's actually a sensible approach in production, where it's easy to issue multiple updates to your service worker between the visits of a given user. In other words, a service worker should be prepared to update from any prior version.

    Finally, bump CACHE_NAME to v2 so the new code actually has a cache to delete (even though, in this case, the caches will contain the same thing). If you're doing the manual testing mentioned in some tasks, reload your app now so the Smile button becomes visible. Then be sure to not click the Smile button nor reload the page until instructed to do so, after passing the next task check.

    The Update on reload feature is nice for developers, but what about regular users? If you want the same behavior of automatically skipping to activation, there's a programmatic way to do this: call self.skipWaiting() within your install handler.
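
    For illustration, a sketch of the install handler with that call added alongside the existing waitUntil from earlier in this lab:

    self.addEventListener('install', function installEventHandler(event) {
      console.log('Got "install" event', event)
      // Activate this version immediately instead of waiting for navigation events
      self.skipWaiting()
      event.waitUntil(addStaticResourcesToCache())
    })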

    But before you try deploying that, make a few other changes so you can see how the side-effects of this tactic can play out. Remove your app's Smile button and associated click handler code. In your service worker, change the array in the cache.addAll call parameter to no longer include ./assets/Emoticon_Smile_Purple.png. Finally, bump CACHE_NAME to v3 (this time with good reason, given the changes to the app). Though not required to pass this check, feel free to try the following sequence to discover the pitfall of skipWaiting, after making sure that your code passes the check:

    1. Still without reloading yet, disable the Update on reload checkbox to simulate a production user environment.
    2. Do a regular reload of the page, as any user might do.
    3. Enable the Offline checkbox, simulating the user going out of range of their WiFi connection (for example).
    4. Click the Smile button. (Normally, it will no longer do anything user-visible, instead producing an error in the DevTools console.)

    Because service workers give you full control over caching and add an additional state element to keep in mind while testing (online versus offline at various points during a sequence of events), it's easy for your code to exhibit a pitfall like the above and just as easy to remain unaware of it. But what might seem like an edge case can be detrimental to your app's offline experience: installing and activating a new version of your service worker broke the core functionality (the Smile button) of older versions of your app by removing their versions of the cache. That's because skipWaiting causes an activation where the service worker takes over the current page as loaded, rather than also reloading the current page.

    Only use skipWaiting if the incoming version doesn't purge any assets needed by any previous versions. For this final task, don't add back in the Smile button. But keeping in mind that, in a real-world scenario, users might not be updating from the previous task's version directly, remove the skipWaiting call nonetheless.

    The last facet of offline experience this lab will explore is the navigation preload feature. When the user navigates (e.g., by clicking a link) on a page with a registered service worker, the browser normally boots up the service worker before fetching the new resource (after all, it's the service worker's job to intercept fetches). In many cases the delay might be short, but in some circumstances (like a device under heavy load), it can be long enough to be noticeable.

    Navigation preload solves this by booting the service worker and pre-fetching the new resource in parallel. It's then up to the service worker to handle the situation. Enabling the feature is done during activation:

    event.waitUntil(enableNavigationPreload())
    

    Normally, the two waitUntil calls could be in any order, but this lab's check requires that the first call enables navigation preload and the second call deletes old caches.
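
    A sketch of how activateEventHandler might look with both calls in the required order:

    self.addEventListener('activate', function activateEventHandler(event) {
      console.log('Got "activate" event', event)
      event.waitUntil(enableNavigationPreload())
      event.waitUntil(deleteOldCaches())
    })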

    The actual implementation is straightforward, too:

    async function enableNavigationPreload() {
      return self.registration.navigationPreload?.enable()
    }
    

    The ?. notation results in the async promise resolving immediately to undefined if this feature isn't available in a user's browser. If it is, enable also returns a promise that resolves to undefined.
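
    Spelled out without the optional chaining operator, the same function would read:

    async function enableNavigationPreload() {
      if (self.registration.navigationPreload) {
        return self.registration.navigationPreload.enable()
      }
      // Feature unavailable: the function's promise resolves to undefined
    }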

    For the purpose of this lab, you'll implement the logic in only one of your cache strategy functions, cacheOtherwiseNetwork, which you can rename to cacheOtherwisePreloadOtherwiseNetwork to reflect its updated logic. In other words, between the cache check and the network try/catch, see if the requested resource was made available by navigation preloading:

    var cache = await caches.open(CACHE_NAME)
    var preloadResponse = await event.preloadResponse
    if (preloadResponse && preloadResponse.status === 200) {
      console.log('Using preload', preloadResponse)
      await cache.put(request, preloadResponse.clone())
      return preloadResponse
    }
    console.log('Could not find good (200) preload', request.url)
    

    This new section works much like the cache check. The main difference is that event.preloadResponse already targets the current fetch URL for the event — there's no need to pass a parameter. Also be careful not to trip over the fact that event.preloadResponse is actually a promise, whereas the local preloadResponse is the actual resulting response (if it turned out to be available).

    Since both this new section and the try block use the same var cache = await caches.open(CACHE_NAME), you can delete it from the try block.
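
    Putting the pieces together, cacheOtherwisePreloadOtherwiseNetwork might look roughly like this (a sketch assembled from the snippets above):

    async function cacheOtherwisePreloadOtherwiseNetwork(event) {
      var { request } = event

      // 1. Try the cache
      var cachedResponse = await caches.match(request)
      if (cachedResponse && cachedResponse.status === 200) {
        console.log('Found cache match', request.url)
        return cachedResponse
      }
      console.log('Could not find good (200) cache match', request.url)

      // 2. Try the navigation preload response, if the browser made one
      var cache = await caches.open(CACHE_NAME)
      var preloadResponse = await event.preloadResponse
      if (preloadResponse && preloadResponse.status === 200) {
        console.log('Using preload', preloadResponse)
        await cache.put(request, preloadResponse.clone())
        return preloadResponse
      }
      console.log('Could not find good (200) preload', request.url)

      // 3. Fall back to the network
      try {
        var responseFromNetwork = await fetch(request)
        await cache.put(request, responseFromNetwork.clone())
        console.log('Network fetched (and now cached)', responseFromNetwork.status)
        return responseFromNetwork
      } catch (error) {
        console.error('Network fetch failed', error)
        return new Response('Offline or network error occurred.', {
          status: 503,
          headers: { 'Content-Type': 'text/plain' },
        })
      }
    }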

    Note that Chrome will unfortunately complain in the console that "The service worker navigation preload request was cancelled before 'preloadResponse' settled", but this is longstanding behavior that's safe to ignore. Service workers give you a lot of control, so there are many ways to approach the details, and many ways service workers can interact with different caching strategies. You can get as sophisticated as your application needs.

    --

    Congratulations on completing this lab!

Kevin has 25+ years in full-stack development. Now he's focused on PostgreSQL and JavaScript. He's also used Haxe to create indie games, after a long history in desktop apps and Perl back ends.
