Guided: Debug and Optimize Asynchronous JavaScript Code

In this lab, you'll gain hands-on practice debugging and profiling asynchronous JavaScript code with developer tools. You'll also learn how to handle asynchronous errors properly and how to use asynchronous techniques to optimize network requests.

Path Info

Level: Intermediate
Duration: 47m
Last updated: Jul 17, 2025

Table of Contents

  1. Challenge

    Introduction

    This lab will introduce two common JavaScript developer tools and how they can be used to debug and profile asynchronous code. No prior experience with them is required, and neither is any experience with debugging or profiling.

    The two tools you'll use are VS Code (provided in this lab) and DevTools (short for "web developer tools"). DevTools is a feature of most desktop web browsers. This lab only officially supports Google Chrome, but any Chromium-based browser (e.g., Edge or Brave) will likely have the same interface.

    Some notes:

    • Depending on the task, you'll do all your coding in either /src/main.mjs (for Step 2) or /index.html (for Steps 3-4).
    • Per best practices, it's recommended you keep strict mode enabled by leaving 'use strict' as the first line of any JavaScript file or <script> area.
    • Sometimes you'll replace your previous content in the same file, and sometimes you'll build upon it — this will be made explicit in the instructions of every task.
    • If you get stuck on a task, you can consult the solutions/ folder.

    In VS Code, you can use the Explorer sidebar (the default, and the top icon of those on the left within the VS Code interface) to navigate files and the Run and Debug sidebar (the second icon from the bottom in the same set) to use its debugger.

  2. Challenge

    Debugging

    Often, when it's time to debug, JavaScript developers will default to using console.log to figure out what's happening with their code's variables and control flow. But VS Code has a built-in debugger that provides you a lot more detail with a lot less work.

    Take this code, for example:

    var host = 'localhost:5173',
      url = `http://${host}/birthday-parties`
    
    fetch(url)
      .then(function fetchHandler(response) {
        var contentType = response.headers.get('Content-Type')
        response.json().then(function jsonHandler(data) {
          return {
            contentType,
            data,
          }
        })
      })
      .then(console.log)
    

    Your colleague is having trouble figuring out what's wrong with it. When they run it, they expect it to log some preliminary response data, but instead, it's just logging undefined.

    You could add more console.log statements, but you decide to use the debugger instead.

    As you'll see in a moment, the debugger lets you pause your program on any line and control your program's running state with six controls:

    1. Continue (runs normally*)
    2. Step Over (runs any functions normally*, stepping to the next line in the current scope)
    3. Step Into (steps into the bodies of any functions called from the current line)
    4. Step Out (runs normally* until the end of the current function, stepping to the next line of the calling function)
    5. Restart
    6. Stop

    * until it hits another breakpoint, including within any functions that are called

    While paused, VS Code lets you do several very useful things:

    1. Mouse over any variable to see its current value.
    2. See all variables listed in the sidebar organized by scope.
    3. Add watch expressions in the sidebar if you need to see a live view of any JavaScript expression more complicated than just the current value of a variable, including the results of function calls.
    4. See the current call stack in the sidebar, showing which functions have called each other to lead to the current line of code and are still pending execution, including internal code and libraries, and click through to see the calling context of any function call.
    5. See loaded scripts in the sidebar.
    6. Manage breakpoints in the sidebar to indicate which ones the debugger should honor and which ones it should ignore.

    Your colleague thanks you for your help. A while later, they return for further debugging help. Their code has evolved into this:
    'use strict'
    var host = 'localhost:5173',
      url = `http://${host}/birthday-parties`
    
    async function fetchBirthdayData() {
      var response = await fetch(url),
        data = response.json(),
        { birthdayCount, birthdays } = data
      return { birthdayCount, birthdays }
    }
    
    function makeAnnouncement(birthdayData) {
      var index = birthdayData.birthdatesAnnounced ||= 0
      if (index >= birthdayData.birthdayCount) return
      birthdayData.birthdatesAnnounced++
      var { honouree, birthdate } = birthdayData.birthdays[index]
      console.log(`Happy birthday to ${honouree} (${birthdate})!`)
    }
    
    var birthdayData = await fetchBirthdayData()
    makeAnnouncement(birthdayData)
    makeAnnouncement(birthdayData)
    makeAnnouncement(birthdayData)
    

    Unfortunately, when they run this, Node.js crashes with Uncaught TypeError TypeError: Cannot read properties of undefined (reading '0'). Fortunately, it next indicates the line within makeAnnouncement where the crash occurs.

    That's not enough for your colleague to spot their mistake, so it's once again time for the debugger, which has a handy feature specifically for crashes.

    But there's a pitfall, given your colleague's new approach. If you try to Step Into (or Over) on a line containing the await keyword, it will behave as if you clicked Continue. This is because await effectively splits the function at that particular point, letting other code run until the awaited promise resolves — so there's no more code to step through.

    The workaround: Set another breakpoint whenever you step up to an await. Ironically, a missing await in a different spot could have made this much more difficult to debug, even with VS Code's debugger. In particular, VS Code would ignore all breakpoints after the first await within fetchBirthdayData() if it weren't awaited by the caller. In a way, that's a helpful glitch, if you can remember that it exists: If breakpoints are misbehaving in an async function, there's a chance the caller isn't awaiting it. If so, your troubleshooting is done and you have the fix, too. (Unless there are other bugs.)
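
    To make the symptom concrete, here's a hypothetical variant of the calling code (not the lab's actual files) that would trigger that breakpoint-skipping behavior:

    // Hypothetical: the caller forgets the await
    var birthdayData = fetchBirthdayData() // birthdayData is a pending Promise, not the data
    // As described above, VS Code would then skip breakpoints set after the
    // first await inside fetchBirthdayData()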

    --

    Like your colleague, you'd like to move from older promise-based patterns to async/await best practices. For practice, you endeavor to improve the following code snippet along those lines:

    'use strict'
    
    function throwsAnError() {
      var random = Math.random()
      if (random > 0.66) {
        encodeURI('\uDFFF')
      } else if (random > 0.33) {
        new Array(-1)
      }

      Promise.reject('this line should never be reached')
    }
    
    throwsAnError().catch(function (e) {
      console.error('Message was:', e.message)
    })
    
    console.log('done')
    

    At a quick glance, the intention seems to be to log one of two error messages, followed by the word "done."

    But if you were to open the default Terminal pane in VS Code (with Ctrl + Backtick (`), a key that on many keyboards sits near the upper-left corner just below Esc, or via the upper-left hamburger menu → View → Terminal) and run node src/main.mjs several times in a row, you would see that it never reaches the console.log line, instead crashing randomly in one of three different ways:

    1. URIError: URI malformed
    2. RangeError: Invalid array length
    3. TypeError: Cannot read properties of undefined (reading 'catch')

    The first two cases give a helpful enough stack trace and point at the actual cause. But the last one, already a bit cryptic, points at throwsAnError().catch(function (e) {. That's because this code contains a bug — it's chaining catch on a function that's not returning a promise. In fact, the line where return was left out is one that claims it "should never be reached" — another bug.
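
    Here's a stripped-down illustration of that third crash, using a hypothetical rejects function rather than the lab code:

    function rejects() {
      Promise.reject(new Error('oops')) // the return was left out
    }

    // rejects() returns undefined, so there's nothing to chain .catch onto:
    rejects().catch(console.error) // TypeError: Cannot read properties of undefined (reading 'catch')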

    The debugger, with Caught Exceptions breakpoints enabled, is an improvement already: it shows Promise.reject to be the cause of the third type of crash, and in all three types, shows the call stack clearly when it pauses. But you don't always have a debugger running when something crashes in real life, and if your server logs are all you have to go on initially, you'll want them to be consistently helpful.

    The main ways to improve this code are to use try/catch in place of catch promise chaining, and to throw an error instead of using Promise.reject, making the "missing return" bug impossible. (You won't be fixing the "should never be reached" logical error — just handling it better.) In production scenarios, the catch block would also be one place you could convert any potential runtime errors that could arise due to user behavior into user-visible error messages with helpful suggestions, and log everything else to a database or logging service.
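
    Here's one possible shape of the improved code. This is a sketch, not necessarily identical to the lab's reference solution in the solutions/ folder, and it assumes src/main.mjs is an ES module so top-level await is available:

    'use strict'

    async function throwsAnError() {
      var random = Math.random()
      if (random > 0.66) {
        encodeURI('\uDFFF')
      } else if (random > 0.33) {
        new Array(-1)
      }

      // Throwing (instead of Promise.reject) means a forgotten return can no longer hide the error
      throw new Error('this line should never be reached')
    }

    try {
      await throwsAnError()
    } catch (e) {
      // In production, user-caused errors could become friendly messages here,
      // with everything else sent to a logging service
      console.error('Message was:', e.message)
    }

    console.log('done')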

    As for the improvements you made, if you're curious about how each part of the code pattern you used affects the outcome, here's a breakdown:

    Pattern Outcomes

    Here's what happens with .catch:

    | Final Line | Final Line Reached | Other Error Thrown Beforehand |
    | --- | --- | --- |
    | return Promise.reject(new Error(msg)) | Error output after "done" | Crash with other error |
    | return Promise.reject(msg) | Error output after "done" and message is undefined | Crash with other error |
    | return new Error(msg) | TypeError crash (.catch is not a function) | Crash with other error |
    | return msg | TypeError crash (.catch is not a function) | Crash with other error |
    | Promise.reject(new Error(msg)) | TypeError crash (Cannot read properties of undefined (reading 'catch')) | Crash with other error |
    | Promise.reject(msg) | TypeError crash (Cannot read properties of undefined (reading 'catch')) | Crash with other error |
    | throw new Error(msg) | Crash with this error | Crash with other error |

    Here's what happens with try/catch, with a ✔ meaning that Node.js doesn't crash, and logs the offending error correctly via console.error before logging done via console.log:

    | Final Line | Final Line Reached | Other Error Thrown Beforehand |
    | --- | --- | --- |
    | return Promise.reject(new Error(msg)) | ✔ | ✔ |
    | return Promise.reject(msg) | Error output is Message was: undefined | ✔ |
    | return new Error(msg) | No error output | ✔ |
    | return msg | No error output | ✔ |
    | Promise.reject(new Error(msg)) | Crash with this error | ✔ |
    | Promise.reject(msg) | UnhandledPromiseRejection crash (with irrelevant stack trace, but rejection reason clue) | ✔ |
    | throw new Error(msg) | ✔ | ✔ |

    Hence it makes sense, from a unified error handling perspective, to prefer throw new Error(msg) and try/catch.

  3. Challenge

    Optimizing Identical Requests

    For the rest of the lab, you'll be using Chrome DevTools (built into your browser) to help optimize code, and the lab's VS Code interface for editing files (but not for debugging).

    To give a somewhat real-world feel to the next two tasks, the first app you'll profile uses modern vanilla web components rather than a specific framework — but its encapsulation approach is nonetheless widely applicable. Here's what the HTML looks like:

    <template id="user-latest-template">
      <style>
        ::slotted([slot='unreads']) {
          background-color: darkolivegreen;
          border: 3px solid darkseagreen;
          border-radius: 10px;
          min-width: 20px;
          text-align: center;
          display: inline-block;
          color: white;
        }
        ::slotted([slot='previewOfLatest']) {
          border: 1px solid black;
          border-radius: 5px;
          padding-bottom: 1.7em;
        }
        ::slotted([slot='topReactions']) {
          border: 1px solid black;
          border-radius: 5px;
          top: -0.9em;
          left: 1em;
          position: relative;
          display: inline;
        }
      </style>
      <div>
        <slot name="unreads"></slot> unread messages. Latest:
        <p><slot name="previewOfLatest"></slot></p>
        <p><slot name="topReactions"></slot></p>
      </div>
    </template>
    
    <user-latest>
      <span slot="unreads">⏳</span>
      <span slot="previewOfLatest">Loading preview of latest unread message…</span>
      <span slot="topReactions">…</span>
    </user-latest>
    

    (If you're not familiar with web components, the idea here is that the <template> provides <slot>s that can be filled by other elements via matching slot attributes to template slot name attributes. The values in <user-latest> are defaults for that particular instance of the template.)

    This approach needs some JavaScript to wire up custom <user-latest> elements to the template using the built-in window.customElements registry:

    'use strict'
    customElements.define(
      'user-latest',
      class extends HTMLElement {
        constructor() {
          var template = document.getElementById('user-latest-template'),
            templateContent = template.content,
            newNode = templateContent.cloneNode(true)
          super().attachShadow({ mode: 'closed' }).appendChild(newNode)
        }
        async init() {
          await fetchSlotData(this, 'unreads')
          await fetchSlotData(this, 'previewOfLatest')
          await fetchSlotData(this, 'topReactions')
        }
      }
    )
    async function fetchSlotData(component, slotName) {
      var url = `${window.location.origin}/${component.localName}`,
        slot = component.querySelector(`[slot=${slotName}]`)
      try {
        var json = await fetchJSON(url)
        if (!json[slotName]) {
          throw new Error(`No data for "${slotName}" in response`)
        }
        slot.innerHTML = json[slotName]
      } catch (e) {
        slot.textContent =
          'Error — please try again and contact the helpdesk if needed.'
        console.error(e)
      }
    }
    async function fetchJSON(url) {
      var response = await fetch(url)
      if (!response.ok) {
        throw new Error(await response.text())
      }
      return await response.json()
    }
    document.querySelector('user-latest').init()
    

    A detailed explanation of web components is beyond the scope of this lab, but in brief:

    1. The class makes it so any change to the contents of <user-latest>'s children will automatically render as declared in the HTML of its corresponding template.
    2. When init is called at the end, it calls fetchSlotData three times, serially.
    3. Each fetchSlotData call results in a network fetch call (via fetchJSON).

    As you'll see when you profile this app's network requests, the API endpoint these fetch calls are hitting is rather slow, taking a whole second per call. In this lab, it's assumed you don't have access to the back end to try to optimize it, but one thing you can do from the front end is to parallelize the requests. One technique for that is to use Promise.all to combine them:

    await Promise.all([
      fetchSlotData(this, 'unreads'),
      fetchSlotData(this, 'previewOfLatest'),
      fetchSlotData(this, 'topReactions'),
    ])
    

    This awaits the successful completion of all of them. (Similarly, allSettled can be used if you want to await group completion, whether each one is successful or not — all, by contrast, will short-circuit if any of its promises are rejected. But this lab expects all here.)
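
    For illustration only (the lab expects all), an allSettled version would look like the sketch below, with each element reporting its own outcome:

    var outcomes = await Promise.allSettled([
      fetchSlotData(this, 'unreads'),
      fetchSlotData(this, 'previewOfLatest'),
      fetchSlotData(this, 'topReactions'),
    ])
    // Each entry is { status: 'fulfilled', value: ... } or { status: 'rejected', reason: ... }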

    But parallelizing with Promise.all alone won't suffice:

    1. Chrome automatically re-serializes in cases like this (multiple identical requests).
    2. Though Firefox allows parallel identical requests (as of this writing), the back-end server happens to prevent this technique anyway, rejecting two of the requests with an HTTP 429 Too Many Requests error.

    That's OK: As you saw in the previous task, two of this app's three fetch calls are redundant, anyway. You could make a single call to fetchSlotData update all slot data with one fetch, but for the purposes of this lab, you'll keep them independent (as they might well be in a more complicated real-world scenario). With that restriction, how can you deduplicate these requests closer to the network layer, i.e., without modifying fetchSlotData?

    The answer is by wrapping fetchJSON in a deduplication handler. There are libraries that can do this automatically for you, but the technique is straightforward enough to do manually to see how it works.

    First, you can rename fetchJSON to generateJSONPromise to more accurately capture the return value. Then make a new fetchJSON function (since that is what fetchSlotData will be calling) with a Map persisting outside it, to manage these JSON promises:

    var pendingPromises = new Map()
    async function fetchJSON(url) {
      if (pendingPromises.has(url)) {
        return pendingPromises.get(url)
      }
    
      var jsonPromise = generateJSONPromise(url)
      pendingPromises.set(url, jsonPromise)
    
      try {
        return await jsonPromise
      } finally {
        pendingPromises.delete(url)
      }
    }
    

    The logic is:

    1. If an identical request is already pending, return its promise.
    2. Otherwise, make the request, and store its promise.
    3. Wait for the request to succeed, then return the JSON it promised.
    4. Whether it succeeds or not, stop storing its promise, because it's no longer pending.

    In other words, it's just using the map as a "pending promise" cache. Note that deduplicating requests isn't the only approach to this situation. Persistent caching of the actual data is also an option, but this carries with it the downside that user-visible data may go stale despite internal requests to refresh it. Which technique to use always depends on what's appropriate in the context of your app.
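
    For comparison, here's a minimal sketch of that persistent-caching alternative. The cachedData and cachedFetchJSON names are hypothetical, not part of the lab files:

    var cachedData = new Map()
    async function cachedFetchJSON(url) {
      if (cachedData.has(url)) {
        return cachedData.get(url) // may be stale; nothing ever refreshes this entry
      }
      var json = await generateJSONPromise(url)
      cachedData.set(url, json)
      return json
    }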

  4. Challenge

    Optimizing Rapid Requests

    Instead of the app from the previous two tasks, the remaining tasks in this lab will use a search bar that shows live results as you type. If you'd like a more typical look while manually testing, some CSS in the <head> area provides it:

    <style>
      form {
        width: 300px;
      }
      ul {
        border: 1px solid black;
        list-style: none;
        margin: 0 0 0 4em;
        padding: 1px;
      }
      ul:empty {
        border: none;
      }
      a {
        text-decoration: none;
      }
      li:hover {
        background-color: bisque;
      }
    </style>
    

    The HTML is straightforward, with a <ul> element to display search results:

    <form>
      <div><label>Search: </label><input type="search" /></div>
      <ul></ul>
    </form>
    

    So is the JavaScript:

    'use strict'
    var search = document.querySelector('input'),
      ul = document.querySelector('ul')
    search.addEventListener('input', searchHandler)
    async function searchHandler() {
      var query = encodeURIComponent(search.value),
        url = `${window.location.origin}/search?q=${query}`
      try {
        var response = await fetch(url)
        if (!response.ok) {
          throw new Error(await response.text())
        }
        var { results } = await response.json()
        ul.innerHTML = results
          .map(function resultMapper(result) {
            return `<a href="/${result}"><li>${result}</li></a>`
          })
          .join('')
      } catch (e) {
        ul.textContent =
          'Error — please try again and contact the helpdesk if needed.'
        console.error(e)
      }
    }
    

    Listening for the 'input' event implies that every user keystroke will result in a fetch call.

    Thankfully, the back end this time won't respond with any 429 errors, no matter how fast you type. However, it's certainly less responsive than you'd like, and your colleague who maintains the back end has asked that your front-end code not make so many requests. Having two good reasons to optimize, you decide to have a look at the Network tab to see how it behaves before proceeding.

    Debouncing is a common technique for optimizing fetching in scenarios like search-as-you-type. It discards all calls to a function except the most recent one, waiting until some interval (you'll use 300 ms for this lab) has passed since the most recent call before actually invoking it.

    In this context, you'll debounce searchHandler so that no fetch actually occurs until 300 ms after the user finishes typing. (For example, if they keep typing a character every 250 ms forever, it will never fetch.)

    First, the event listener will need to use a new function, debouncedSearchHandler, instead of using searchHandler directly. You can create debouncedSearchHandler by calling a function, debounce, passing it the original searchHandler and 300 for how many milliseconds to wait.
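
    In other words, the listener wiring changes to something like this (a sketch; debounce itself is defined next):

    var debouncedSearchHandler = debounce(searchHandler, 300)
    search.addEventListener('input', debouncedSearchHandler)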

    The function debounce is (like the deduplication technique earlier) something you can find in utility libraries, but the concept is straightforward enough to implement so you can see how it works:

    function debounce(func, wait) {
      var timeout
      return function (...args) {
        clearTimeout(timeout)
        timeout = setTimeout(function launch() {
          func.call(this, ...args)
        }, wait)
      }
    }
    

    This code returns a function that can be used as a drop-in replacement for the function you pass it. First, it cancels the timer from any previous call to the debounced replacement (if there wasn't any, no problem — clearTimeout(undefined) won't throw an error). Then, it creates a new timer, ready to call the original function in its original context (this) with all its original arguments if it has any (...args), after a wait of wait milliseconds.

    Already the UX has improved, and your colleague is also happy with the reduction in server traffic. For their part, they've also been working on optimizing back-end performance. They ask that you get some preliminary results with their new version by appending &version=2 to all request URIs while you continue your front-end work, and they tell you that you can decrease the debounce wait time to 200 ms.
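
    Those two adjustments might look like this (a sketch, reusing the names from the earlier snippets):

    // Inside searchHandler, the URL gains the version query parameter:
    var query = encodeURIComponent(search.value),
      url = `${window.location.origin}/search?q=${query}&version=2`

    // And the listener wiring uses the shorter 200 ms debounce interval:
    var debouncedSearchHandler = debounce(searchHandler, 200)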

    The new back-end version is indeed faster in most cases, with longer search strings taking less time to complete than shorter ones. But then another colleague in QA reports a strange side effect: In some cases, the correct search results flash momentarily on the screen, only to be replaced with incorrect results! The way they were able to reproduce this when searching for "echo" was to type just the first letter, wait very briefly (200-600 ms), then type the rest rapidly.

    It turns out the "echo" query's results are so fast, they get there before the earlier "e" query completes. When it finally does, it replaces the "echo" results.

    Thankfully, the web platform provides a solution to this scenario in the form of AbortController, which makes it possible to abort previous fetch calls — in this case, fixing the UX glitch by allowing your code to abort the "e" query as soon as the "echo" query is made. Your back-end specialist colleague agrees this would help even further with server loads.

    Abortable fetches don't require debouncing, but your team determines that integrating the two should allow your project to benefit from both approaches.

    To be able to abort a fetch call, you have to pass a second argument: an options object whose signal property is an AbortSignal:

    fetch(url, { signal }) // where signal is an AbortSignal
    

    This has some other implications for your searchHandler function:

    1. It has to receive this signal as its second parameter so it can fetch as shown above.
    2. Its catch block has to filter out the AbortError that's generated when a fetch is aborted; otherwise, users will see spurious error messages under otherwise normal conditions. (For that, you can wrap the contents of the block in if (e.name !== 'AbortError') { ... }.) A sketch of both changes follows this list.
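
    Here's a sketch of how searchHandler might end up looking, keeping the &version=2 query from the previous step (only the signal parameter, the { signal } option, and the AbortError filter are new):

    async function searchHandler(event, signal) {
      var query = encodeURIComponent(search.value),
        url = `${window.location.origin}/search?q=${query}&version=2`
      try {
        var response = await fetch(url, { signal }) // pass the AbortSignal through to fetch
        if (!response.ok) {
          throw new Error(await response.text())
        }
        var { results } = await response.json()
        ul.innerHTML = results
          .map(function resultMapper(result) {
            return `<a href="/${result}"><li>${result}</li></a>`
          })
          .join('')
      } catch (e) {
        if (e.name !== 'AbortError') { // aborted fetches aren't user-facing errors
          ul.textContent =
            'Error — please try again and contact the helpdesk if needed.'
          console.error(e)
        }
      }
    }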

    Signal generation and management will occur in your debounce function. Alongside timeout, it will also need controller, initially null.

    Then, within your debounced replacement function, the first thing it will need to do is:

    if (controller) controller.abort()
    controller = new AbortController()
    var signal = controller.signal
    

    In other words, abort a previous request if there is one, then make a new AbortController so you can get the AbortSignal from its signal property.

    Finally, within the inner launch function, func.call has to be invoked with this new signal as the last parameter.
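
    Putting it all together, the combined debounce might look something like the sketch below. This is one way to integrate the pieces, not necessarily the lab's reference solution:

    function debounce(func, wait) {
      var timeout,
        controller = null
      return function (...args) {
        if (controller) controller.abort() // cancel the previous request, if any
        controller = new AbortController()
        var signal = controller.signal
        clearTimeout(timeout)
        timeout = setTimeout(function launch() {
          func.call(this, ...args, signal) // the signal is passed as the last parameter
        }, wait)
      }
    }

    search.addEventListener('input', debounce(searchHandler, 200))

    Congratulations on completing this lab!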

Kevin has 25+ years in full-stack development. Now he's focused on PostgreSQL and JavaScript. He's also used Haxe to create indie games, after a long history in desktop apps and Perl back ends.
