# Guided: Vitest Foundations
In this lab, you'll learn how to get set up to write your first Vitest unit tests. You'll then gain hands-on practice doing asynchronous testing and leveraging a few of Vitest's more powerful features.

## Introduction
This lab will guide you through the installation and configuration of Vitest, followed by various testing scenarios. You don't need any prior Vitest or automated testing experience to complete this lab, but it's assumed that you're generally comfortable with programming, and with writing JavaScript code in particular.
Some notes:

- You'll code in `tests/main.test.js` for all your tests except one, which will be explained later.
- Generally you'll replace, rather than accumulate, your previous test in the same file.
- Leave `'use strict'` in place at the top of this file.
- If you get stuck on a task, you can consult the `solutions/` folder.
## Vitest Setup and First Test
The first step to using Vitest is to install (`i`) it as a development dependency (`-D`) using `npm i -D vitest`. Normally, you would run this command yourself, but in this lab, it's already been done for you.

Vitest can work out of the box without a configuration file, but this lab will require a configuration tweak in a later step. An empty file called `vitest.config.mjs` has been provided for you, but an empty config is worse than none at all: Vitest expects this file to export a configuration object created by its `defineConfig` function, which you need to import:

```js
import { defineConfig } from 'vitest/config'
```

Then, export the result of calling that function with one parameter: an object containing the key `test` corresponding to an empty object:

```js
export default defineConfig({
  test: {
  },
})
```

Vitest-specific options go within that empty object corresponding to the `test` key. You'll see an example option in a later step, but for now, this is enough boilerplate for Vitest to run without complaint.

If you add `"test": "vitest"` to the `"scripts"` section of `package.json`, you can run Vitest in the Terminal with a quick `npm t`.

With that, Vitest is ready to run. This lab relies on Task Checks to check your progress, but if you like, you can also keep Vitest running as you code by executing `npm t` in the built-in Terminal tab. (`npm t` is shorthand for `npm test`, which then runs the test script you just added to `package.json`.) Vitest will continually watch for changes to the codebase and rerun tests as needed. Since `tests/main.test.js` has no tests yet, Vitest will complain with `Error: No test suite found...`, but keep running.

Either way, you can now write your first test. The code you'll test is a function called `sum` that takes two parameters and adds them together:

```js
sum(2, 3) // 5
```
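If you're curious what such a function looks like on the inside, it could be as small as the sketch below. This is just an assumption for illustration; the lab provides its own version in `src/main.js`:

```javascript
// Hypothetical implementation of sum; the lab ships its own in src/main.js.
function sum(a, b) {
  return a + b
}

console.log(sum(2, 3)) // 5
```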
You can import it like this:

```js
import { sum } from '../src/main.js'
```
To create a test, you'll need two functions:

- `it` for creating tests
- `expect` for adding expectations that must be met for the test to pass

You can import these like so:

```js
import { expect, it } from 'vitest'
```
Next, call `it` with two parameters: a test label and a function containing your testing logic and `expect` calls:

```js
it('can add', function () {
  // expect() calls (and, in this case, sum() calls) go here
})
```

Notice how the present-tense test label makes the line read like an English phrase about the overall test you're asserting: that "it can add."
What do `expect` calls look like? You give them one parameter, the value you have expectations about, and then chain another Vitest function to assert something about the value in question.

Vitest has a rich variety of chainable functions available that make your assertions read fairly close to real English, just like with `it`. In this case, you can chain `toBe` to check for equality:

```js
expect(sum(1, 2)).toBe(3)
```

The call to your function under test (`sum`) can be separated from the `expect` logic, if you prefer:

```js
var result = sum(11, 23)
expect(result).toBe(34)
```
With that, you're saying that to check whether "it can add," you "expect `result` to be 34."

Notes:

- It's conventional to pass the value from the function under test to `expect`, and to pass the predetermined value to functions like `toBe`.
- The `it` function is an alias; the canonical function name is actually `test`. In many cases, though, `it` makes the resulting label read closer to English, if you prefer your test descriptions that way.
## In-source Tests
It's most common to write your tests in files separate from the code they're testing, as you did in the previous task. However, there are times when access to a module's private state can be helpful for testing. There are several ways to handle such a case, but one convenient approach, for reasonably small modules, is in-source testing: having your tests live in the same file as the code under test.
Vitest is capable of this, but its default configuration treats plain `.js` files (i.e., those without `.test.` or `.spec.` in the filename) as part of the codebase under test, rather than part of the test suite. Specifically, its default is:

```js
include: ['**/*.{test,spec}.{js,mjs,cjs,ts,mts,cts,jsx,tsx}'],
```
To use in-source testing, you need to modify `vitest.config.mjs` and add a key-value pair to the object corresponding to the `test` key. The pair to add, in this case, is very straightforward:

```js
includeSource: ['src/main-in-source.js'],
```
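Putting the two configuration pieces together, the finished `vitest.config.mjs` would look something like this sketch:

```js
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    includeSource: ['src/main-in-source.js'],
  },
})
```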
This will enable Vitest to search for in-source tests within `src/main-in-source.js`, which is where, exceptionally, you'll do your coding for the next task.

### The Scenario
If you open that file, you'll see that it manages some state, tracking which people have been seen already. It exports two functions, `see` (to mark someone as "seen") and `seen` (to check whether someone has been seen already).

Recently, a colleague of yours discovered that this module had a bug: sometimes, `seen` would return `true` when it hadn't seen a person, and `false` when it had. They fixed this bug and wrote a test to cover its behavior to avoid regressions.

Unfortunately, they soon after discovered similar buggy behavior, but only when the functions were being called with multiple people. They fixed this bug as well, and wanted to add a regression test. They noticed that they would need some way to reset the private state (`seenList`) between tests, without unnecessarily expanding the API surface by exporting the state. They heard that Vitest's in-source tests could help, but ran out of time before going on vacation. They left this task to you to finish.

At the bottom of the file, note this conditional block:
```js
if (import.meta.vitest) {
  var { it, expect } = import.meta.vitest
  // ...
}
```

In-source tests must live within such a block, since you don't want them running when you're not testing.
If you simply write the second test, only your colleague's test will pass, and their test will cause the second test to fail, because Alice has already been seen in the first test.
To reset the state before each test, you'll need to add `beforeEach` alongside the importation of `it` and `expect`. Then, call `beforeEach` with a function that resets the state (i.e., `seenList = {}`).

That was the only in-source test this lab requires you to write. You can return to `tests/main.test.js` for the rest of your test code, and for each task from now on, the Task Checks require you to replace the tests from previous tasks in that file rather than accumulating them.
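Put together, the finished file might look something like the sketch below. The `see` and `seen` bodies here are assumptions for illustration; the lab's actual `src/main-in-source.js` (and your colleague's exact test) may differ:

```javascript
// Sketch of src/main-in-source.js with its in-source tests completed.
// Private state: not exported, so ordinary external tests can't reset it.
var seenList = {}

export function see(person) {
  seenList[person] = true
}

export function seen(person) {
  return Boolean(seenList[person])
}

// In-source tests: import.meta.vitest is only defined while Vitest runs,
// so this block is skipped entirely in production.
if (import.meta.vitest) {
  var { it, expect, beforeEach } = import.meta.vitest

  // Reset the module's private state before every test
  beforeEach(function () {
    seenList = {}
  })

  it('marks a person as seen', function () {
    see('Alice')
    expect(seen('Alice')).toBe(true)
  })

  it('tracks multiple people independently', function () {
    see('Alice')
    see('Bob')
    expect(seen('Bob')).toBe(true)
    expect(seen('Carol')).toBe(false)
  })
}
```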
## Side Effects
Your main source file, `src/main.js`, has another exported function that needs testing: `writeLog`. This function creates a new log file, writes the provided string (its only parameter), and doesn't return anything.

The first approach you'll take in testing this is to call `writeLog`, then read the log file it created, and verify that it wrote the data you passed it.

You can read a file using `readFileSync`, which you can import from the built-in `fs` module, passing the file's path as the first parameter. If you provide an encoding (e.g., `utf8`) as the second parameter, it will return the file's contents as a string. If you don't, it will return a buffer object, which you can then convert to a string with the built-in function `toString`. That is, the following are equivalent for the purposes of this lab:

```js
readFileSync(latestFile, 'utf8')
readFileSync(latestFile).toString()
```
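You can verify that equivalence with a quick, self-contained sketch. It uses a throwaway temp file (a hypothetical name, not one of the lab's log files) so it cleans up after itself:

```javascript
// Demonstrates that the two readFileSync forms yield the same string.
import { writeFileSync, readFileSync, unlinkSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

// Create a throwaway file to read back
const file = join(tmpdir(), 'vitest-lab-readfile-demo.log')
writeFileSync(file, 'hello log')

const asString = readFileSync(file, 'utf8') // string, thanks to the encoding
const viaBuffer = readFileSync(file).toString() // Buffer, converted to string

console.log(asString === viaBuffer) // true

unlinkSync(file) // remove the side effect this sketch created
```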
One complication is that `writeLog` does not expose the filename of the new log file it creates, so you'll need a way to find it. Node.js doesn't include a built-in, straightforward way to find the most recently created file in a directory, so this lab includes a handy function for this purpose, `getLatestModifiedFile`, that you can import from `util/helper.js` (relative to the `workspace` root). You can call it without any parameters to get the path to the correct file.

The approach you just used has a drawback: each time you run the test, it creates (and afterward leaves behind) a new file, an external change known as a side effect. If you don't remember to add code to clean up this file, your test suite will pollute your directory.

But cleanup has a fragility pitfall: if something goes wrong during the test, the cleanup might not run at all. Worse yet, if `writeLog` fails to create a file, an imprecise cleanup implementation could accidentally delete a legitimate file instead!

Moreover, other types of side effects can be difficult or impossible to reverse. For example, a test that makes a request to an external web server causes side effects you may have no control over, like logging the fact that you made the request. If you have lots of tests hitting the same URL, your requests might even start getting rate-limited or banned altogether. Aside from this being not exactly neighborly behavior toward the external host, the resulting limit or ban would also mean that your tests would be (intermittently or permanently) unable to pass.
## Handling Side Effects Better
Thankfully, there's a much better way to test functions that have side effects: mocking. Mocking lets you intercept calls to third-party functions, either to simply observe them better or to redirect them to run code with an alternative behavior.
In the case of `writeLog`, you want both: you don't want it to actually write a file, but you do want to know what it was trying to write.

For this, you need `vi.mock` and `vi.fn`, both of which you get by importing Vitest's `vi` alongside `expect` and `it`.

One way to mock a module is by calling `vi.mock` with the module's name and a function. The function has to return an object, which Vitest will interject instead of the module wherever it's imported. As such, each key in this object must be one of the module's function names. The corresponding value depends on what you want the mock to do, but one option is to use `vi.fn()`, which creates a spy: a function that tracks calls made to it.
, which creates a spy — a function that tracks calls made to it.In this case, it's
writeFileSync
from thefs
module that is causing side effects and whose calls you want to spy on. To mockfs
and spy onwriteFileSync
, Vitest requires that you also import the original into your test withimport { writeFileSync } from 'fs'
With that setup, you can then use spy-specific assertions. For example, to check that `writeFileSync` is called exactly once, you can write:

```js
expect(writeFileSync).toHaveBeenCalledTimes(1)
```

Another such function, `toHaveBeenCalledWith`, takes expected call parameters. If you don't care about a particular parameter, you can pass `expect.anything()` as a placeholder. That's useful in the case of `writeFileSync`, where you care more about the data to be written (the second parameter) than the path of the file to be written to (the first parameter).

An additional benefit to using mocks and spies as you did in this task is that they can be significantly faster than the original functions.
## Snapshots
When testing functions that output any non-trivial amount of data, maintaining the test can become tedious whenever the canonical, expected value changes. That's unlikely in the case of `sum`, but consider something prone to change over time, like `renderHomepage`. Updating the expected value may just be a matter of copying and pasting, but not always: there may be characters that need to be escaped properly for you to store the data, for example.

Thankfully, Vitest comes with a time-saving feature called snapshots, in which Vitest manages the storage of expected values for you. To use a flow that's already familiar to you, you'll use the `toMatchInlineSnapshot` assertion in the next task. (Snapshots in external files are beyond the scope of this lab.)
All that's needed is to chain it (with no parameters) to an `expect` call:

```js
expect(someOutput).toMatchInlineSnapshot()
```

When you run a test containing such an assertion for the first time, you can watch as Vitest modifies your assertion for you to include the output. There's no need to copy and paste, even once:

```js
expect(someOutput).toMatchInlineSnapshot(`"output here"`)
```
Furthermore, if you've opted to leave Vitest running in watch mode, you'll see that if you corrupt the expected value (or, in a more common real-life scenario, if the code under test changes its output), Vitest offers the option to `press u to update snapshot`. A single keypress, and your test maintenance is done. Unless, of course, the mismatch signals an actual issue with the output; in that case, Vitest is still helpful, showing you exactly where the mismatch is, just as it always does.

If you prefer not to use watch mode, you can also execute `npx vitest run -u` to update snapshots once.
to just update snapshots once.For this task, you need to write a test against a function called
snakeCase
that you'll import fromsrc/main.js
. It will convert any sentence you give it into snake case, a mixture of lowercase letters and underscores. For example,Hello there, friend!
would becomehello_there_friend
. Snapshots aren't just for string output — you can pass booleans, numbers, even entire objects toexpect
, and Vitest will encode the snapshot in a way that works for its comparisons.This means that it encodes basic type information. For example, here's what you would get with
sum
:expect(sum(22, 33)).toMatchInlineSnapshot(`55`)
It's true that the backticks around the
55
make it a string; however, ifsum(22, 33)
were to return a string instead of a number, the test would break. Vitest adds a layer of quotes when encoding a string, as you may have noticed during this task. -
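For reference, here's one plausible way a `snakeCase` function could be implemented. This is an assumption for illustration only; the lab ships its own version in `src/main.js`, which may handle more cases:

```javascript
// Hypothetical snakeCase implementation: lowercase, strip punctuation,
// and join words with underscores.
function snakeCase(sentence) {
  return sentence
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, '') // drop punctuation
    .trim()
    .split(/\s+/) // split on runs of whitespace
    .join('_')
}

console.log(snakeCase('Hello there, friend!')) // 'hello_there_friend'
```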
## Testing Asynchronous Code
Sometimes the functions you need to test contain asynchronous code, even if they're not marked `async`. Testing such functions can be more complicated than it might first appear.

For example, `src/main.js` exports a function called `longRunningComputation` that produces some console output. It takes no parameters and returns a boolean. Your first approach will be to test that that boolean is `true`. This type of test passes, but it doesn't indicate what the result of the computation was. Worse yet, in the case of `longRunningComputation`, it turns out that the boolean it returns only indicates whether the computation successfully began, not whether it successfully completed. The computation will complete later (in this case, one second after it returns).

Recall that this function outputs to the console. Here's what it outputs:

```
Started
Finished with result 1234567890123456789
```
Since `console` isn't imported and therefore can't be used with `vi.mock`, you can spy on `console.log` using `vi.spyOn`:

```js
it('computes', function () {
  var consoleLogSpy = vi.spyOn(console, 'log'),
    success = longRunningComputation()

  expect(consoleLogSpy).toHaveBeenCalledWith('Started')
  expect(consoleLogSpy).toHaveBeenCalledWith('Finished with result 1234567890123456789')
  expect(success).toBe(true)
})
```
However, while the first `expect(consoleLogSpy)` assertion will succeed, the second will not. The spy will only have been called once when the assertions run, because that's immediately after `longRunningComputation` returns.

This indicates that the code you're testing could be more testable. For example, it could:

- Avoid returning until the computation is complete (making the above spying work well).
- Additionally, return the actual computation result (making spying altogether unnecessary; your previous task solution would already be enough).
- Alternatively, return immediately, but allow access to the computed result via an internal promise as (or as part of) its return value. If this were the case, you could make your test use an `async` testing function and `await` said promise via `expect` and `resolves`:
```js
it('computes', async function () {
  await expect(longRunningComputation()).resolves.toBe(1234567890123456789)
})
```
However, you may not always be free to modify the code being tested, such as when setting up a test suite against an existing codebase before refactoring it. In this case, the simplest workaround is to build in an upper limit (say, 2,000 ms) on how long the function should take to execute:

```js
it('computes', async function () {
  var consoleLogSpy = vi.spyOn(console, 'log'),
    success = longRunningComputation()

  expect(consoleLogSpy).toHaveBeenCalledWith('Started')

  await new Promise(function (resolve) {
    setTimeout(resolve, 2000)
  })

  expect(consoleLogSpy).toHaveBeenCalledWith('Finished with result 1234567890123456789')
  expect(success).toBe(true)
})
```
With this approach, if `longRunningComputation` gets refactored in a way that runs significantly slower than normal, the test will fail. But this approach is brittle (it's hardware- and load-dependent) and slower than strictly necessary (the test always runs as long as the upper limit).

But in this case, where you expect just two `console.log` calls, you can spy on the first (immediate, synchronous) call, then swap out your spy for a different mock to await the second call. To do that, you would need to replace the `setTimeout` call with:

```js
consoleLogSpy.mockImplementation(function (...args) {
  resolve(args)
})
```
The `mockImplementation` member of any Vitest spy allows you to override the behavior of the function being called (in this case, `console.log`). In the above code, the override passes all arguments (i.e., those that `console.log` is called with) on to `resolve`, resolving the promise and letting the code move on to the second `expect(consoleLogSpy)` assertion. This way, the test waits only as long as needed for the computation to finish.

In this context, it's a best practice to end with `consoleLogSpy.mockRestore()` to let `console.log` behave normally once again for any code that may come afterwards.

### Bonus: Best of Both Worlds

The timed approach does have one upside: the test can't hang indefinitely. The risk is that, for example, the code under test gets refactored improperly and ends up with an infinite loop, or it otherwise takes unreasonably long under heavy load or on poor hardware.
To have this advantage without slowing the test down, you can race the timed approach against the approach you ended up with in the final task, using the built-in `Promise.race`:

```js
await Promise.race([
  new Promise(function (resolve) {
    consoleLogSpy.mockImplementation(function (...args) {
      resolve(args)
    })
  }),
  new Promise((_, reject) =>
    setTimeout(function () {
      reject(new Error('Timed out waiting for second log'))
    }, 5000)
  ),
])
```
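The same resolve-from-a-callback-and-race-a-timeout pattern can be sketched in plain JavaScript, outside Vitest. Here, `fakeComputation` is a stand-in for `longRunningComputation`, and a plain callback stands in for the spy:

```javascript
// Plain-JS sketch: resolve a promise from inside a stubbed callback,
// and race it against a timeout so the wait can't hang forever.
function fakeComputation(log) {
  log('Started') // first call: synchronous
  setTimeout(function () {
    log('Finished') // second call: arrives later
  }, 50)
}

let logCalls = 0
const secondLog = new Promise(function (resolve) {
  fakeComputation(function (message) {
    logCalls += 1
    if (logCalls === 2) resolve(message) // fires on the second log call
  })
})

const timeout = new Promise(function (_, reject) {
  const timer = setTimeout(function () {
    reject(new Error('Timed out waiting for second log'))
  }, 5000)
  timer.unref() // don't let the losing timer keep the Node process alive
})

// Resolves after ~50 ms in the happy path; rejects after 5000 ms otherwise.
const winner = await Promise.race([secondLog, timeout])
console.log(winner) // 'Finished'
```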
### Bonus: Coverage
This lab uses `@vitest/coverage-v8` internally. Since you have it installed anyway, this is a good opportunity to get a taste of its capabilities. If you're running Vitest in the Terminal tab, quit first, and then run:

```sh
npx vitest --coverage --coverage.reporter text
```

With that, you can see exactly how much code is covered in some way by tests present in the codebase (including the solutions), and which lines aren't.
Congratulations on completing this lab!