
Guided: Building a Reactive REST API with R2DBC in Spring Boot 4

Build a modern, fully reactive REST API with Spring Boot 4 and R2DBC. In this hands-on lab, you’ll learn how to create non-blocking CRUD endpoints with WebFlux, stream data with backpressure, persist reactively to a relational database, and standardize error handling with Problem Details. You’ll also capture lightweight telemetry using Spring Observability to trace database calls and HTTP spans. By the end, you’ll have a versioned, production-ready API that showcases how reactive programming, structured error responses, and built-in observability come together in Spring Boot’s next-generation runtime.

Lab Info
Level: Beginner
Last updated: Jan 23, 2026
Duration: 1h 0m

Table of Contents
  1. Challenge

    Introduction

    Welcome to the Guided: Building a Reactive REST API with Spring WebFlux and R2DBC Lab

    In this hands-on lab, you'll build a non-blocking, reactive REST API using Spring Boot, WebFlux, and R2DBC. You'll learn how to design APIs that handle asynchronous data flows, stream results to clients, persist data reactively, and surface consistent error responses.

    Throughout the lab, you'll work with reactive programming concepts using Project Reactor, returning Mono and Flux types from your controllers and repositories. You'll also add lightweight observability using Micrometer to gain insight into HTTP requests and database interactions without introducing external tracing infrastructure.

    By the end of this lab, you'll have hands-on experience building a modern reactive service that demonstrates how Spring’s reactive stack supports scalability, responsiveness, and operational visibility.

    You'll progress through a series of guided steps that incrementally introduce reactive APIs, persistence, error handling, and observability.


    Step Overview

    Step 2: Define the Product Entity

    In this step, you'll define a Product domain entity that represents the data stored and exposed by your API. You'll map the entity to a relational table using Spring Data annotations and model its fields in a way that works seamlessly with reactive persistence.

    This step establishes the core data structure used throughout the rest of the lab.

    Step 3: Create the R2DBC Repository

    Here, you'll create a reactive repository using Spring Data R2DBC. You'll extend a reactive repository interface to enable non-blocking database access and learn how Spring generates reactive query implementations automatically.

    This step demonstrates how to interact with a relational database without blocking threads.

    Step 4: Implement Basic CRUD Endpoints

    In this step, you'll expose basic Create, Read, Update, and Delete (CRUD) operations using Spring WebFlux. You'll implement REST endpoints that return Mono and Flux types and connect them to your reactive repository.

    This step shows how to build fully non-blocking REST endpoints backed by a reactive data store.

    Step 5: Add a Streaming Endpoint with Backpressure

    You'll add an endpoint that streams product data to clients using a Flux. By returning a reactive stream instead of a fixed response, the API can emit items incrementally and respect client backpressure.

    This step demonstrates how WebFlux supports real-time data delivery and efficient handling of large result sets.

    Step 6: Add Standardized Error Responses

    In this step, you'll implement centralized error handling to ensure all API errors follow a consistent structure. You’ll create a global exception handler that translates exceptions into standardized HTTP error responses.

    This improves API usability and makes client-side error handling more predictable.

    Step 7: Enable Basic Observability

    Here, you'll enable lightweight observability using Micrometer Observation. You'll add annotations to capture telemetry for HTTP requests and database operations, allowing you to observe application behavior without adding full distributed tracing.

    This step highlights how Spring Boot provides built-in operational insight for reactive services.

    Step 8: Add API Versioning

    In the final step, you'll introduce API versioning to make your service forward-compatible. You'll version your endpoints in a way that allows future changes without breaking existing clients.

    This step demonstrates a best practice for evolving REST APIs over time.


    What You'll Learn

    • How to build non-blocking REST APIs using Spring WebFlux
    • How to use Mono and Flux to model asynchronous data flows
    • How to persist and retrieve data reactively with R2DBC
    • How to stream data efficiently to clients
    • How to implement consistent, centralized error handling
    • How to add lightweight telemetry using Micrometer Observation

    By the end of this lab, you'll have a working reactive service that demonstrates modern Spring patterns for scalability, responsiveness, and observability.


    Prerequisites

    You should have a basic understanding of Java and Spring fundamentals, including annotations, dependency injection, and REST concepts. Prior experience with reactive programming is helpful but not required.

    Throughout the lab, you'll run the application and test endpoints using the Terminal tab. All commands should be executed from the project's root directory.

    Tip: If you get stuck at any point, you can refer to the solution directory. It contains example implementations for each step of the lab.

  2. Challenge

    Define the Product Entity


    In this step, you'll define the core domain model for the reactive API: the Product entity. This class represents a single product in the system and serves as the bridge between your Java code and the relational database layer.

    When using Spring Data R2DBC, entities are plain Java objects (POJOs) that are annotated to describe how they map to database tables and columns. These annotations allow Spring to automatically handle reading and writing rows without requiring you to write SQL for basic operations.

    Over the next few tasks, you'll incrementally build the Product entity by:

    • Mapping it to a database table
    • Identifying its primary key
    • Defining the fields stored and retrieved reactively

    This approach keeps the code easy to reason about while still enabling non-blocking, reactive data access.

    When you're ready, move on to the tasks below to start annotating and building the Product entity step by step.

    Mapping Java Classes to Database Tables with `@Table`

    Spring Data R2DBC uses the @Table annotation to associate a Java class with a relational table. Without this annotation, Spring has no way to know where the entity should be stored.

    For example:

    @Table("products")
    public class Product {
    }
    

    This tells Spring Data:

    • Rows in the products table should be mapped to Product objects
    • Product objects should be persisted back into the products table

    Note: The table name is case-sensitive and must match the table defined in your database schema.

    Identifying the Primary Key with `@Id`

    Every persistent entity needs a unique identifier. In Spring Data R2DBC, the @Id annotation marks the field that represents the primary key column.

    @Id
    private Long id;
    

    This annotation allows Spring to:

    • Detect whether an entity is new or existing
    • Automatically populate the ID when a record is inserted
    • Use the ID for lookups, updates, and deletes

    Unlike traditional JPA, R2DBC does not use entity proxies or lazy loading. Entities are simple data holders designed for fast, non-blocking access.
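Putting these annotations together, a minimal `Product` entity might look like the following sketch. Field names follow the `products` schema used in this lab; the getters and setters are one conventional way to let Spring Data read and write the fields.

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.relational.core.mapping.Table;

// Maps rows of the "products" table to Product objects.
@Table("products")
public class Product {

    @Id
    private Long id;      // primary key, populated by the database on insert

    private String name;  // maps to the "name" column

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

Note how the class is a plain data holder: no proxies, no lazy-loading machinery, just annotated fields that map one-to-one onto table columns.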

    Why This Matters in a Reactive System

    In a reactive application:

    • Entities must be lightweight and immutable-friendly
    • Persistence operations must not block threads
    • Mapping must be explicit and predictable

    By defining a clean, well-annotated entity, you're laying the foundation for:

    • Reactive repositories
    • Non-blocking CRUD operations
    • Streaming data pipelines using Flux and Mono

    This step establishes the domain layer that everything else in the application builds upon.

    How the Database Schema Is Created with schema.sql

    Spring Boot is configured to automatically initialize the database schema at application startup using a SQL script.

    In this project, that script lives at:

    products/src/main/resources/schema.sql
    

    When the application starts, Spring Boot detects this file and executes it against the configured R2DBC database. This behavior is controlled by properties defined in application.properties, allowing the schema to be created without manual intervention.

    The current schema looks like this:

    CREATE TABLE IF NOT EXISTS "products" (
      id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
      name VARCHAR(255),
      price DECIMAL(19,2) NULL
    );
    

    This schema defines the structure of the products table that your Product entity will map to.

    How This Maps to Product.java

    At the end of this step, your Product entity includes only the fields needed for the current tasks:

    @Id
    private Long id;
    
    private String name;
    

    These fields map directly to the id and name columns in the database table.

    You may notice that the table also includes a price column that is not yet modeled in the Product class. This works because the column is defined as nullable. Spring Data R2DBC will simply ignore columns that are not represented in the entity.

    In a later step, you’ll add the price field to the Product entity and begin working with it in the API. The schema is already prepared for that change.

    Why This Matters

    Initializing the schema at startup ensures that:

    • The database structure is always consistent
    • Learners don’t need to run manual SQL commands
    • The application is ready to accept reactive data operations immediately

    This approach keeps the focus on building reactive APIs and domain models, not on database setup.

  3. Challenge

    Create the R2DBC Repository


    In this step, you'll connect your Product entity to the database by creating a reactive repository with Spring Data R2DBC. Instead of writing SQL for common operations like create, read, update, and delete, Spring Data can generate the implementation for you when you declare the repository interface correctly.

    Reactive repositories differ from traditional (blocking) repositories. A ReactiveCrudRepository doesn't return Product objects or List<Product>. Instead, it returns reactive types:

    • Mono<Product> for single results (or empty when nothing is found)
    • Flux<Product> for streams of results

    That means database work happens asynchronously, and your API can remain non-blocking end-to-end. In later steps, you'll plug this repository into WebFlux controllers so that HTTP requests and database operations both run reactively.

    In this step, you'll update your empty repository interface so it extends:

    ReactiveCrudRepository<Product, Long>

    This tells Spring Data:

    • The repository manages Product entities
    • The primary key type is Long

    What is `ReactiveCrudRepository`?

    ReactiveCrudRepository<T, ID> is Spring Data's reactive version of a CRUD repository. When you extend it, you instantly get reactive methods like:

    • Mono<T> findById(ID id)

    • Flux<T> findAll()

    • Mono<T> save(T entity)

    • Mono<Void> deleteById(ID id)
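Declaring the repository is a one-liner; Spring Data generates the reactive implementation at runtime. A sketch of the interface (the name `ProductRepository` matches how the lab refers to it later):

```java
import org.springframework.data.repository.reactive.ReactiveCrudRepository;

// No implementation class or SQL needed: extending ReactiveCrudRepository
// gives you findById, findAll, save, deleteById, and more — all returning
// Mono or Flux so database access stays non-blocking.
public interface ProductRepository extends ReactiveCrudRepository<Product, Long> {
}
```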

  4. Challenge

    Implement Basic CRUD Endpoints

    Implement Basic Reactive CRUD Endpoints

    In this step, you'll expose your first reactive REST API endpoints using Spring WebFlux. Controllers in a reactive Spring application handle HTTP requests without blocking threads, allowing the application to scale efficiently under load.

    Instead of returning concrete objects or collections, reactive controllers return Mono and Flux types:

    • Mono<T> represents zero or one result
    • Flux<T> represents zero to many results

    Spring WebFlux automatically adapts these reactive types into HTTP responses.

    You'll also inject your ProductRepository into the controller. This allows the controller to delegate persistence logic to the repository while keeping HTTP concerns and data access concerns separate, following a core Spring design principle.

    Over the next tasks, you'll:

    • Mark the class as a REST controller
    • Define a base request path
    • Inject the reactive repository
    • Implement non-blocking GET and POST endpoints

    What does @RestController do?

    @RestController tells Spring that this class is responsible for handling HTTP requests and returning data directly in the HTTP response body, usually as JSON.

    It's a convenience annotation that combines two annotations:

    • @Controller — marks the class as a web controller
    • @ResponseBody — tells Spring to serialize return values directly into the response

    Because of @RestController, you don't need to annotate each method individually to return JSON.

    What does @RequestMapping do?

    @RequestMapping defines a base URI path for all endpoints in a controller.

    For example:

    @RequestMapping("/api/v1/products")
    

    This means every endpoint defined in the controller will be prefixed with /api/v1/products.

    Using a base path keeps your API organized and makes it easier to version or evolve endpoints over time.

    What are @GetMapping and @PostMapping?

    @GetMapping and @PostMapping are shortcut annotations used to map HTTP requests to controller methods.

    • @GetMapping handles HTTP GET requests (used to retrieve data)
    • @PostMapping handles HTTP POST requests (used to create new data)

    They are specialized versions of @RequestMapping that make your code more readable and expressive.

    What are Mono and Flux?

    Mono and Flux are reactive types provided by Project Reactor.

    • Mono<T> represents zero or one value
    • Flux<T> represents zero to many values over time

    In a reactive REST API:

    • Use Mono when returning a single result (like a created product)
    • Use Flux when returning a stream of results (like a list of products)

    Spring WebFlux automatically adapts these types into HTTP responses without blocking threads.

    What does @RequestBody do?

    @RequestBody tells Spring to deserialize the HTTP request body into a Java object.

    For example, when a client sends JSON like:

    { "name": "New Product" }
    

    Spring converts it into a Product object and passes it to your controller method.

    This works seamlessly with reactive controllers and does not block while reading the request body.

    Why is the repository injected into the controller?

    Controllers are responsible for handling HTTP requests, not for managing database access.

    By injecting ProductRepository into the controller:

    • The controller delegates persistence logic to a dedicated component
    • Database access remains reusable and testable
    • The controller stays focused on HTTP concerns

    This separation of responsibilities follows core Spring design principles and keeps your application easier to maintain.
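Combining the annotations above with constructor injection, the controller can be sketched roughly as follows. Class and method names are illustrative and may differ from the lab's solution code:

```java
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping("/api/v1/products")
public class ProductController {

    private final ProductRepository repository;

    // Constructor injection: Spring supplies the repository bean.
    public ProductController(ProductRepository repository) {
        this.repository = repository;
    }

    // GET /api/v1/products — returns all products as a Flux.
    @GetMapping
    public Flux<Product> getAll() {
        return repository.findAll();
    }

    // POST /api/v1/products — deserializes the JSON body into a Product
    // and saves it reactively, returning the saved entity with its ID.
    @PostMapping
    public Mono<Product> create(@RequestBody Product product) {
        return repository.save(product);
    }
}
```

Neither method blocks: each returns a reactive type that WebFlux subscribes to when writing the HTTP response.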

    Run and Test the API

    At this point, your application is runnable. You now have a reactive REST controller backed by a reactive repository, and you can test the API to see non-blocking behavior in action.

    Configure Gradle for This Lab Environment

    Before starting the application, configure Gradle to use a project-local cache. This ensures all required dependencies are loaded from the lab workspace instead of being downloaded from the internet.

    From the root of the project (workspace/products), run the following in the Terminal window:

    export GRADLE_USER_HOME=$PWD/.gradle
    

    This tells Gradle to store and read dependencies from the .gradle directory inside this project, making the build reliable in restricted or offline environments.

    Note: This setting only applies to the current terminal session. If you want to run the server in another terminal window, you will need to run this command again.

    Start the Application

    Now that the Gradle environment is configured, start the Spring Boot application using the Gradle wrapper. From the workspace/products directory, run the following in the Terminal window:

    ./gradlew --no-daemon bootRun
    

    This command:

    • Builds the application using locally cached dependencies
    • Starts the embedded Netty web server
    • Listens for HTTP requests on port 8081 (the port configured in application.properties)

    The application will continue running until you stop it.

    Test the GET Endpoint

    In a new Terminal window, send a GET request to retrieve all products:

    curl http://localhost:8081/api/v1/products
    

    This endpoint returns a Flux<Product>, which Spring WebFlux automatically serializes into a JSON response. If no products exist yet, you’ll receive an empty JSON array.

    Test the POST Endpoint

    Next, create a new product by sending a POST request with a JSON request body:

    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{"name":"Sample Product"}' \
      http://localhost:8081/api/v1/products
    

    This endpoint returns a Mono<Product> once the product has been saved. The response contains the newly created product, including its generated ID. If you send another GET request, the new product appears in the result.


    What to Observe

    • Requests are handled asynchronously and non-blocking
    • The API responds immediately while persistence happens reactively
    • You are interacting with a fully reactive REST API end to end

    From this point forward, you’re encouraged to restart the application and test the API after each step to observe how new reactive features affect behavior.

  5. Challenge

    Add a Streaming Endpoint with Backpressure

    Add a Streaming Endpoint with Backpressure

    In this step, you'll add a streaming endpoint that returns products as a continuous reactive stream rather than a single "all at once" JSON response. This is useful for scenarios like dashboards, live feeds, and high-throughput lists where the client benefits from receiving results incrementally.

    In WebFlux, returning a Flux<T> allows Spring to emit items to the client as they become available. When paired with a streaming media type like NDJSON (application/x-ndjson), the response is sent as a sequence of newline-delimited JSON objects instead of a single JSON array.

    What Is Backpressure?

    Backpressure is the idea that a consumer (the client) should be able to signal how quickly it can handle incoming data. Reactive streams support backpressure so the producer doesn't overwhelm the consumer. In this lab, you'll simulate backpressure-friendly behavior using delayElements(...) so streaming is visible and paced.
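A streaming handler built this way can be sketched as below. It assumes the controller and repository from the previous step; the one-second delay is illustrative and exists only to make the pacing visible:

```java
import java.time.Duration;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import reactor.core.publisher.Flux;

// GET /api/v1/products/stream — emits each product as a separate
// newline-delimited JSON object (NDJSON) rather than one JSON array.
@GetMapping(value = "/stream", produces = MediaType.APPLICATION_NDJSON_VALUE)
public Flux<Product> stream() {
    return repository.findAll()
                     // Artificial pacing so incremental delivery is observable;
                     // real backpressure comes from the reactive stream itself.
                     .delayElements(Duration.ofSeconds(1));
}
```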

    You'll implement:

    • A new streaming endpoint: GET /api/v1/products/stream
    • NDJSON output via produces = MediaType.APPLICATION_NDJSON_VALUE
    • A small delay between items to make streaming observable in real time

    Run and Observe Streaming Behavior

    This step introduces a streamed HTTP response using newline-delimited JSON (NDJSON). The server sends each product one at a time, rather than returning a single aggregated response.

    Because this stream is backed by a database query, it completes once all products have been emitted.

    Restart the Application

    If the application is still running, stop it and restart:

    ./gradlew --no-daemon bootRun
    

    Add Multiple Products

    To better observe streaming behavior, first create multiple products:

    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{"name":"Product A"}' \
      http://localhost:8081/api/v1/products

    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{"name":"Product B"}' \
      http://localhost:8081/api/v1/products
    

    Call the Streaming Endpoint

    curl http://localhost:8081/api/v1/products/stream
    

    What to Expect

    • Each product is returned as a separate JSON object.
    • Responses are newline-delimited (NDJSON).
    • Products are emitted one at a time, with a short delay between each.

      To see this more clearly, you may run:

      curl --no-buffer http://localhost:8081/api/v1/products/stream

      This disables curl’s output buffering so you can see each line as it arrives.

    • The request completes automatically after all products are sent.

    This endpoint demonstrates streamed delivery, not a continuously open connection. Once all database rows have been emitted, the stream completes and the client exits.

  6. Challenge

    Add Standardized Error Responses

    Add Standardized Error Responses (Problem Details)

    In this step, you'll implement centralized error handling so that API failures return a consistent JSON structure instead of framework-specific error pages or inconsistent messages.

    When an exception occurs in a web application, it's important to return an error response that is:

    • Predictable for clients
    • Easy to parse
    • Consistent across all endpoints

    To achieve this, you'll build a global exception handler using Spring’s @ControllerAdvice. This allows you to catch exceptions across your controllers and return a standardized response. In this lab, a JSON body (with keys like type, title, status, detail, and instance) will represent a problem details response.

    What is a global exception handler?

    A global exception handler is a centralized place to translate exceptions into HTTP responses.

    Instead of handling errors inside every controller method, you define one shared handler that:

    • Catches exceptions thrown by controllers
    • Chooses an appropriate HTTP status code
    • Returns a standardized response body

    This keeps your controller methods focused on happy-path logic while ensuring error responses are consistent.

    What does @ControllerAdvice do?

    @ControllerAdvice marks a class as a "cross-cutting" web component that applies to multiple controllers.

    Spring uses it to discover:

    • Exception handlers (via @ExceptionHandler)
    • Model attribute behavior
    • Binder configuration

    In this lab, we use it so our GlobalExceptionHandler applies to all controllers without any extra configuration.

    What does @ExceptionHandler(Exception.class) do?

    @ExceptionHandler(Exception.class) declares a method that handles exceptions of a specific type.

    When a controller throws an exception that matches the declared type, Spring calls the handler method instead of returning a default error response.

    In this lab, we start with Exception.class to catch unhandled exceptions broadly. In production APIs, you would typically add more specific handlers (e.g., for not found or validation errors).

    Why use ResponseEntity?

    ResponseEntity<T> lets you control both:

    • The HTTP status code (e.g., 500)
    • The response body content (your Problem Details JSON)

    This is useful for standardized error responses because you can ensure every error returns a predictable status and structure.

    How this relates to Problem Details

    Problem Details is an error response format commonly represented as a JSON object containing fields like:

    • type — a URI reference for the error category
    • title — a short summary of the error
    • status — the HTTP status code
    • detail — a human-readable explanation
    • instance — a URI or identifier for this specific occurrence

    In this lab, we return a Problem Details-style JSON object so clients receive a consistent, structured error response.
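Putting the pieces together, a global handler can be sketched like this. The handler name and the `instance` value are illustrative; in practice you would derive `instance` from the incoming request:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

// Applies to every controller in the application.
@ControllerAdvice
public class GlobalExceptionHandler {

    // Broad catch-all: translates any unhandled exception into a
    // Problem Details-style JSON body with a 500 status.
    @ExceptionHandler(Exception.class)
    public ResponseEntity<Map<String, Object>> handle(Exception ex) {
        Map<String, Object> problem = new LinkedHashMap<>();
        problem.put("type", "about:blank");
        problem.put("title", "Internal Server Error");
        problem.put("status", HttpStatus.INTERNAL_SERVER_ERROR.value());
        problem.put("detail", ex.getMessage());
        problem.put("instance", "/api/v1/products"); // illustrative placeholder
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(problem);
    }
}
```

In production you would add more specific `@ExceptionHandler` methods ahead of this catch-all so that, for example, not-found errors map to 404 rather than 500.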

    Run and Verify Standardized Error Responses

    You've now added a global exception handler that returns consistent, Problem Details-style error responses. Restart the application and trigger an error to confirm the behavior.

    Restart the Application

    ./gradlew --no-daemon bootRun
    

    Trigger an Error Response

    One simple way to trigger an error is to send malformed JSON in a POST request:

    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{"name":' \
      http://localhost:8081/api/v1/products
    

    What to Expect

    • The response is a JSON object with consistent fields such as:
      • type
      • title
      • status
      • detail
      • instance
    • The HTTP status code is 500 Internal Server Error
    • You no longer see default framework error pages or inconsistent responses

    Note: The detail field may reference an underlying client-side error (such as malformed JSON or a parsing failure), even though the HTTP response status is 500. This is expected.

    The global exception handler intentionally normalizes all errors into a consistent API response, while still preserving useful diagnostic information in the response body.

    This confirms that your API now returns standardized error responses across all endpoints.

  7. Challenge

    Enable Basic Observability


    In this step, you'll add lightweight observability to your reactive API using Micrometer Observation. Observability helps you understand what your application is doing at runtime, especially when requests flow through multiple layers such as HTTP handlers and database calls.

    You’ll instrument:

    • HTTP endpoints using @Observed
    • A database call using the Observation API directly

    This gives you a consistent way to name and track work happening in your application, and it sets the foundation for richer tracing systems later (like OpenTelemetry), without requiring them for this lab.


    What does @Observed do?

    @Observed is a Micrometer annotation that creates an Observation around a method invocation.

    When applied to a controller method, it helps you:

    • Name the work being done (for example: products.http.getAll)
    • Track timing and failures
    • Connect request behavior to runtime telemetry

    In a full observability stack, these observations can become traces/spans or metrics. In this lab, we focus on applying the annotation so telemetry exists and is consistent.

    What is Observation?

    Observation is Micrometer's programmatic API for creating and managing observations manually. You use it when an annotation isn't enough or when you want to be explicit about what you're measuring.

    This lab uses Observation.createNotStarted(...).observe(...) to wrap a repository call so you can clearly demonstrate a "DB observation" without wiring additional infrastructure.

    Common methods available on Observation

    Micrometer Observation supports several useful methods. The exact usage depends on how you instrument code:

    • Observation.createNotStarted(name, supplier) Creates an observation object (not started yet).
    • observation.start() / observation.stop() Manually control the observation lifecycle.
    • observation.observe(Supplier<T>) / observation.observe(Runnable) Conveniently runs code inside an observation and automatically starts/stops it.
    • observation.error(Throwable) Records an error on the observation.

    In this lab, you'll use observe(() -> repo.findAll()) to wrap a reactive repository call in a named observation.
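The two instrumentation styles can be sketched as follows. This assumes an injected `ObservationRegistry` field on the controller; the observation names are illustrative, and `@Observed` requires Micrometer's aspect support to be configured (as it is in this lab's project):

```java
import io.micrometer.observation.Observation;
import io.micrometer.observation.annotation.Observed;
import org.springframework.web.bind.annotation.GetMapping;
import reactor.core.publisher.Flux;

// Annotation-based: Micrometer wraps the method call in a named observation.
@Observed(name = "products.http.getAll")
@GetMapping
public Flux<Product> getAll() {
    return repository.findAll();
}

// Programmatic: explicitly create and run a named observation around a
// repository call. A simplified approach for demonstration — it observes
// the call itself, not the full reactive subscription.
@GetMapping("/db-observe")
public Flux<Product> dbObserve() {
    return Observation.createNotStarted("products.db.findAll", observationRegistry)
                      .observe(() -> repository.findAll());
}
```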

    Run and Verify Basic Observability

    You’ve now added basic observability to your API using Micrometer Observation. Restart the application and call the instrumented endpoints to see how observations are created during request handling and database access.

    Restart the Application

    If the application is currently running, stop it and restart:

    ./gradlew --no-daemon bootRun
    

    Call the Observed HTTP Endpoints

    Trigger the instrumented GET endpoint:

    curl http://localhost:8081/api/v1/products
    

    Create a Product Using the Observed POST Endpoint:

    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{"name":"Observed Product"}' \
      http://localhost:8081/api/v1/products
    

    These endpoints are annotated with @Observed, which creates named observations for each request.

    Call the Database Observed Endpoint

    Now call the endpoint that explicitly wraps a repository call in an Observation:

    curl http://localhost:8081/api/v1/products/db-observe
    

    What to Expect

    • Requests complete as usual, but are now wrapped in named observations
    • Each request and database call is tracked consistently
    • You may see observation-related log output depending on your configuration
    • These observations provide the foundation for metrics, tracing, and spans in a full observability setup

    At this stage, you’re not exporting telemetry to an external system. Instead, your goal is to add instrumentation correctly so observability data exists and is well-named.

    From here on, you can continue restarting the application and testing endpoints after each change to reinforce the habit of observing runtime behavior, not just writing code.

  8. Challenge

    Add API Versioning


    In this step, you'll introduce an API versioning strategy by creating a new versioned controller under a new base path (/api/v2/...) while leaving the existing v1 API intact (/api/v1/...).

    Versioning helps you evolve your API without breaking existing clients. In real systems, mobile apps, integrations, and downstream services may depend on your current response shape. If you change your API in-place, those consumers can break immediately.

    Instead, you’ll:

    • Add a new field (price) to the Product model for v2 use cases
    • Create a new controller: ProductControllerV2
    • Copy v1 behavior into v2 and add new v2-only endpoints (GET by id, search)
    • Keep v1 endpoints stable so older clients can continue to function

    What do we mean by "API versioning" here?

    API versioning is a technique for changing an API over time without breaking existing clients.

    In this lab, we version by URI path:

    • v1 endpoints live under /api/v1/...
    • v2 endpoints live under /api/v2/...

    This lets you add new behavior (or change response structures) in v2 while keeping v1 stable.

    Why do we copy methods from the v1 controller into v2?

    Copying the v1 methods into v2 gives you a clean baseline:

    • The v2 API starts with the same behavior as the v1 API
    • Then you layer on new capabilities (new fields, new endpoints)
    • Now you can change v2 independently later without affecting v1 clients

    This is a common real-world workflow when evolving APIs.
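The shape of the v2 controller can be sketched as below. Names are illustrative, the copied v1 methods are abbreviated, and the v2-only search endpoint is omitted for brevity:

```java
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

@RestController
@RequestMapping("/api/v2/products") // new base path; the v1 controller is untouched
public class ProductControllerV2 {

    private final ProductRepository repository;

    public ProductControllerV2(ProductRepository repository) {
        this.repository = repository;
    }

    // Copied v1 behavior: list all products.
    @GetMapping
    public Flux<Product> getAll() {
        return repository.findAll();
    }

    // New in v2: fetch a single product by ID.
    @GetMapping("/{id}")
    public Mono<Product> getById(@PathVariable Long id) {
        return repository.findById(id);
    }
}
```

Because this class lives under its own `@RequestMapping` path, v2 can evolve independently while `/api/v1/products` keeps serving existing clients.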

    How does this protect users of the old API?

    Existing clients using /api/v1/products can keep working with the response and request formats they expect.

    New clients can adopt /api/v2/products and use enhanced capabilities (like new endpoints and new fields) when they're ready.

    That separation prevents "breaking changes" from impacting users who haven't migrated yet.

    Restart and Test Both API Versions

    Now that you've introduced /api/v2/products, you can test both versions side by side and see how versioning protects existing clients.

    Restart the Application

    Stop the server if it's still running, then restart it:

    ./gradlew --no-daemon bootRun
    

    Test v1 API (existing behavior)

    Use the following commands to test the v1 API:

    • Create a product using v1 (no price field):
      	curl -X POST \
      		-H "Content-Type: application/json" \
      		-d '{"name":"V1 Product"}' \
      		http://localhost:8081/api/v1/products
      
    • Get all products (v1):
      	curl http://localhost:8081/api/v1/products
      
    • Stream products (v1):
      	curl http://localhost:8081/api/v1/products/stream
      

    What to Expect:

    • v1 endpoints continue working as before
    • v1 clients send and receive the fields they originally depended on (such as id and name)
    • v1 remains stable so existing consumers don’t need to change

    Test v2 API (new version)

    Use the following commands to test the v2 API:

    • Create a product using v2, including the new price field:
      	curl -X POST \
      		-H "Content-Type: application/json" \
      		-d '{"name":"V2 Product","price":19.99}' \
      		http://localhost:8081/api/v2/products
      
    • Get all products (v2):
      	curl http://localhost:8081/api/v2/products
      
    • Stream products (v2):
      	curl http://localhost:8081/api/v2/products/stream
      
    • Get by ID (v2):
      	curl http://localhost:8081/api/v2/products/1
      
    • Search by name (v2):
      	curl "http://localhost:8081/api/v2/products/search?name=prod"
      

    What to Expect:

    • v2 API endpoints work under the new base path
    • v2 API includes new capabilities (/{id} and /search)
    • v2 API can create products with a price field
    • v1 remains stable and unchanged

    This demonstrates how API versioning lets your API evolve with new capabilities while protecting existing consumers from breaking changes.
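    Behind the /search endpoint you just exercised, Spring Data R2DBC can derive the SQL from a repository method name. A minimal sketch of what that repository might look like (the interface and method names are assumptions; check your lab's repository for the actual definitions):

    ```java
    import org.springframework.data.repository.reactive.ReactiveCrudRepository;
    import reactor.core.publisher.Flux;

    // Illustrative sketch: Spring Data derives the query from the method
    // name and returns a reactive Flux instead of a blocking List.
    public interface ProductRepository extends ReactiveCrudRepository<Product, Long> {

        // Backs GET /api/v2/products/search?name=...
        Flux<Product> findByNameContainingIgnoreCase(String name);
    }
    ```

    Adding the method to the repository is all v2 needs; v1, which never calls it, is unaffected.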

  • Challenge

    Conclusion

    Congratulations on completing the lab!

    You built a modern, fully reactive REST API from the ground up using Spring Boot 4 and R2DBC. Along the way, you learned how to design non-blocking services that scale efficiently, remain observable in production, and evolve safely over time.

    In this lab, you:

    • Defined a reactive domain model and persisted it using Spring Data R2DBC
    • Built non-blocking CRUD endpoints with WebFlux using Mono and Flux
    • Implemented streaming endpoints to demonstrate backpressure and incremental data delivery
    • Standardized error responses using a global exception handler and Problem Details-style payloads
    • Added lightweight observability with Micrometer to instrument HTTP requests and database operations
    • Introduced API versioning to evolve your API without breaking existing clients

    Together, these techniques form the foundation of production-ready reactive APIs. They support systems that are responsive under load, observable in real-world environments, and flexible enough to grow as requirements change.

    You now have a strong baseline for building reactive microservices with Spring Boot’s next-generation runtime. From here, you could extend this lab by adding request validation, introducing version-specific DTOs, exporting traces to an observability backend, or replacing in-memory filtering with optimized reactive queries.

    Great work!

  • About the author

    Jaecee is a Contract Author at Pluralsight, specializing in hands-on lab content. With a background in software development, she’s skilled in Ruby on Rails, React, and NLP. She enjoys crafting and spending time with family.
