
Guided: Implement OpenTelemetry for Java Observability
Unlock the full potential of your Java applications by mastering observability with OpenTelemetry. In this hands-on Code Lab, you'll learn how to implement OpenTelemetry in Java microservices to gain deep insights into your system's performance. From configuring the OpenTelemetry Agent to collecting and exporting telemetry data like traces and metrics, you'll explore the key techniques that help developers troubleshoot and optimize distributed systems. By the end of this lab, you'll be equipped with the practical skills to monitor real-world Java applications, pinpoint performance bottlenecks, and enhance the reliability of your microservices. Whether you're looking to improve your system's scalability or simply get more visibility into its inner workings, this lab offers the experience and expertise needed to take your Java observability to the next level.

Challenge: Introduction
In this hands-on Code Lab, you will instrument two simple Java microservices (Java 21, Spring Boot 3.4.4) with OpenTelemetry to achieve end-to-end observability. The microservices are a Greeting Service and a Name Service. The Greeting Service calls the Name Service to process input, simulating a distributed application. You will gradually add metrics and tracing, and set up OpenTelemetry tooling (Prometheus for metrics and Jaeger for traces) to monitor and visualize the system.
What You’ll Do:
- Microservice Setup: Run two Spring Boot microservices and explore the default metrics that Spring Boot/Micrometer provides out of the box.
- Custom Metrics: Add your own custom metrics using Micrometer annotations and the Micrometer API to track business-specific information.
- OpenTelemetry Agent: Attach the OpenTelemetry Java Agent to capture telemetry (traces and metrics) without code changes.
- Telemetry Export: Configure an OpenTelemetry Collector to export metrics data to Prometheus and trace data to Jaeger.
- Visualization: Use Prometheus’s UI to query metrics and Jaeger’s UI to view distributed traces spanning both services.
Service Overview and Interaction
This project includes two microservices:
- Greeting Service (`greeting-service` directory, runs on port 8081): Exposes the `/greet` endpoint, which returns a friendly message like `Hello, ALEX!`
- Name Service (`name-service` directory, runs on port 8082): Exposes the `/process` endpoint, which takes a name and transforms it (e.g., to uppercase)

When a client sends a request to `/greet` on the Greeting Service, it makes an internal HTTP call to the Name Service's `/process` endpoint to format the name. This simulates a common microservice pattern where services communicate with each other to complete a task. The interaction flow looks like this:

Client → Greeting Service (/greet) → Name Service (/process)
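To make the interaction concrete, here is a hypothetical sketch of how the Greeting Service might call the Name Service. The lab's actual controllers may differ; the class name, default parameter value, and use of Spring's `RestClient` are illustrative assumptions.

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestClient;

// Hypothetical sketch of the Greeting Service controller; the real lab code may differ.
@RestController
class GreetingController {

    // The Name Service runs on port 8082 in this lab.
    private final RestClient nameService = RestClient.create("http://localhost:8082");

    @GetMapping("/greet")
    String greet(@RequestParam(defaultValue = "world") String name) {
        // Internal HTTP call to the Name Service's /process endpoint.
        String processed = nameService.get()
                .uri("/process?name={name}", name)
                .retrieve()
                .body(String.class);
        return "Hello, " + processed + "!";
    }
}
```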
Using the `solution` Directory

If you get stuck or want to compare your progress, you can find reference implementations in the `solution` folder. Each step and task is labeled clearly using the following directory structure: `step[step number]-[task number]`
Challenge: Inspect Built-in Metrics Exposed by Micrometer
Spring Boot Actuator integrates with Micrometer to provide many metrics out of the box. In this step, you'll run the two microservices and examine the default metrics they expose (even before adding any custom instrumentation).
Both microservices include Spring Boot Actuator, which brings Micrometer onto the classpath. You need to ensure the metrics endpoint is exposed (see the properties sketch at the end of this step). Start both services; this will launch the Greeting Service on port 8081 and the Name Service on port 8082 (these default ports are set in each app's `application.properties` file). Once running, you should see startup logs indicating both are up. With the services running, access the Actuator metrics HTTP endpoints to see Micrometer's built-in metrics. Click the links below to fetch the metrics list from the services:

- Greeting Service Metrics Index: {{localhost:8081}}/actuator/metrics
- Name Service Metrics Index: {{localhost:8082}}/actuator/metrics
These endpoints return a JSON document listing the available metric names. You should see dozens of metrics, including:

- HTTP request metrics: `http.server.requests`, which tracks HTTP requests to your controllers
- JVM metrics: `jvm.memory.used`, `jvm.gc.pause`, and `process.cpu.usage`
- Log metrics: `logback.events`, if using Logback
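If the metrics index comes back empty or the endpoint returns 404, the metrics endpoint is probably not exposed. A minimal sketch of the Actuator exposure setting in each service's `application.properties`, assuming the standard Spring Boot property names (the lab's provided files may already contain an equivalent entry):

```properties
# Expose the health and metrics endpoints over HTTP via Actuator
management.endpoints.web.exposure.include=health,metrics
```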
Challenge: Add Custom Metrics Using Annotations
Micrometer provides powerful annotations like `@Timed` and `@Counted` that allow you to capture performance metrics with minimal effort. However, these annotations only work if their corresponding aspects are registered in the Spring context.

Before using the annotations, make sure the Spring AOP dependency has been added to the Name Service's `pom.xml`, in this case the `name-service/pom.xml` file:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
```

Now that Micrometer's annotation support is enabled, it's time to start using those annotations to gain insights into method performance and usage.
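As a reference, here is a sketch of how the aspects could be registered and the annotations applied in the Name Service. The class names, metric names, and method shown are illustrative assumptions, not the lab's exact code:

```java
import io.micrometer.core.annotation.Counted;
import io.micrometer.core.annotation.Timed;
import io.micrometer.core.aop.CountedAspect;
import io.micrometer.core.aop.TimedAspect;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

// Registers the Micrometer aspects so @Timed and @Counted are actually honored.
@Configuration
class MetricsAspectConfig {

    @Bean
    TimedAspect timedAspect(MeterRegistry registry) {
        return new TimedAspect(registry);
    }

    @Bean
    CountedAspect countedAspect(MeterRegistry registry) {
        return new CountedAspect(registry);
    }
}

// Hypothetical Name Service method annotated for metrics; names are illustrative.
@Service
class NameProcessor {

    @Timed(value = "name.process.time", description = "Time spent processing a name")
    @Counted(value = "name.process.count", description = "Number of calls to process a name")
    public String process(String name) {
        return name == null ? "" : name.toUpperCase();
    }
}
```

Once the `TimedAspect` and `CountedAspect` beans are present, any Spring-managed bean method carrying `@Timed` or `@Counted` publishes the corresponding timer and counter to the `MeterRegistry`.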
Challenge: Add Custom Metrics Programmatically with Micrometer API
Sometimes, you will want to measure how long a specific block of code takes without relying on annotations. Micrometer gives you a way to do this manually using `Metrics.timer(...).record(...)`. While annotations are quick and convenient, sometimes you need full control, like incrementing a metric under a specific condition or outside of a method. That's where programmatic metrics come in.
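A hypothetical sketch of both styles, assuming a component in the Greeting Service; the metric names, tags, and class are illustrative, not the lab's exact code:

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Metrics;
import org.springframework.stereotype.Service;

// Hypothetical component in the Greeting Service; metric names and tags are illustrative.
@Service
class GreetingMetrics {

    private final MeterRegistry registry;

    GreetingMetrics(MeterRegistry registry) {
        this.registry = registry;
    }

    // Static facade: time an arbitrary block of code and record its duration.
    public String timedGreeting(String name) {
        return Metrics.timer("greeting.build.time")
                .record(() -> "Hello, " + name + "!");
    }

    // Full control: increment a counter only when a specific condition occurs.
    public void recordFallback(String reason) {
        registry.counter("greeting.fallback.count", "reason", reason).increment();
    }
}
```

Injecting `MeterRegistry` gives you the registry Spring Boot configures; the static `Metrics` facade is convenient for code that has no access to Spring beans.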
Challenge: Set up the OpenTelemetry Agent
Before sending telemetry data to other downstream services, you first need to set up a local OpenTelemetry Collector that simply logs received telemetry to the console using the debug exporter. This is a great way to verify that telemetry data is flowing without needing any external backend.

Once you have set up the OpenTelemetry Collector configuration file, it's time to run the collector and start receiving telemetry data from your applications. Keep this Terminal tab open. The collector will listen for incoming telemetry data on ports 4317 (gRPC) and 4318 (HTTP). If you're using the debug exporter, you'll see telemetry logs directly in this terminal.

> Note: If your applications are already running, stop them now before proceeding. The OpenTelemetry agent must be attached at application startup using JVM flags.
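For reference, a minimal collector configuration with an OTLP receiver and the debug exporter might look like the sketch below. The file name and exact pipeline layout come from the lab environment, so treat this as an assumption rather than the provided file:

```yaml
# otel-collector-config.yaml (illustrative name)
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  # Print received telemetry to the collector's console
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
```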
Now that the collector is running locally, configure your applications to send telemetry to it.
- Package both applications using Maven:

Terminal 1:

```shell
cd greeting-service   # if not already in this directory
mvn package
```

Terminal 2:

```shell
cd name-service   # if not already in this directory
mvn package
```
- Start each application using the `java -javaagent` command in the `workspace` directory:

Terminal 1:

```shell
cd ..
java -javaagent:otel-agent/opentelemetry-javaagent.jar \
  -Dotel.service.name=greeting-service \
  -Dotel.exporter.otlp.endpoint=http://localhost:4318 \
  -Dotel.instrumentation.micrometer.enabled=true \
  -jar greeting-service/target/greeting-service-1.0.0.jar
```

Terminal 2:

```shell
cd ..
java -javaagent:otel-agent/opentelemetry-javaagent.jar \
  -Dotel.service.name=name-service \
  -Dotel.exporter.otlp.endpoint=http://localhost:4318 \
  -Dotel.instrumentation.micrometer.enabled=true \
  -jar name-service/target/name-service-1.0.0.jar
```
- Once both services are running with the agent, send a test request to generate telemetry:

```shell
curl "http://localhost:8081/greet?name=Riley"
```

- In the Terminal where the OpenTelemetry Collector is running, you should now see trace and metric data printed, confirming that telemetry is being received.
Challenge: Export Metrics to Prometheus
Prometheus allows you to collect, store, and query time-series metrics from your applications. In this step, you'll set it up locally, configure it to scrape metrics exposed by the OpenTelemetry Collector, and bring everything together by running Prometheus.
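For orientation, a minimal sketch of what the scrape job in `prometheus.yml` might look like, assuming the collector exposes metrics via a Prometheus exporter on port 8889 (the lab provides the actual file, so treat the job name, target, and port as assumptions):

```yaml
# prometheus.yml (illustrative)
scrape_configs:
  - job_name: otel-collector
    scrape_interval: 5s
    static_configs:
      - targets: ["localhost:8889"]   # assumed Prometheus exporter port on the collector
```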
🧠 Note: Keep your OpenTelemetry Collector running in its original Terminal.
- Start Prometheus in the fourth Terminal tab.

Terminal 4:

```shell
cd prometheus
./prometheus --config.file=prometheus.yml
```
- Visit Prometheus by clicking {{localhost:9090}}.
- Try searching for a metric like `jvm_memory_used_bytes` or `http_server_requests_seconds_count`.
Note: To verify that Prometheus is running, you can use the following curl command:

```shell
curl http://localhost:9090/-/ready
```
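You can also query a metric directly through Prometheus's HTTP API; for example (the metric name is just one of the built-ins mentioned above):

```shell
# Instant query against the Prometheus HTTP API
curl 'http://localhost:9090/api/v1/query?query=http_server_requests_seconds_count'
```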
Challenge: Export Traces to Jaeger
Now that you’ve configured your applications and the OpenTelemetry Collector to generate and route telemetry, it’s time to visualize your traces.
This is where Jaeger comes in. Jaeger is a distributed tracing system that allows you to see how requests flow through your microservices. It helps you:
- Understand latency across service boundaries
- Identify slow operations or bottlenecks
- Trace the full lifecycle of a request

Once Jaeger starts, open this URL in your browser by clicking on the following link: {{localhost:16686}}
You’ll land on the Jaeger search interface, where you can:
- Select a service (like `greeting-service`).
- View spans and traces from real requests.
- Dive into trace timing, dependencies, and more.
Note: You might need to create some traffic in the sixth Terminal in order to see traces:

```shell
curl http://localhost:8081/greet?name=Kai
```
Once you execute the requests, refresh the Jaeger interface so that the new traces will show up in the dropdown.
Challenge: Conclusion
Congratulations on successfully completing this Code Lab!
You’ve built and instrumented two Java microservices with full observability using OpenTelemetry, Prometheus, and Jaeger. Along the way, you configured metrics, enabled tracing, and learned how distributed systems can be monitored and debugged effectively.
📊 What You’ve Learned
- How to expose built-in and custom metrics using Micrometer annotations and programmatic APIs
- How to configure the OpenTelemetry Collector to receive, process, and export telemetry data
- How to export metrics to Prometheus and visualize them
- How to export traces to Jaeger and view distributed request flows across microservices
- How to test observability setups using curl, health endpoints, and Prometheus API queries
🚀 Ideas to Extend the Project
- Add New Services: Introduce a third service and trace requests across more hops.
- Add Error Scenarios: Inject artificial delays or failures and observe their impact in Jaeger and Prometheus.
- Visualize in Grafana: Use Grafana to create dashboards based on Prometheus metrics.
- Add Logs: Integrate a log exporter (like Loki or FluentBit) for full observability (metrics + traces + logs).
- Set Up Alerts: Configure Prometheus alerting rules for latency or error rate thresholds.