Guided: Adding Observability in ASP.NET Core 10 Web API
In this lab, you will add end-to-end observability to an ASP.NET Core 10 Web API using OpenTelemetry. You will configure structured logging with request context, enable request-level metrics to capture latency and error outcomes, and introduce distributed tracing to correlate activity across service boundaries. By generating live API traffic and examining logs, metrics, and traces, you will validate how observability provides a coherent, actionable view of runtime behavior in a modern .NET application.
Challenge
Overview of the Lab
In this lab, you will explore observability in the context of an ASP.NET Core 10 Web API.
Observability is the ability to understand what a system is doing by examining the signals it emits at runtime. In modern applications, these signals typically take the form of logs, metrics, and traces. Together, they allow you to answer critical operational questions such as what happened, how often it is happening, and where time is being spent during request processing.
Throughout this lab, you will implement structured logging with correlation identifiers, enable request-level telemetry to capture latency and outcomes, and configure distributed tracing to observe execution flow through the request pipeline.
By the end, the API will not only function correctly, but will also provide clear visibility into its runtime behavior.
Challenge
Verify the Project Runs
Before you begin adding structured logging, metrics, and tracing, confirm that the application builds, runs, and responds to requests as expected.
Nothing is modified in this step. You are simply verifying that the foundation is solid before beginning to instrument the application.
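One quick way to exercise the running API from a terminal is with curl. The port and route below are assumptions; substitute the URL shown in your lab environment:

```shell
# Call the Ping endpoint and show the response headers (URL is an assumption).
curl -i http://localhost:5000/api/ping
```

The `-i` flag prints the response headers, which will matter in later steps when the API starts echoing back a correlation ID.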
Once you confirm the API is running properly, you will begin layering in observability. Call the Ping endpoint to validate that the API pipeline is functioning correctly in its initial state.
Challenge
Create Correlation Identifier
In a real system, a single request can produce many log entries and potentially touch multiple components. A correlation identifier provides a consistent value that can be attached to everything that happens during request processing.
In this step, you will introduce a correlation identifier and ensure it is included with every request.
A correlation ID gives you a stable handle for a specific request, allowing you to tie together everything that happens while it is being processed. If a caller sends an X-Correlation-ID header, the code will honor it. If not, a new value will be generated. That value will be stored in the request context so downstream code can access it, and it will be included in the response headers so callers can use it in their own logs. This establishes a foundation that you will build on in later steps when you start enriching logs and emitting telemetry.
Now that you have implemented the correlation ID, test it to confirm that IDs are generated and preserved correctly. After updating the logging scope, run the API and call the Ping and Orders endpoints.
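A minimal sketch of such middleware is shown below. It assumes the header name X-Correlation-ID and storage in HttpContext.Items; the exact key and class shape in the lab's starter code may differ:

```csharp
using Microsoft.AspNetCore.Http;

public class CorrelationIdMiddleware
{
    private const string HeaderName = "X-Correlation-ID";
    private readonly RequestDelegate _next;

    public CorrelationIdMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Honor a caller-supplied ID; otherwise generate a new one.
        var correlationId =
            context.Request.Headers.TryGetValue(HeaderName, out var supplied)
            && !string.IsNullOrWhiteSpace(supplied)
                ? supplied.ToString()
                : Guid.NewGuid().ToString();

        // Store it in the request context so downstream code can read it.
        context.Items[HeaderName] = correlationId;

        // Echo it back so callers can use it in their own logs.
        context.Response.Headers[HeaderName] = correlationId;

        await _next(context);
    }
}
```

The middleware would typically be registered early in the pipeline with `app.UseMiddleware<CorrelationIdMiddleware>();` so that every request, including failing ones, receives an ID.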
Observe the console output for each request.
For every request, you should now see log entries that include:
- CorrelationId
- Endpoint
- Method
- Path
- TraceId (if tracing is enabled)
These values should appear consistently in both the request start and request completion log entries.
The Endpoint value should reflect the route being executed (for example, GET /api/ping or POST /api/orders).
All log entries written during a single request should share the same:
- CorrelationId
- TraceId (if present)
When you call different endpoints, you should see:
- Different correlation IDs per request (unless explicitly provided)
- Different endpoint names
- Separate log entries for each request lifecycle
At this point, your logs are enriched with structured request context. This provides consistent metadata that can be used to trace activity across the application.
Challenge
Enrich the Logging Scope
In this step, you will enrich the existing logging scope in CorrelationIdMiddleware. The middleware already creates a logging scope for each request. You will now populate it with request context so that all logs written during request processing include consistent metadata.
Update the logging scope in CorrelationIdMiddleware.cs to include the current endpoint name. The logging scope also includes a placeholder for trace information.
Now that the scope includes correlation, endpoint, and trace information, verify that logs include these values.
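The enriched scope might look like the following fragment inside the middleware's InvokeAsync method. The property names follow the values this lab lists; the starter code's exact shape may differ:

```csharp
using System.Diagnostics;

// Inside InvokeAsync, after the correlation ID has been resolved:
var endpointName = context.GetEndpoint()?.DisplayName
                   ?? $"{context.Request.Method} {context.Request.Path}";

using (_logger.BeginScope(new Dictionary<string, object?>
{
    ["CorrelationId"] = correlationId,
    ["Endpoint"] = endpointName,
    ["Method"] = context.Request.Method,
    ["Path"] = context.Request.Path.Value,
    // Populated once tracing is enabled; null until then.
    ["TraceId"] = Activity.Current?.TraceId.ToString()
}))
{
    _logger.LogInformation("Request starting");
    await _next(context);
    _logger.LogInformation("Request completed with status {StatusCode}",
        context.Response.StatusCode);
}
```

Because the scope wraps the call to `_next`, every log entry written by downstream middleware and controllers inherits these properties automatically.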
Challenge
Use the Correlation ID in Controllers
In this step, you will start using the correlation ID in your API endpoints. The middleware already assigns a correlation ID to each request and stores it in the request context.
Now you will read that value in your controllers so it can be included in responses and application logs. After calling the endpoints, observe both the API responses and the console output.
For each request, you should see:
- An X-Correlation-ID header in the response.
- A correlationId property included in the JSON response body.
The value in the response header and the response body should match.
Each request should have its own correlation ID unless you explicitly provide one.
When calling the Orders endpoints, you should also see structured log entries in the console.
For example:
- When creating an order, the log entry should include the customer value as a named property.
- When retrieving an order, the log entry should include the orderId value as a named property.
Because the middleware establishes a logging scope, these log entries should also include the correlation ID automatically.
At this point, the correlation ID should appear consistently:
- In the response header
- In the response body
- In the structured log entries
This confirms that request context is flowing from middleware into your application code.
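A controller action that reads the stored ID might look like this sketch; the controller name, route, and item key are assumptions based on the steps above:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

[ApiController]
[Route("api/[controller]")]
public class PingController : ControllerBase
{
    private readonly ILogger<PingController> _logger;

    public PingController(ILogger<PingController> logger) => _logger = logger;

    [HttpGet]
    public IActionResult Get()
    {
        // Read the value the middleware stored in HttpContext.Items.
        var correlationId = HttpContext.Items["X-Correlation-ID"] as string;

        // Named placeholders become structured properties in the log entry,
        // and the middleware's scope adds CorrelationId automatically.
        _logger.LogInformation("Ping handled for {CorrelationId}", correlationId);

        return Ok(new { status = "ok", correlationId });
    }
}
```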
Challenge
Enable Request Telemetry with OpenTelemetry
In the previous steps, you established correlation IDs and enriched your logs with request context.
In this step, you will enable request-level telemetry so the API can emit metrics and traces for incoming requests. This provides visibility into request volume, latency, and outcomes, and adds trace context that can be correlated with your logs. After calling the Ping and Orders endpoints, the console should begin displaying OpenTelemetry output.
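The wiring for this step might look like the following Program.cs sketch, assuming the OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.AspNetCore, and OpenTelemetry.Exporter.Console NuGet packages are installed (the lab's exact package set and exporter may differ):

```csharp
using OpenTelemetry.Metrics;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();

// Emit request metrics and traces to the console. The console exporter is an
// assumption for this lab; production systems typically export to a collector.
builder.Services.AddOpenTelemetry()
    .WithMetrics(metrics => metrics
        .AddAspNetCoreInstrumentation()   // request duration histogram per route
        .AddConsoleExporter())
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()   // one span per incoming HTTP request
        .AddConsoleExporter());

var app = builder.Build();
app.MapControllers();
app.Run();
```

With tracing enabled, Activity.Current is populated during request processing, so the TraceId placeholder added to the logging scope earlier will now carry a real value.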
You will see metric entries for each route you invoked, such as:
- /api/ping
- /api/orders
Each metric entry includes attributes like:
- http.request.method
- http.response.status_code
- http.route
You will also see values such as:
- Count: how many requests were recorded
- Min: the fastest request
- Max: the slowest request
- Sum: total accumulated request duration
Each unique route and HTTP method combination produces its own metric entry.
You will notice the word Histogram and a series of ranges such as:
(-Infinity,0.005] (0.005,0.01] (0.01,0.025] ...
This represents how request durations are distributed across predefined time ranges.
Each range shows how many requests fell within that latency window.
For example:
- A bucket labeled (-Infinity,0.005]: 2 means two requests completed in 5 milliseconds or less.
You do not need to analyze each bucket in detail for this lab. The key takeaway is that request duration is recorded as a distribution, not just a single average value.