
Guided: Implement OpenTelemetry for .NET and C# Observability

Welcome to the Code Lab: Guided: Implement OpenTelemetry for .NET and C# Observability. This lab is an essential resource for .NET and C# developers looking to enhance their applications with robust observability features. How do you instrument a .NET application for distributed tracing? How can you capture and analyze performance metrics across microservices? This Code Lab is designed to help you answer these questions through practical, hands-on learning. Modern enterprise applications require effective monitoring to ensure performance and reliability. By the end of this Code Lab, you will have the skills to configure the OpenTelemetry SDK in a .NET/C# application, collect valuable telemetry data, and export it to tools like Jaeger or Azure Monitor. This will empower you to diagnose bottlenecks, optimize your applications, and improve overall system resilience.


Path Info

Level: Beginner
Duration: 45m
Published: Feb 13, 2025


Table of Contents

  1. Challenge

    Introduction and Setup

    Modern software systems often comprise multiple services and technologies working together. As these systems grow in complexity, it becomes harder to understand their behavior, troubleshoot performance bottlenecks, and detect failures before they impact users. This is where observability comes in.

    Observability is the ability of developers and operators to infer a system’s internal state by analyzing the data it generates—primarily logs, metrics, and traces. By instrumenting your services and collecting telemetry data, you can quickly identify the root causes of performance issues, spot trends in system metrics (such as CPU and memory usage), and pinpoint exactly where requests are slowing down or failing.

    OpenTelemetry (OTel) is an industry-standard framework for capturing and exporting telemetry data across languages, frameworks, and platforms. In .NET, OpenTelemetry offers libraries and SDKs that make it easy to integrate tracing, metrics, and logging into your applications. Once your application is instrumented, you can export data to visualization and analysis tools (e.g., Jaeger, Azure Monitor, Grafana). The project folder WeatherForecast contains a .NET web API to which you will apply OpenTelemetry for tracing, metrics, and exporting.

    Run the following commands in the terminal, and make sure you always run them from inside the WeatherForecast folder:

    cd WeatherForecast
    dotnet run
    

    Then open another terminal tab and run the following curl command to call the API and inspect the output:

    cd WeatherForecast
    curl -k http://localhost:5162/weatherforecast
    

    Note: Whenever you change the code while working on the lab's tasks, you need to re-run the app in the terminal so that it builds and runs with your changes. To do that, stop the running session with CTRL+C in the terminal, then run dotnet run again.

    Explore the file WeatherForecast/WeatherForecast.csproj to see the required OpenTelemetry dependencies, which include:

    1. OpenTelemetry
    2. OpenTelemetry.Extensions.Hosting
    3. OpenTelemetry.Instrumentation.AspNetCore
    4. OpenTelemetry.Instrumentation.Http
    5. OpenTelemetry.Exporter.Jaeger
    6. Azure.Monitor.OpenTelemetry.Exporter
    7. OpenTelemetry.Exporter.Console
    8. OpenTelemetry.Metrics

    If you get stuck on any of the tasks, you can check the Solutions folder.

  2. Challenge

    Implementing Distributed Tracing

    Distributed Tracing provides insight into how a single request flows through your application (and potentially through multiple services). Each request is represented as a trace, consisting of one or more spans. A span marks a specific operation—such as a controller action or a call to a downstream service—and includes metadata like timing, status, and attributes (key-value pairs describing the operation). With OpenTelemetry, you can automatically trace common operations (like incoming HTTP requests in ASP.NET Core or outgoing HTTP calls via HttpClient) and also create custom spans around critical sections of your code. These traces can then be sent to backends such as Jaeger, Azure Monitor, or other compatible observability tools.
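
    As a rough sketch of what this wiring can look like, the snippet below configures the tracer provider in Program.cs and creates a custom span with an ActivitySource. The service name (WeatherForecast), the source name (WeatherForecast.Custom), and the minimal endpoint are illustrative assumptions, not the lab's exact solution code.

    // Program.cs -- a minimal tracing sketch; names are illustrative assumptions
    using System.Diagnostics;
    using OpenTelemetry.Resources;
    using OpenTelemetry.Trace;

    var builder = WebApplication.CreateBuilder(args);

    // A custom ActivitySource lets you create spans around your own code.
    var activitySource = new ActivitySource("WeatherForecast.Custom");

    builder.Services.AddOpenTelemetry()
        .ConfigureResource(resource => resource.AddService("WeatherForecast"))
        .WithTracing(tracing => tracing
            .AddAspNetCoreInstrumentation()      // spans for incoming HTTP requests
            .AddHttpClientInstrumentation()      // spans for outgoing HttpClient calls
            .AddSource("WeatherForecast.Custom") // collect spans from the custom source
            .AddConsoleExporter());              // print spans to the console for now

    var app = builder.Build();

    app.MapGet("/weatherforecast", () =>
    {
        // A custom span around this endpoint's work.
        using var activity = activitySource.StartActivity("GenerateForecast");
        activity?.SetTag("forecast.days", 5);

        return Results.Ok(new { summary = "Mild", temperatureC = 21 });
    });

    app.Run();

    With the console exporter enabled, re-running the app and repeating the earlier curl request should print the recorded spans in the terminal running dotnet run.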

  3. Challenge

    Capturing Metrics

    Metrics

    Where tracing shows the path of a single request, metrics track aggregated performance indicators over time (e.g., request rate, CPU usage, memory usage). You can also define business or domain-specific metrics, such as how many times a specific endpoint was invoked.
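
    As a hedged sketch of what that can look like in this app, the snippet below defines a Meter with a counter for endpoint invocations and registers it with the OpenTelemetry metrics pipeline; the meter and instrument names are illustrative assumptions, not the lab's exact solution.

    // Program.cs -- a minimal metrics sketch; names are illustrative assumptions
    using System.Diagnostics.Metrics;
    using OpenTelemetry.Metrics;

    var builder = WebApplication.CreateBuilder(args);

    // A Meter groups related instruments; this Counter counts endpoint invocations.
    var meter = new Meter("WeatherForecast.Metrics");
    var forecastRequests = meter.CreateCounter<long>("weatherforecast.requests.count");

    builder.Services.AddOpenTelemetry()
        .WithMetrics(metrics => metrics
            .AddAspNetCoreInstrumentation()      // built-in ASP.NET Core request metrics
            .AddMeter("WeatherForecast.Metrics") // collect instruments from the custom meter
            .AddConsoleExporter());              // periodically print aggregated values

    var app = builder.Build();

    app.MapGet("/weatherforecast", () =>
    {
        forecastRequests.Add(1); // increment the domain-specific counter on every request
        return Results.Ok(new { summary = "Mild", temperatureC = 21 });
    });

    app.Run();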

  4. Challenge

    Exporting Telemetry Data

    Now that you have traces and metrics being captured locally, the next step is to send that data to one or more observability backends. For example, you can configure:

    • Jaeger (commonly used for distributed tracing)
    • Azure Monitor (covers both traces and metrics in Application Insights)

    You can choose any combination of exporters that suits your environment. Some organizations even export to multiple backends simultaneously.

    Note: In this task you will learn how to enable Jaeger and Azure Monitor in your app, but you will not be able to browse their UIs in the lab's environment.

    Azure Monitor

    To use Azure Monitor, you need to configure an Application Insights resource in the Azure portal, which will pick up your application's traces.
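
    A minimal sketch of adding both exporters to the Program.cs setup from the earlier tasks is shown below; the Jaeger agent host and port are the common defaults, and the Application Insights connection string is a placeholder you would replace with the value from your own resource.

    // Exporter wiring in Program.cs -- a sketch; the agent host/port are Jaeger's
    // common defaults and the connection string is a placeholder, not a lab value.
    using Azure.Monitor.OpenTelemetry.Exporter;
    using OpenTelemetry.Metrics;
    using OpenTelemetry.Trace;

    builder.Services.AddOpenTelemetry()
        .WithTracing(tracing => tracing
            .AddAspNetCoreInstrumentation()
            .AddJaegerExporter(options =>
            {
                options.AgentHost = "localhost"; // Jaeger agent host
                options.AgentPort = 6831;        // Jaeger agent UDP port
            })
            .AddAzureMonitorTraceExporter(options =>
            {
                // Copy this from your Application Insights resource in the Azure portal.
                options.ConnectionString = "<your-Application-Insights-connection-string>";
            }))
        .WithMetrics(metrics => metrics
            .AddAspNetCoreInstrumentation()
            .AddAzureMonitorMetricExporter(options =>
            {
                options.ConnectionString = "<your-Application-Insights-connection-string>";
            }));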

  5. Challenge

    Conclusion and Best Practices

    By now, you have traces and metrics flowing from your .NET application to Jaeger, Azure Monitor, or both. This provides near real-time insights into how your application behaves under different conditions and loads.

    However, observability is a continuous journey. As your system grows, you’ll want to:

    • Correlate Logs with Traces: Use the same trace and span identifiers in logs, so you can jump from logs to specific spans and vice versa.
    • Optimize Sampling: If your application processes a high volume of requests, consider sampling strategies to balance overhead and data granularity (a sketch follows this list).
    • Add More Instrumentation: Extend instrumentation to database layers, message queues, or external APIs.
    • Adopt Consistent Naming Conventions: Standardizing metric and span naming across teams ensures cohesive dashboards.
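
    For the sampling point above, here is a hedged example of plugging a ratio-based sampler into the tracer provider; the 25% ratio is an arbitrary illustration, not a recommendation from this lab.

    // Sampling sketch: keep roughly 25% of root traces and honor the parent's
    // decision for child spans. The ratio is an arbitrary example value.
    using OpenTelemetry.Trace;

    builder.Services.AddOpenTelemetry()
        .WithTracing(tracing => tracing
            .SetSampler(new ParentBasedSampler(new TraceIdRatioBasedSampler(0.25)))
            .AddAspNetCoreInstrumentation()
            .AddConsoleExporter());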
