You can add consistent monitoring to your whole application with Docker, the same for every container in every environment. This course teaches you how to expose metrics from Linux and Windows containers, collect them, and display them in dashboards.
It's easy to run new and old applications in Docker, but you can't put containerized apps into production without monitoring. In this course, Monitoring Containerized Application Health with Docker, you'll learn how to implement effective monitoring for Linux and Windows containers. First, you'll learn how to gather and visualize metrics from containers using Prometheus and Grafana. Next, you'll see how to add metrics to your application, and export metrics from the Java and .NET runtimes and from the Docker platform. Finally, you'll explore how to build an effective dashboard with a single view over the health of your whole application. When you're finished with this course, you'll be ready to add monitoring to your application and move confidently to production.
Course Overview Hey, how you doing? My name's Elton, and this is Monitoring Containerized Application Health with Docker. I've been running containers in production since before Docker even got to version one, and that experience has shown me that monitoring is one of the biggest advantages that Docker brings, both in production and in development. Containers are the best way to run new and old server applications, but before you go to production you need to understand a new approach to monitoring, one that works when you have dozens or hundreds of short-lived containers. This course gives you that understanding. I'll explain how monitoring works in Docker, how you expose metrics from your containers, and how you run other containers to collect those metrics and visualize them in a friendly dashboard. You'll learn how to build three levels of monitoring into your dashboards, so you can see what's happening in your applications, in your containers, and in the Docker platform itself. I'll be using Java apps in Linux containers and .NET apps in Windows containers, so you'll see how you can make monitoring consistent across different technology stacks, and I'll be running on single Docker servers and on Docker Swarm, so you'll learn how container monitoring works the same way in every environment. By the end of the course you'll understand how to add effective monitoring to your own applications using industry standard tools and techniques, so stick with me, and in just under 3 hours you'll learn all about monitoring containerized application health with Docker.
Collecting Metrics with Prometheus A metrics server is the central point for collecting and storing monitoring data in containerized applications. Prometheus is the most popular metrics server. It's open-source, cross-platform, Docker friendly, and extremely powerful, and in this module you'll learn all about it. My name's Elton and this is Collecting Metrics with Prometheus, the next module in Pluralsight's Monitoring Containerized Application Health with Docker. In the last module you learned how all the pieces of the monitoring solution fit together, and now you'll get started with the main component. I'll start by showing you how to run Prometheus in Docker and explaining why the metrics server should be in a container alongside your application containers. The Prometheus team provide a Docker image for running on Linux, but not for Windows, so I'll show you how to use the standard Linux image and how to package your own Windows Docker image. Prometheus is driven by a simple configuration file, which you've already seen briefly, and in this module you'll see exactly what you can configure, and also learn the options for providing your configuration to the Prometheus container. The last thing for this module is to cover the types of data Prometheus can work with. You've seen basic incrementing counters, and there are three other data types you need to be aware of to cover all the monitoring scenarios. I'll also spend a bit more time in the Prometheus UI to show you how to query those different data types. First let's understand why Prometheus should run in its own container.
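To give a sense of the simple configuration file this module walks through, here's a minimal prometheus.yml sketch. The scrape interval, job name, and target address are illustrative assumptions, not values from the course; in Docker, the target hostname is typically the container or service name, which Docker's DNS resolves for you:

```yaml
# Minimal Prometheus configuration sketch.
# The interval, job name, and target are illustrative assumptions.
global:
  scrape_interval: 15s          # how often Prometheus polls each target

scrape_configs:
  - job_name: 'web-app'               # hypothetical job name
    static_configs:
      - targets: ['web-app:8080']     # container name, resolved by Docker DNS
```

Running Prometheus in a container alongside the app means this file can reference other containers by name, which is a large part of why the metrics server belongs on the same Docker network as the application.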
Exposing Runtime Metrics to Prometheus There are different types of information you can use to monitor your apps, and a lot of really useful stuff comes for free in the operating system and the application runtime. You can make that available to your monitoring server just by packaging an existing utility inside your container image. You don't even need to change application code. My name's Elton, and this is Exposing Runtime Metrics to Prometheus, the next module in Pluralsight's Monitoring Containerized Application Health with Docker. All the major application runtimes collect their own metrics; Java has JVM metrics, and .NET uses Windows Performance Counters. Web applications also have metrics collected by the web server; Tomcat and IIS record useful data about requests and responses. A lot of essential information is already collected by the runtime, you just need a way to expose it from your container. That's what this module is all about, showing you how to take advantage of monitoring data that's already being collected for you. I'll show you how to package a metrics exporter utility in your application containers. That exporter reads the metrics your app runtime and operating system are already collecting, and makes them available in Prometheus format as a metrics endpoint. Some of the stats you've already seen being collected by Prometheus have come from exporting these runtime metrics, and this module's going to show you how to do it. I'll cover the specifics of Java and Tomcat, and .NET and IIS, and you'll also learn the principles, which apply to any runtime that collects metrics. I'll start by talking about exporter utilities.
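As a hedged sketch of the packaging idea, here's what bundling the Prometheus JMX exporter into a Tomcat image might look like. The base image tag, file paths, and port are assumptions for illustration; the point is that the exporter runs as a Java agent inside the existing JVM, so no application code changes are needed:

```dockerfile
# Illustrative sketch only: image tag, paths, and port are assumptions.
FROM tomcat:9-jre11

# Bundle the JMX exporter agent and its config with the app image
COPY jmx_prometheus_javaagent.jar /agent/jmx_prometheus_javaagent.jar
COPY jmx-config.yaml /agent/jmx-config.yaml

# Attach the agent to the JVM; it serves the runtime metrics Tomcat and
# the JVM already collect, in Prometheus format, on port 9404
ENV CATALINA_OPTS="-javaagent:/agent/jmx_prometheus_javaagent.jar=9404:/agent/jmx-config.yaml"
```

Prometheus would then scrape port 9404 on this container as an extra metrics endpoint, alongside any metrics the app exposes itself.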
Exposing Docker Metrics to Prometheus In a containerized solution, everything runs in containers, all the components of your monitoring architecture, as well as all the parts of your app. It's critical to have insight into what the Docker platform is doing to manage those containers, if you want to be confident about going to production. My name's Elton and this is Exposing Docker Metrics to Prometheus, the next module in Pluralsight's Monitoring Containerized Application Health with Docker. The Docker engine has a built-in feature to export metrics in Prometheus format. It provides all the key metrics you need to monitor your containers in production, and the feature works in the same way across all versions of Docker. You can enable metrics in Docker Engine on the server, and in Docker Desktop on Mac and Windows, so you get the same consistency across all your environments. There are metrics covering the engine and containers, and in Swarm mode there are additional metrics about the cluster. There are even metrics about image builds, so when your CI process is all running in Docker, you can add monitoring to your CI servers using the same tools you use for application monitoring. In this module I'll show you how to enable metrics in Docker Desktop on Mac and Windows 10, and Docker Engine on Ubuntu and Windows Server. I'll look at the key metrics you get from the Docker engine and the Swarm, and I'll scrape those metrics and query them in Prometheus. I'll start by looking at how the Docker metrics work.
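For a pointer to what enabling the feature involves, this is the daemon.json fragment that turns on the engine's built-in Prometheus endpoint. The address and port are the commonly used values rather than anything mandated, and note that the engine still flags the metrics feature as experimental:

```json
{
  "metrics-addr": "0.0.0.0:9323",
  "experimental": true
}
```

After restarting the Docker engine, Prometheus can scrape the host on port 9323 just like any other target, which is what makes platform monitoring consistent with application monitoring.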