Managing Load Balancing and Scale in Docker Swarm Mode Clusters

Swarm mode is the clustering technology built right into Docker. This course teaches you how load balancing and scale work in swarm mode, so you can run reliable and scalable apps in production.
Course info
Level: Intermediate
Updated: Mar 23, 2018
Duration: 1h 58m
Table of contents
Course Overview
Understanding Load Balancing and Service Discovery
Scaling Services and Nodes in Swarm Mode
Managing Request Routing and Data Storage
Supporting Production Maintenance and Deployments
Description

Docker swarm mode is a production-grade container orchestrator with built-in features for load balancing and scaling your applications. In this course, Managing Load Balancing and Scale in Docker Swarm Mode Clusters, you'll learn how to deploy and manage applications in swarm mode for high availability, high performance, and easy scaling. First, you'll learn how load balancing and service discovery work in swarm mode. Then you'll learn how to scale your services and your swarm, with both Linux and Windows nodes. Finally, you'll learn how to run multiple applications and maximize the use of your cluster, and how swarm mode supports production maintenance and deployments. When you’re finished with this course, you will have the skills and knowledge to run reliable, high-performance apps in production with Docker swarm mode.
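
For a flavor of what that looks like in practice, here is a minimal sketch of deploying a replicated service behind swarm's built-in load balancer; the service and image names (web, my-web-app) are placeholders:

    # Create a service with three replicas; the routing mesh load-balances
    # requests arriving on port 80 of any node across all replicas.
    docker service create \
      --name web \
      --replicas 3 \
      --publish published=80,target=80 \
      my-web-app

    # Check how many replicas are running, and where.
    docker service ls
    docker service ps web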

About the author

Elton is an independent consultant specializing in systems integration with the Microsoft stack. He is a Microsoft MVP, blogger, and practicing Technical Architect.

More from the author
Modernizing .NET Framework Apps with Docker (Intermediate, 3h 42m, Dec 28, 2017)
Section Introduction Transcripts

Course Overview
Hey, how are you doing? My name is Elton and this is Managing Load Balancing and Scale in Docker Swarm Mode Clusters. I've always found swarm mode to be a powerful container orchestrator that's easy to use, and I've been running it in production since it was released in 2016. If you're ready to take your containerized apps to production, but you need to understand how the orchestrator helps you with load balancing, scale, updates, and maintenance, then this course is for you. Over the next 2 hours I'll show you how service discovery and load balancing work in swarm mode, and the configuration options that Docker gives you. I'll show you how to scale up your services and your swarm, and how to get the most out of it by running multiple apps and fronting them with a reverse proxy running in a container. I'll show you how to take nodes out of the swarm safely for maintenance, and how you can configure your services with rolling updates and automatic rollbacks, which make your deployments fast and reliable. And my demo solution uses a mixture of Windows and Linux containers, so I'll be showing you how all that works with a hybrid swarm, made up of both Linux and Windows nodes. By the end of the course, you'll understand just how powerful swarm mode is, and you'll be comfortable moving your Dockerized apps into production knowing that you can scale and manage them easily.
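
As a rough sketch of the update and maintenance commands this overview mentions (the service name web, image tag, and node name are placeholders):

    # Roll out a new image version two tasks at a time, pausing 10s between
    # batches, and roll back automatically if the update fails.
    docker service update \
      --image my-web-app:v2 \
      --update-parallelism 2 \
      --update-delay 10s \
      --update-failure-action rollback \
      web

    # Drain a node before maintenance; swarm reschedules its tasks elsewhere.
    docker node update --availability drain node-3

    # Bring it back into service when maintenance is done.
    docker node update --availability active node-3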

Scaling Services and Nodes in Swarm Mode
Hey, how are you doing? I'm Elton and this is Scaling Services and Nodes in Swarm Mode, the next module in Managing Load Balancing and Scale in Docker Swarm Mode Clusters. Now that you understand how to manage the traffic getting into the services running on your swarm, and how to manage traffic between services, this module is all about scale. I'll cover the different options for scaling services in swarm mode, which give you high availability and high throughput for your applications. Scale, load balancing, and service discovery are linked together, and I'll show you how the options work in different combinations. Scaling services lets you get the most out of your compute power, but when demand increases you'll eventually need to scale out the cluster. I'll cover that by adding nodes: I'll join Windows and Linux worker nodes to my cluster, which gives me a hybrid swarm that can run a mixture of Linux and Windows containers. Then I'll show you what happens when you have services running at scale and you add more nodes. I'll be using an evolution of my demo app to show off these features, so before I get started, I'll walk through how the app has changed.
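
A minimal sketch of that scaling workflow; the service name, token, and addresses are placeholders:

    # Scale a running service up or down.
    docker service scale web=10

    # On a manager, print the command a new worker needs to join the swarm.
    docker swarm join-token worker

    # On the new Linux or Windows worker, run the printed join command, e.g.:
    docker swarm join --token <worker-token> <manager-ip>:2377

    # In a hybrid swarm, pin a service to one OS with a placement constraint.
    docker service create \
      --name api \
      --constraint node.platform.os==windows \
      my-api-image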

Managing Request Routing and Data Storage
Hey, how are you doing? I'm Elton and this is Managing Request Routing and Data Storage, the next module in Managing Load Balancing and Scale in Docker Swarm Mode Clusters. In this module I'm going to show you how to get the most from your cluster: how to run multiple applications and make them all publicly available on standard HTTP ports. I'll also cover stateful applications, and show you how to run a stateful app at scale in the swarm, so you get redundancy for your data storage too. Right now my swarm is hosting a stateless web app and a stateless REST API. One is running on Linux nodes and the other on Windows nodes, so I have two different public entry points, using different load balancers. I want a single entry point to the cluster, so I can point my public DNS CNAMEs at one load balancer address and have the swarm route application requests for me. Swarm mode doesn't have native functionality to support that, so in this module I'll be deploying a proxy in a container, technically a reverse proxy. That will be the public entry point for all the apps on my cluster, and it will route requests to the right set of containers. Swarm mode does support Docker volumes, so you can have stateful services deployed at scale. But the swarm doesn't automatically replicate data between nodes, so your application needs to manage replication. I'll be showing you that in this module, using Nginx for the reverse proxy and Elasticsearch as my stateful application.
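
A rough sketch of those two pieces, with simplified names; a real proxy also needs a routing configuration that maps hostnames to backend services, which is omitted here:

    # Publish only the proxy, so it becomes the single public entry point;
    # it routes requests by hostname to the other services on the overlay network.
    docker network create --driver overlay frontend
    docker service create \
      --name proxy \
      --network frontend \
      --publish published=80,target=80 \
      nginx

    # Give each Elasticsearch task a named volume for its data. Swarm creates
    # the volume on whichever node runs the task, but it does not replicate
    # the data between nodes; Elasticsearch handles that itself.
    docker service create \
      --name elasticsearch \
      --network frontend \
      --mount type=volume,source=es-data,target=/usr/share/elasticsearch/data \
      elasticsearch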