Docker in Production Using Amazon Web Services

Learn how to harness the power of Docker and Amazon Web Services to test, build, deploy and operate your container applications in production using Ansible, CloudFormation, Lambda and so much more.
Course info
Rating
(33)
Level
Intermediate
Updated
Dec 1, 2017
Duration
10h 1m
Table of contents
Course Overview
Course Introduction
Creating the Sample Application
Creating Docker Release Images
Setting up AWS Access
Running Docker Applications Using the EC2 Container Service
Customizing ECS Container Instances
Deploying AWS Infrastructure Using Ansible and CloudFormation
Architecting and Preparing Applications for ECS
Defining ECS Applications Using Ansible and CloudFormation
Deploying ECS Applications Using Ansible and CloudFormation
Creating CloudFormation Custom Resources Using AWS Lambda
Managing Secrets in AWS
Managing ECS Infrastructure Lifecycle
Auto Scaling ECS Applications
Continuous Delivery Using CodePipeline
Description

Docker has become the modern standard for distributing and running cloud native applications, whilst Amazon Web Services provides the world's most powerful and popular cloud computing platform. Together, these technologies can help you deliver your applications faster, more reliably, and at scale. In this course, Docker in Production Using Amazon Web Services, you'll learn how to master these technologies and create a powerful framework and toolset that you can use in the real world for your own applications.

First, you'll discover how to leverage the power of Ansible and CloudFormation to create a generic and reusable tool chain for deploying not just Docker applications, but any cloud service you can think of, using a fully automated, infrastructure-as-code approach. Next, you'll use this tool chain to deploy foundational resources in your AWS account: EC2 Container Registry repositories, Virtual Private Cloud (VPC) networking resources, and an HTTP proxy service that secures outbound communication for your applications.

With this foundation in place, you'll create a production-class environment for a microservices application that leverages a number of native AWS services, including the EC2 Container Service, Relational Database Service, Auto Scaling groups, and Application Load Balancers. Finally, you'll learn how to solve operational challenges, including how to extend CloudFormation to perform custom provisioning tasks using AWS Lambda functions, how to securely manage and inject secrets into your Docker applications and supporting resources, and so much more.

By the end of this course, you'll have developed an advanced understanding of how you can use Docker and AWS to deploy and run your applications faster, smarter, and more reliably than ever before.

About the author

Justin is a full stack technologist working with organizations to build large-scale applications and platforms, with a focus on end-to-end application architecture, cloud, continuous delivery, and infrastructure automation.

More from the author
Continuous Delivery Using Docker And Ansible
Intermediate
7h 13m
May 10, 2016
Section Introduction Transcripts

Course Overview
Hi everyone, my name is Justin Menga, and welcome to my course, Docker in Production Using Amazon Web Services. Docker provides you with the best way to build, package, and run modern applications, whilst Amazon Web Services is the world's most popular cloud computing platform. And in this course, you will learn how you can leverage both of these exciting technologies to create a truly powerful platform from which you can deploy and operate your mission-critical production applications. Some of the major topics that we will cover include building, testing, and publishing Docker images; deploying Infrastructure as Code using Ansible and CloudFormation; running Docker applications using the AWS EC2 container service; addressing key operational challenges such as running custom deployment tasks, secrets management, and auto scaling your applications; and building an end-to-end continuous delivery pipeline. By the end of this course, you will have a complete framework and methodology to build, test, deploy, and operate your Docker applications at scale in a fully-automated fashion using Amazon Web Services. I hope you'll join me on this journey to learn how to run your Docker applications in the cloud, with the Docker in Production Using Amazon Web Services course, at Pluralsight.

Course Introduction
Hi, my name is Justin Menga, and welcome to Docker in Production Using AWS. Docker has taken the technology world by storm, and in the very short time since its inception in 2013, has established itself as the modern technology of choice to universally package, distribute, and run applications locally, in your datacenter, and in the public cloud. Amazon Web Services is by a significant margin recognized as the leading public cloud provider in terms of market share, maturity, and functionality, and today is trusted by millions of customers to run their production workloads with all of the benefits the cloud promises to provide. Together, Docker and AWS make a powerful combination. And in this course, you will learn how to build, deploy, and run Docker applications in production using AWS.

Creating the Sample Application
Hi, my name is Justin Menga, and welcome to Creating the Sample Application. Before we can deploy the sample application to AWS, it is important to have a solid understanding of how the application is architected, how to build and test the application, and the expected end-user functionality. In this module, we're going to install and run the sample application in your local development environment, which will provide you with the core foundational understanding of how to build, run, and test the application. The sample application is based upon a microservices architecture, so we will first discuss each of the different microservices and how they interact to provide the end-to-end functionality of the application. With a high-level understanding of the application, we will proceed to fork the application's source code from GitHub and clone the application's source locally. We will briefly examine the structure of the application repository so that you have an understanding of where to find the source code, test specifications, and build specifications for each of the application microservices. We will proceed to build the various application artifacts for each microservice, and then start and run the application. At this point, we will be able to take the application for a test drive, after which we will run acceptance tests that verify the external end-to-end functionality of the application.

Creating Docker Release Images
Hi, my name is Justin Menga, and welcome to Creating Docker Release Images. In the previous module, you were introduced to the sample microtrader application, and you were able to install, build, run, and test the application in your local development environment. We verified the application is functional and operating as expected, but before we are able to deploy and operate the application in AWS, we need to package each of the application microservices into Docker images. In this module, we will learn how to run a Docker-based workflow for building, testing, and publishing Docker release images for the sample application. The workflow is based upon a powerful continuous delivery methodology I describe in my Pluralsight course, Continuous Delivery Using Docker And Ansible. And for this course, I have already created the workflow for you. To ensure you have an understanding of the workflow that has been created, we will first discuss the target end-to-end continuous delivery architecture, and then describe what I refer to as the release pipeline workflow for building, testing, and publishing Docker release images. The release pipeline workflow consists of a number of stages, the first of which is the test stage, which uses Docker to run unit tests and build application artifacts. The next stage is the release stage, which is responsible for building and testing Docker release images for each of the application services. The final stages of the workflow allow you to tag and publish Docker release images into a Docker registry, ready for deployment into the various AWS environments we will create later on in this course.
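
The stages described above can be sketched as a small script that composes the docker CLI commands for each pipeline stage. This is a minimal illustration of the workflow's shape, not the course's actual build tooling; the service name, registry address, Dockerfile names, and make targets are hypothetical placeholders.

```python
def pipeline_commands(service, registry, tag):
    """Return the docker CLI commands run at each release pipeline stage."""
    dev_image = f"{service}-dev"
    release_image = f"{service}:{tag}"
    return {
        # Test stage: build a development image and run unit tests inside it
        "test": [
            f"docker build -t {dev_image} -f Dockerfile.dev .",
            f"docker run --rm {dev_image} make test",
        ],
        # Release stage: build the release image and verify it works
        "release": [
            f"docker build -t {release_image} .",
            f"docker run --rm {release_image} make acceptance",
        ],
        # Tag and publish stages: retag for the registry and push
        "publish": [
            f"docker tag {release_image} {registry}/{release_image}",
            f"docker push {registry}/{release_image}",
        ],
    }

# Example: commands for a hypothetical quote-service image
commands = pipeline_commands(
    "quote-service", "123456789012.dkr.ecr.us-east-1.amazonaws.com", "1.0.0"
)
```

Keeping each stage as an explicit list of commands is what makes the workflow portable: the same sequence runs identically on a developer workstation and on a continuous delivery server.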

Setting up AWS Access
Hi, my name is Justin Menga and welcome to Setting Up AWS Access. We are almost ready to run our sample application in AWS but before we can do this, we need to ensure our AWS account is configured appropriately to allow authorized users the ability to publish, deploy and run applications and infrastructure in AWS. We will first discuss best practices for configuring the AWS Identity and Access Management or IAM service which includes configuring multi-factor authentication and delegating privileges to IAM roles which users can then assume. We will set up IAM for our account following these core principles, configuring IAM policies, roles, users and groups. We will also set up operational access to our AWS infrastructure creating an EC2 key pair for securely providing SSH access to EC2 hosts and then configuring the AWS command line tool to work with our IAM setup to provide multi-factor authentication and automatic role assumption from the command line.
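
To make the role-assumption pattern concrete, here is a sketch of the kind of IAM trust policy that lets users in the account assume a privileged role only when they have authenticated with MFA. The account ID is a placeholder, and this is an illustrative policy rather than the exact one built in the module.

```python
import json

# Placeholder AWS account ID for illustration
ACCOUNT_ID = "123456789012"

# Trust policy for an admin role: any IAM principal in this account may
# assume the role, but only if multi-factor authentication was used.
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            # Deny assumption unless the caller authenticated with MFA
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}

print(json.dumps(assume_role_policy, indent=2))
```

The AWS CLI can then be configured with a profile that references this role's ARN and the user's MFA device, so role assumption and MFA prompts happen automatically from the command line.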

Running Docker Applications Using the EC2 Container Service
Hi. My name is Justin Menga, and welcome to Running Docker Applications Using the EC2 Container Service. In the previous modules, you were introduced to the sample microtrader application and learned how to test, build, and run the application locally, and how to create, tag, and publish Docker release images for each of the microtrader application services. We are now ready to run our application in AWS, and in this module, we will learn how to run Docker applications in AWS using the EC2 Container Service. We will first discuss the EC2 Container Service and EC2 Container Registry, learning about the high-level architectural components of these services and how they interact with each other. We will then proceed to publish each of our Docker images to the EC2 Container Registry, or ECR, which will require us to first create ECR repositories for each image and then publish each image using the release pipeline workflow included in the sample microtrader application. With our Docker images published in ECR, we will establish the various ECS components required to run our applications, including creating ECS clusters, creating ECS task definitions, and defining ECS services and tasks that represent the required runtime state of our application.
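
A task definition is at the heart of the ECS components described above. The sketch below shows the minimal shape of the JSON you would pass to `aws ecs register-task-definition`; the family name, image URI, and port values are placeholders rather than the actual microtrader settings.

```python
# Minimal ECS task definition sketch: one essential container with CPU and
# memory reservations and a dynamically mapped host port.
task_definition = {
    "family": "quote-service",
    "containerDefinitions": [
        {
            "name": "quote-service",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/quote-service:1.0.0",
            "cpu": 256,       # CPU units (1024 units = one vCPU)
            "memory": 512,    # hard memory limit in MiB
            "essential": True,
            "portMappings": [
                # hostPort 0 asks ECS to assign a dynamic host port, which
                # lets multiple copies of the task share one instance
                {"containerPort": 8080, "hostPort": 0, "protocol": "tcp"}
            ],
        }
    ],
}
```

An ECS service then references this task definition by family and revision, and keeps the desired number of task copies running across the cluster.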

Architecting and Preparing Applications for ECS
Hi, my name is Justin Menga, and welcome to Architecting and Preparing Applications for ECS. In the previous module we established the core foundational infrastructure required to support the deployment of our microtrader application to AWS, and learned how we can use Ansible and CloudFormation to define infrastructure as code and very quickly create, update, and destroy complete environments. We are now ready to focus on the task of running the microtrader application in AWS. In this module we will first discuss how we will architect the CloudFormation stack that will run the microtrader application and its supporting resources, and then discuss specific challenges we will face running the microtrader application in AWS, including how microtrader cluster discovery works in AWS environments, where traditional application discovery techniques do not work. Another challenge we need to overcome is the ability to generate configuration files on the fly, so that our application is configured appropriately both for a local Docker environment and for an AWS ECS environment. We will learn how to configure our containers to use a tool called confd that can generate environment-specific configuration files on the fly in a manageable and scalable manner.
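
The core idea behind generating configuration on the fly can be illustrated in a few lines of Python: render a configuration file from a template using values injected into the container's environment. This is only a sketch of the pattern that confd implements (confd itself uses its own template format and backends); the variable names and defaults are hypothetical.

```python
import os
from string import Template

# Template for an application configuration file; placeholders are filled
# from the container environment at startup.
CONFIG_TEMPLATE = Template("db.host=$DB_HOST\ndb.port=$DB_PORT\n")

def render_config(env):
    """Render the configuration, falling back to local-development defaults."""
    return CONFIG_TEMPLATE.substitute(
        DB_HOST=env.get("DB_HOST", "localhost"),  # local default
        DB_PORT=env.get("DB_PORT", "5432"),
    )

# Locally the defaults apply; in ECS, the task definition's environment
# settings supply the real values, so the same image runs in both places.
config = render_config(os.environ)
```

The payoff is that a single immutable Docker image can run unchanged in every environment, with only the injected environment differing.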

Defining ECS Applications Using Ansible and CloudFormation
Hi, my name is Justin Menga, and welcome to Defining ECS Applications Using Ansible and CloudFormation. In the previous module we discussed the architecture of the CloudFormation stack we will create to run the Microtrader application in AWS, and in this module we will create our initial stack and define all of the supporting resources. We will get started by establishing an Ansible playbook for the stack, leveraging the same AWS CloudFormation role we introduced in earlier modules, and define a development environment for the Microtrader application. We will then proceed to define the various components of the stack in our playbook: creating an EC2 Auto Scaling group, defining a public and internal application load balancer and associated DNS records, configuring an RDS instance that will run the audit database, creating CloudWatch log groups for storing system and container logs, and configuring security groups and IAM roles that link our resources and allow them to communicate with each other. By the end of this module you will have a firm understanding of how to build a complex set of AWS resources and combine them to form a strong architectural foundation for ECS-based applications.
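
To give a feel for what the Ansible role ultimately produces, here is a pared-down sketch of a CloudFormation template holding two of the stack's resources. The resource names, cluster name, and retention setting are illustrative only, not the actual values used in the course.

```python
import json

# A minimal CloudFormation template with an ECS cluster and a CloudWatch
# log group, of the kind the Ansible playbook would generate and deploy.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Microtrader development stack (illustrative sketch)",
    "Resources": {
        "ApplicationCluster": {
            "Type": "AWS::ECS::Cluster",
            "Properties": {"ClusterName": "microtrader-dev"},
        },
        "ApplicationLogGroup": {
            "Type": "AWS::Logs::LogGroup",
            "Properties": {
                "LogGroupName": "/microtrader-dev/ecs",
                "RetentionInDays": 7,  # keep development logs for one week
            },
        },
    },
}

print(json.dumps(template, indent=2))
```

Because the template is generated from playbook variables, the same definition can be stamped out per environment (development, staging, production) with only the variable values changing.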

Deploying ECS Applications Using Ansible and CloudFormation
Hi! My name is Justin Menga, and welcome to Deploying ECS Applications Using Ansible and CloudFormation. We have created an Ansible playbook and CloudFormation template for our Microtrader application stack, and defined all of the supporting resources in our stack, including auto scaling groups, application load balancers, an RDS instance, and CloudWatch log groups. After an introduction to how ECS allocates system resources, including CPU, memory, and network ports, we will focus on the ECS CloudFormation resources required in our CloudFormation stack. We will focus on the EC2 Container Service components of our application, and we'll define an ECS cluster resource, which the ECS container instances in our application Auto Scaling group will join, along with ECS task definitions and ECS services for each of our Microtrader application services. We will then deploy our stack to AWS, which for the most part will be in a working state. However, as we will see, there will be an issue with the application, and we will learn how we can troubleshoot and remediate the issue.
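
The resource allocation behaviour mentioned above can be shown with a small worked example: the number of copies of a task that one container instance can host is limited by whichever of CPU or memory runs out first. The instance figures below are rough illustrative values, not exact ECS-reported capacities.

```python
def tasks_per_instance(instance_cpu, instance_memory, task_cpu, task_memory):
    """How many copies of a task fit on one ECS container instance.

    CPU is measured in ECS CPU units (1024 = one vCPU) and memory in MiB.
    The scheduler can only place a task where both reservations fit, so
    capacity is the minimum of the two per-resource limits.
    """
    return min(instance_cpu // task_cpu, instance_memory // task_memory)

# Roughly a 2-vCPU instance with ~3950 MiB available to ECS, running tasks
# that reserve 256 CPU units and 512 MiB each: CPU allows 8 copies but
# memory only 7, so the instance is memory-bound at 7 tasks.
capacity = tasks_per_instance(2048, 3950, 256, 512)
print(capacity)
```

This is why the module's introduction to resource allocation matters: undersizing reservations causes contention, while oversizing them strands capacity on your instances.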

Managing ECS Infrastructure Lifecycle
Hi, my name is Justin Menga, and welcome to Managing ECS Infrastructure Lifecycle. One challenge with operating ECS clusters is managing the lifecycle of the underlying EC2 Auto Scaling group instances that your ECS cluster runs on top of. EC2 constructs such as Auto Scaling groups have no awareness of the ECS applications running on top of them, which means if you instruct the EC2 Auto Scaling service to remove or terminate an instance from an Auto Scaling group, by default this will be performed immediately, without any consideration of the ECS cluster or ECS services running on your instances. Immediately terminating EC2 instances is clearly undesirable, as it will likely have very negative effects on your applications and, more importantly, user experience. So before we terminate an EC2 instance, we need a mechanism that will gracefully migrate running ECS services to another instance in our ECS cluster and Auto Scaling group. The good news is that AWS does offer a solution to this. In this module, we will learn how to use a feature called Auto Scaling lifecycle hooks, which provide the ability to notify you via the Simple Notification Service of an impending lifecycle event, such as the creation or termination of an EC2 instance. We will learn how we can use a Lambda function to consume this event, interact with the EC2 Container Service to gracefully drain all ECS services from an instance that is about to be terminated, and then notify the EC2 Auto Scaling service to proceed with termination of the instance.
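
The drain-then-terminate control flow can be sketched as the core of such a Lambda function. The AWS clients are passed in as parameters so the logic is visible without real AWS calls; in practice these would be boto3 ECS and Auto Scaling clients, and the `hook` dictionary would carry the lifecycle hook name, Auto Scaling group name, and instance ID from the SNS event. This is a simplified illustration of the pattern, not the course's exact implementation.

```python
def drain_and_continue(ecs, autoscaling, cluster, container_instance_arn, hook):
    """Drain an instance's ECS tasks, then tell Auto Scaling how to proceed."""
    # Set the instance to DRAINING so ECS stops placing new tasks on it and
    # begins migrating its service tasks to other instances in the cluster
    ecs.update_container_instances_state(
        cluster=cluster,
        containerInstances=[container_instance_arn],
        status="DRAINING",
    )
    # Check whether any tasks are still running on the instance
    tasks = ecs.list_tasks(
        cluster=cluster, containerInstance=container_instance_arn
    )["taskArns"]
    if tasks:
        # Tasks remain: keep the lifecycle hook alive so we can retry later
        autoscaling.record_lifecycle_action_heartbeat(**hook)
        return "WAITING"
    # The instance is empty: allow the termination to proceed
    autoscaling.complete_lifecycle_action(LifecycleActionResult="CONTINUE", **hook)
    return "CONTINUE"
```

The hook keeps the instance in a `Terminating:Wait` state until the function signals `CONTINUE`, which is what turns an abrupt termination into a graceful migration.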