How to: Production Deployment With Docker

By Pluralsight    |    January 16, 2015

Docker is a great tool for developing applications. For software teams, it’s much easier to build an app without having to ensure each engineer’s computer is configured properly. Docker runs the same whether they’re using Mac, Linux, or Windows.

The thing Docker is still a bit shaky on, at least from a Ruby on Rails perspective, is deploying that application to production. After searching for and testing different deployment methods and Docker images, I found that no single best practice stands out. This post will show you the best way I've found to deploy a Rails app to production.

Criteria

Before getting into how to do it, let's discuss the criteria a Docker deployment will need to meet compared with a regular deployment pipeline, like Capistrano.

  • Ease of Use : Deployment should be easy, otherwise it will be a hindrance and deploying new code will be a scary task.
  • Zero Downtime : Let’s face it — deploying Ruby on Rails applications without causing downtime for your application should be the de facto standard in this day and age.
  • Automated Deployment : I love being able to push code to my remote repository, have something like Codeship run the tests, and then automatically deploy the code to my production server. I want the same for Docker deployment.

The Process

As I’ve said previously, I want this to be as easy as possible. If you’ve watched the Docker: Part 4 screencast on Code TV, you can see the commands I had to run to start the containers and link them together:
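
    # Illustrative only -- image and container names are placeholders
    docker run -d --name db postgres
    docker run -d --name web -p 80:80 --link db:db app_web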

If you’re like me, running these commands a lot and making sure you don’t forget a flag or option seems like a nightmare. This is where Fig comes in.

FIG

If your Dockerfile describes how to build your individual containers, Fig provides a way to specify your entire container infrastructure. With Fig, you use a single YAML file to add volumes, link containers, and open ports. Here’s an example fig.yml for the Code TV Journal application:
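
    # A minimal sketch -- the real file may also set volumes, environment, etc.
    web:
      build: .         # build the web image from the Dockerfile in this directory
      ports:
        - "80:80"      # expose port 80
      links:
        - db           # link to the db container
    db:
      image: postgres  # official PostgreSQL image from Docker Hub
      expose:
        - "5432"       # expose 5432 to linked containers only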

I've basically specified two containers: web and db. The web container builds from the Dockerfile in the current directory, exposes port 80, and then links to the db container. The db container uses the PostgreSQL image from Docker Hub and exposes port 5432 to the other linked containers. With this configuration in place, we can build the containers and then start them with the following commands:
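
    # Build the images defined in fig.yml, then start everything in the background
    fig build
    fig up -d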

Fig will actually start up the linked db container first so the web container is not running without the database connection. The -d flag tells Fig to run in the background so we can log off while the containers are still up. Make sure to check out the Fig site for documentation and configuration options.

DEPLOYMENT

We can now easily start our Docker containers, but how will it work on a production server? Assuming Docker and Fig are both installed, all we’d need to do is clone our remote repository and run the previous fig commands to bring up our containers. The problem we now have is how to pull in changes to our codebase.

Unfortunately, while Fig is great at starting containers, it isn’t so great at restarting them. While it’s definitely possible to pull the remote changes and then re-run the fig commands, there will be no containers available to serve requests while they are being recreated. For this, we’re going to actually use the docker commands directly and then balance requests using Nginx.

We first need to change the ports that are exposed for the web container so it isn’t running on port 80, as that’s where Nginx will listen. Let’s change it to the following:
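
    web:
      build: .
      ports:
        - "8080:80"    # host port 8080 now maps to the container's port 80
      links:
        - db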

Now, when we start Fig for the first time, our web container will handle requests to port 8080. For our Nginx configuration, we’re going to balance between ports 8080 and 8081. This is what it will look like in the default site configuration for Nginx:
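
    # A sketch of the default site config; the upstream name is arbitrary,
    # and max_fails/fail_timeout can be tuned to taste
    upstream docker_web {
      server localhost:8080 max_fails=1 fail_timeout=15s;
      server localhost:8081 max_fails=1 fail_timeout=15s;
    }

    server {
      listen 80;

      location / {
        proxy_pass http://docker_web;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
      }
    }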

After reloading Nginx, it will start balancing requests between 8080 and 8081. If one of them is not available, then it will be marked as failed and won’t send requests to it until it’s back up. We can now pull our remote changes with Git.
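From there, we build an updated image and bring up a second web container on port 8081 for Nginx to pick up. A sketch, assuming Fig's default naming with a project called "app" (so Fig's database container is app_db_1):

    git pull origin master
    docker build -t app_web .

    # Second web container on host port 8081, linked to the Fig-managed database
    docker run -d --name web2 -p 8081:80 --link app_db_1:db app_web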

After we're confident the new container is serving requests, we can stop the old one. I'd recommend using the docker command directly rather than Fig here, so the running database container is left alone:
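
    # Placeholder name: Fig would have called the original container <project>_web_1
    docker stop app_web_1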

Note that we can start any number of containers this way, as long as each gets a unique name and host port and the Nginx configuration is updated to match.

AUTOMATION

Now, how can we automate this process? For one, the docker commands could probably be abstracted into a simple script that starts a new container and then stops the old one. That can be fed into your deployment pipeline after the tests are run. Another option would be to set up automatic service discovery using something like Consul or etcd, though that’s a bit more advanced.
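As a rough sketch of the first option, something like the following script could work. All names and ports are placeholders carried over from the examples above; this is a starting point, not a hardened tool:

    #!/usr/bin/env bash
    # deploy.sh -- rough zero-downtime deploy sketch (placeholder names/ports)
    set -e

    NEW_PORT=${1:-8081}   # host port for the replacement container
    OLD_NAME=${2:-}       # name of the container to retire, if any

    git pull origin master
    docker build -t app_web .

    # Start the replacement container on the alternate port
    docker run -d --name "web_$NEW_PORT" -p "$NEW_PORT:80" --link app_db_1:db app_web

    # Give the app server a moment to boot, then check it responds
    sleep 10
    curl -fs "http://localhost:$NEW_PORT/" > /dev/null

    # Retire the old container; Nginx keeps routing to the healthy one
    if [ -n "$OLD_NAME" ]; then
      docker stop "$OLD_NAME"
    fi

On the next deploy you'd flip to the other port, e.g. ./deploy.sh 8080 web_8081.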

As you can see, deploying to production with Docker is not the easiest thing in the world. I want to encourage you to try this process and see if you can help the community. Even just writing about your experiences so others can benefit is a huge help. Docker is still a pretty young product, and this subject is definitely something that can and will be improved. Let us know your thoughts and preferred deployment methods in the comments section below!

About the author

Pluralsight is the technology skills platform. We enable individuals and teams to grow their skills, accelerate their careers and create the future.