
Multi-Container App Management Using Docker-Compose

Containerizing apps with Docker is a fast-growing practice in DevOps. Docker Compose lets you define an app as a set of cooperating containers and manage their lifecycle as a single unit.

Sep 30, 2020 • 6 Minute Read

Introduction

Containerization of apps with Docker is a fast-growing practice within the DevOps community. As apps grow more complex, managing them demands more skill and more robust tooling.

An easy management paradigm is separation of concerns. This is where the designer of a system creates logical layers that are separate but interdependent. With Docker, this is possible with the ability to separate an app into several containers.

This guide will explore multi-container apps and how you can use Docker Compose to manage the running of these containers.

It is assumed that you have at least beginner knowledge of Django apps and Docker. A guide that introduces Docker can be found here.

Docker Compose

Docker Compose is a tool for defining and running multi-container apps. To get started, Docker Compose needs to be installed. Newer Docker releases ship Compose as the docker compose plugin; the standalone Python version used in this guide can be installed by running

      pip install docker-compose
    

Docker Compose directives are written in YAML format and are hosted in a file commonly named docker-compose.yml. Common Docker Compose commands include:

  • docker-compose up --build builds the images for every service defined in the docker-compose.yml file and starts the containers in dependency order.

  • docker-compose down --volumes removes all the containers created by your Compose file as well as any named storage volumes.

  • docker-compose ps lists the containers managed by the Compose file along with their current status.

  • docker-compose stop halts running containers without removing them.

To explore more Docker Compose commands, follow this guide.

Sample Script

For this sample script, suppose you are designing a Docker Compose file for a Django microservice app that is to be deployed to production and managed using Docker.

The setup involves three containers: an app container that runs the Django app, a Postgres container that runs the database, and an NGINX container that serves the app.

There are also named storage volumes where the database data and static files are stored. Since several containers are at play, they must start up in a particular order. Docker Compose addresses this with the depends_on directive, which tells Compose which containers must be started before others can run.
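The ordering described above can be sketched as a stripped-down Compose file. The service names match the full script that follows; everything else is omitted for brevity:

```yaml
# Minimal sketch of startup ordering: db starts first, then web, then nginx.
version: '3.7'

services:
  nginx:
    build: ./nginx
    depends_on:
      - web   # nginx starts only after the web container has been started
  web:
    build: .
    depends_on:
      - db    # web starts only after the db container has been started
  db:
    image: postgres:11-alpine
```

Note that depends_on only waits for the dependency container to be started, not for the service inside it to be ready to accept connections; if the app needs the database to be fully up, that requires a healthcheck or a wait script in the container's entrypoint.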

In the sample script below, the common Docker Compose directives that have been discussed previously are put into practice.

      version: '3.7'

services:
  nginx: # service name
    build: ./nginx # location of the dockerfile to build the image for this service
    ports: # host:container - traffic to host port 1339 is forwarded to container port 80
      - 1339:80
    volumes: # define location where static files will live
      - static_volume:/home/app/microservice/static
    depends_on:
      - web # web should be up and running for nginx to start
    restart: "on-failure" # restart nginx container if it fails
  web:
    build: . # build the image for the web service from the Dockerfile in the parent directory
    # issue commands to the application in the container
    command: sh -c "python manage.py makemigrations &&
                    python manage.py migrate &&
                    python manage.py collectstatic --noinput &&
                    gunicorn django_microservice.wsgi:application --bind 0.0.0.0:${APP_PORT}"
    volumes:
      - .:/microservice:rw # map data and files from the parent directory on the host to the microservice directory in the Docker container
      - static_volume:/home/app/microservice/static
    env_file: # set the location and name of the env file to use when building the containers
      - .env
    image: django_microservice # image name

    expose:
      - ${APP_PORT} # internally expose the given port to other containers within the docker network
    restart: "on-failure"
    depends_on:
      - db # web will only start if db is up and running
  db: # service name
    image: postgres:11-alpine # base image from dockerhub
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      - postgres_data:/var/lib/postgresql/data/ # define where the postgres data will live within the postgres container
    environment: # environment variables; the ${...} values are substituted from the .env file
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${DB_NAME}
      - PGPORT=${DB_PORT}
      - POSTGRES_USER=${POSTGRES_USER}
    restart: "on-failure" # restart db service if it fails


volumes:
  postgres_data: # named volume for data held by the Postgres db
  static_volume: # named volume for static files such as CSS files and images
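The script above references several variables (APP_PORT, POSTGRES_PASSWORD, DB_NAME, DB_PORT, POSTGRES_USER) through the env_file directive and ${...} substitution. A hypothetical .env file might look like this; every value below is illustrative, not prescriptive:

```
# .env -- sample values only; use real secrets in production
APP_PORT=8000
POSTGRES_USER=postgres
POSTGRES_PASSWORD=change-me
DB_NAME=microservice_db
DB_PORT=5432
```

Since this file holds credentials, it should be kept out of version control.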
    

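The nginx service is built from a Dockerfile in ./nginx, which this guide does not show. As an illustration only, a minimal reverse-proxy configuration consistent with the paths in the Compose file might look like the following; the upstream port assumes APP_PORT=8000, so adjust it to match your .env:

```
# nginx/default.conf -- hypothetical configuration, not part of the original script
upstream django {
    server web:8000;  # "web" is the Compose service name, resolvable on the Compose network
}

server {
    listen 80;  # container port 80, published on the host as 1339

    location / {
        proxy_pass http://django;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /static/ {
        alias /home/app/microservice/static/;  # path where static_volume is mounted
    }
}
```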
Conclusion

The skills covered in this guide are valuable for DevOps and full-stack developer roles in any organization. The next step up from managing a single multi-container app within the Docker ecosystem is multi-app, or microservice, management with technologies like Docker Swarm and Kubernetes.

These are referred to as container orchestration tools. They are used to manage the lifecycle and running of large apps designed as several individual containerized components referred to as microservices. To learn more about how container orchestration tools are used to manage microservice-based apps, follow this guide.