Docker: 3 things you need to know to get started

There’s a really common problem every developer and DevOps engineer runs into when preparing to deploy an application: How do you ensure it runs the same way across different environments? We all know that if something works in development and staging, that doesn’t guarantee it will work in production.

In a nutshell, that’s what Docker solves. Using a container system, Docker enables you to separate your application from the infrastructure you built it on, making applications more portable and reducing environmental issues with each deployment. It also gives you the ability to replicate and scale applications across multiple servers easily. And there’s more. With Docker, you can:

  • Accelerate developer onboarding: Imagine setting up a developer with a complex system in a matter of minutes by pulling down images instead of installing all the software directly on their machine.

  • Eliminate app conflicts: Run different versions of an app on the same VM, each in its own container. Multiple versions exist side by side without friction, and it’s easy to stop and start them with Docker commands.

  • Ship software faster: Ship more than just code. With Docker, you’re running containers across different environments. The entire application ships, including its environment, ensuring it runs consistently everywhere.

Sounds great, right? Let’s dive into what you need to know to get started.

Working with Docker Desktop and Docker Hub

Once you’ve downloaded Docker Desktop, you can run Docker commands from your terminal. There’s a long list of commands, and you can see them all by running the command docker. If you’re just getting started, though, most of those commands won’t have anything to act on yet: before they’re useful, you first need a Docker image on your machine.

A Docker image is essentially a layered filesystem that can contain things like servers, APIs, code and even databases. A container is a running instance of an image. You can create an image from an existing container, build one with a Dockerfile, or pull one from the Docker Hub registry.

Let’s say you need Nginx for your application. To pull it from Docker Hub, where you can find pre-built images with commonly used configurations and software packages, search for Nginx, click its name, and you’ll see a command listed on the right-hand side reading docker pull nginx. Run that command in your terminal (ideally with an explicit version tag, so you know exactly what you’re getting), and boom, you have the Nginx image on your machine.

Now, Nginx isn’t running yet. All you’ve done is pull an image. Running it requires another command and a little more information: which port on your system you want to expose, and which port the app inside the container listens on. The official Nginx image listens on port 80, and you can map that to whichever port you like on your machine. Run the command docker run -p 8080:80 nginx. (8080 is just the port picked for this example; any free port will work.) After hitting enter, go to localhost:8080 in your browser, and you’ll see Nginx up and running.
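The pull-and-run sequence above can be sketched as a short terminal session (this assumes Docker Desktop is already running; curl is just one way to check the result):

```shell
# Pull the official Nginx image from Docker Hub
# (you can pin an explicit version tag instead of the default "latest")
docker pull nginx

# Run it detached (-d), mapping host port 8080 to the container's port 80
docker run -d -p 8080:80 nginx

# Verify it's serving: this prints the Nginx welcome page HTML
curl http://localhost:8080
```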

You can see your running containers using the command docker ps (add the -a flag to include stopped ones as well). From this view, you can see that Docker has assigned your container an alphanumeric ID and a randomly generated name, either of which you can use to stop the container with the command docker stop [container name] or remove it with the command docker rm [container name]. (The similar-looking docker rmi removes images, not containers.)
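Put together, the container-management commands look like this (my-container is a placeholder for whatever name or ID docker ps reports, not a real name):

```shell
# List running containers; -a also shows stopped ones
docker ps
docker ps -a

# Stop and then remove a container by name or ID
# ("my-container" is a placeholder for the name docker ps reports)
docker stop my-container
docker rm my-container

# Note: docker rmi removes an *image*, not a container
docker rmi nginx
```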

Using custom code in Docker

Docker Hub images provide a good starting point for your configuration, but you’ll usually need to customize them for your needs. To do that, you’ll create your own image from a Dockerfile, which brings together a base image and your own code, using a command called docker build. Let’s take a really simple example using Nginx and an index.html file.


Using Docker Desktop, and the Dockerfile functionality, create a file using these parts:

FROM [baseImage]

LABEL [author="name"]

COPY [your code into destination folder]


Now, plug in some real code. It would look like this:

FROM nginx

LABEL author="Dan Wahlin"

COPY index.html /usr/share/nginx/html


In this Dockerfile, you’re specifying that you want to copy index.html to the Nginx public directory. The directory location specified here is the default Nginx content directory, as defined by the Docker Hub Nginx image.

Now, you need to build and run the image. To build, run the command docker build -t my-nginx . The -t flag is how you give your image a name, in this case my-nginx. And the "." placed after that is the build context: the directory where Docker looks for your Dockerfile and the files it copies.

Running your custom image is as simple as running the original Nginx image. Continuing this example, the command would look like docker run -p 8080:80 my-nginx.
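Assuming your Dockerfile and index.html sit in the current directory, the full build-and-run loop looks like this:

```shell
# Build an image named my-nginx; "." is the build context
docker build -t my-nginx .

# Run it exactly like the stock image, mapping port 8080 to 80
docker run -d -p 8080:80 my-nginx

# Your custom index.html should now be served here
curl http://localhost:8080
```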

It really is that easy to throw together a combination that's configured exactly like your production server, and with Docker, it’s also easy to share. Feel free to pass it along to others on your team, like a tester or another developer.

Stepping up your game with Docker Compose

While commands like docker pull, docker run and docker build should be enough to get you started, you should know the next step in working with Docker: Docker Compose. Using a YAML file, Docker Compose lets you define and run a multi-container application. So, for example, you might combine Nginx, MongoDB, Node and some of your local code, each in its own container. You typically wouldn’t use Compose to create actual production containers, but it can be useful for testing your application across different environments.
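As a sketch, a docker-compose.yml for the kind of stack mentioned above might look like this; the service names, ports and the ./api path are illustrative assumptions, not details from the article:

```yaml
services:
  web:                       # Nginx in its own container
    image: nginx
    ports:
      - "8080:80"
  api:                       # Node app built from local code
    build: ./api             # assumes a Dockerfile in ./api
    depends_on:
      - db
  db:                        # MongoDB with a named volume for its data
    image: mongo
    volumes:
      - mongo-data:/data/db

volumes:
  mongo-data:
```

With a file like this in place, docker compose up starts the whole stack with one command, and docker compose down tears it down.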

There you have it: The first three things you need to know to start diving into Docker. Dive in and get acquainted. You’ll soon realize just how much Docker can improve your software development and application deployment.

For a deeper dive into getting started with Docker, watch the on-demand webinar here.