Every developer is familiar with the "works on my machine" battle, where an application works great locally, but runs into issues once it's moved to a different environment. This problem always raises the same questions: What's the difference between my machine and the target environment? Was an environment variable missing? Was a different security patch applied? Did some other elusive problem cause the issue?
If you already use or are starting to explore Docker, you know it’s a powerful tool to simplify building, shipping and running apps—and ultimately eliminate the "works on my machine" issue altogether.
So how does it work? By using Docker and something called Dockerfiles, you can create custom images that can be deployed anywhere containers are able to run. The Docker images contain everything the application needs to run successfully—from environment variables and security settings to the actual version of the server and framework.
The benefits of Dockerfiles
If you're new to Dockerfiles, you can think of them as being similar to a recipe for a layered cake. You first instruct Docker to bundle all the “ingredients,” such as your code, framework, server, settings, environment variables and configuration. You then use Docker to “bake” the ingredients, and out comes an image. From there the image can be pushed to different locations, such as a local machine, on-prem server or the cloud.
Microsoft provides several ASP.NET Core images (off-the-shelf cake recipes with all the ingredients included) that can help get you started using containers for development or production. With these as your base, you can take the image they provide and build on top of it to create a custom image. Let's examine this more by first taking a look at some of the ASP.NET Core images Microsoft provides.
Accessing the Microsoft ASP.NET Core images
To get started using Microsoft ASP.NET Core images, you’ll need to pull from a registry such as Docker Hub or Microsoft’s Container Registry. As mentioned earlier, you can think of these images as a cake recipe with all of the necessary ingredients included. It will serve as the foundation for your image, and then all of the custom functionality you add goes on top of the base image.
There are two images you'll normally work with when building ASP.NET Core applications:
- mcr.microsoft.com/dotnet/core/sdk: You’ll use this for development environments, and you can run this on a machine that doesn’t have ASP.NET Core. For example, if you’re using a CI/CD system, you won’t need to update the server. You can simply pull the new SDK image and do your builds within a container.
- mcr.microsoft.com/dotnet/core/aspnet: This is your production image, made for runtime instead of builds. It’s smaller and faster, which improves cold-start performance.
Once you have Docker Desktop installed and running, you can use the following command to pull an image to your machine:
docker pull mcr.microsoft.com/dotnet/core/sdk
docker pull mcr.microsoft.com/dotnet/core/aspnet
Creating a custom Dockerfile for development
Now it’s time to build out the rest of that “recipe,” or the custom image. Once you've pulled the SDK image to your machine, you can use that as the base of your Dockerfile. A Dockerfile is a simple text file that contains instructions. It can be named "Dockerfile" (with no extension) or be given another name if desired. Here's an example of the first instruction that you'll normally see in a Dockerfile.
The FROM instruction defines the base image that will be used. Next, you'll add an author label so that anyone referencing the image in the future knows who built it.
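As a sketch of these first two instructions, assuming the SDK image as the base and a placeholder author name:

```dockerfile
# Use the .NET Core SDK image as the base for the development image
FROM mcr.microsoft.com/dotnet/core/sdk

# Record who built the image (placeholder author name)
LABEL author="Name"
```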
You can also add environment variables for the application with the ENV instruction. By default, ASP.NET Core listens on port 80 inside the container, but you can override that. In the example below, the Kestrel server that will run in the container is configured to listen on port 5000. The other environment variable simply specifies the environment, which is development in this case.
ENV ASPNETCORE_URLS=http://+:5000
ENV ASPNETCORE_ENVIRONMENT=development
Next, you'll expose port 5000 and set up a working directory. The example below uses a made-up path (any valid path will do); WORKDIR creates the folder inside the container if it doesn't already exist.
EXPOSE 5000
WORKDIR /app
The final instruction in this image gets the Kestrel server started. The bash -c flag runs the command string that follows, which restores NuGet packages and then runs the application.
CMD ["/bin/bash", "-c", "dotnet restore && dotnet run"]
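Putting the instructions above together, the complete development Dockerfile would look something like this (the author name is a placeholder, and the port and environment values are those used in the examples):

```dockerfile
# Base image with the full .NET Core SDK for development builds
FROM mcr.microsoft.com/dotnet/core/sdk
LABEL author="Name"

# Kestrel listens on port 5000; environment set to development
ENV ASPNETCORE_URLS=http://+:5000
ENV ASPNETCORE_ENVIRONMENT=development

EXPOSE 5000
WORKDIR /app

# Source code is mounted into /app at runtime via a volume,
# so restore packages and run the app when the container starts
CMD ["/bin/bash", "-c", "dotnet restore && dotnet run"]
```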
Once the Dockerfile is complete, it can be used to build the image. That is done using the docker build command. This is similar to putting the cake mix (the Dockerfile instructions and associated ingredients) into the oven.
docker build -t my-dev-image-name .
This will build the image and place it on your local machine where you can now use it to create a running container.
You may notice that there isn’t any code in this image. For now you can assume that a "pointer" is created from the running container back to the source code on your local machine using a Docker volume. Here's an example of defining a volume when starting up a container:
docker run -it -p 8080:5000 -v $(pwd):/app -w "/app" my-dev-image-name
Although a complete discussion of volumes is outside the scope of this article, a volume creates a type of "pointer" from the /app directory in the container to the directory on your machine where this command is run (for example, the directory where your ASP.NET Core application lives).
Note that the $(pwd) syntax (print working directory) only works on macOS and Linux. On Windows, the syntax varies depending on the type of command window used; see the Docker documentation on volumes for more details.
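For reference, hedged Windows equivalents of the run command above (same placeholder image name): PowerShell exposes the current directory as ${PWD}, while cmd.exe uses %cd%.

```shell
# PowerShell
docker run -it -p 8080:5000 -v ${PWD}:/app -w "/app" my-dev-image-name

# cmd.exe
docker run -it -p 8080:5000 -v %cd%:/app -w "/app" my-dev-image-name
```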
Creating a custom Dockerfile for production
Once you’re done building and running locally, you will need to create a Dockerfile for your production environment and build the image. You don’t want to use the SDK—remember, that’s only for the build and development stage, not production. Instead, you'll use the mcr.microsoft.com/dotnet/core/aspnet runtime image.
Before creating the image you'll need to publish your code in release mode. You can use the dotnet publish command to do that. You can run this command manually, through Visual Studio, or even automate it using a CI/CD server.
dotnet publish -c Release -o dist
Once you’ve run the dotnet publish command, you can build your production Docker image. Take a look at the Dockerfile below:
FROM mcr.microsoft.com/dotnet/core/aspnet
LABEL author="Name"
ENV ASPNETCORE_URLS=http://*:5000
ENV ASPNETCORE_ENVIRONMENT=production
EXPOSE 5000
WORKDIR /app
COPY ./dist .
ENTRYPOINT ["dotnet", "Your-Project-Name.dll"]
Notice some of the differences between this image and the build image. First, you're using aspnet instead of sdk as the base image since this is for production. You're also copying the code from the publish folder (dist) into the container's working directory. Finally, you're defining the .dll that will be used to run the Kestrel server.
Now, you can use Docker commands like docker build and docker push to build and push this image to a registry such as Docker Hub or to a custom one.
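As a sketch, that build-and-push sequence might look like the following (the image name and the "myaccount" Docker Hub account are placeholders):

```shell
# Build the production image from the Dockerfile in the current directory
docker build -t my-prod-image-name .

# Tag it for a registry account (placeholder Docker Hub user "myaccount")
docker tag my-prod-image-name myaccount/my-prod-image-name

# Push it to the registry
docker push myaccount/my-prod-image-name
```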
Creating multi-stage Dockerfiles
What if you want to automate the process of building your code, publishing it and creating a production Docker image? The good news is you can use images and containers for these steps by creating something called a "multi-stage Dockerfile". This type of Dockerfile provides the following benefits:
- Avoids manual creation of intermediate images
- Reduces complexity
- Selectively copies artifacts from one stage to another
- Minimizes the final image size
A multi-stage Dockerfile combines development and production instructions into a single Dockerfile.
# Stage 1: Define the base image that will be used for production
FROM mcr.microsoft.com/dotnet/core/aspnet AS base
WORKDIR /app
EXPOSE 80

# Stage 2: Build and publish the code
FROM mcr.microsoft.com/dotnet/core/sdk AS build
WORKDIR /app
COPY Angular_ASPNETCore_CustomersService.csproj .
RUN dotnet restore
COPY . .
RUN dotnet build -c Release

FROM build AS publish
RUN dotnet publish -c Release -o /publish

# Stage 3: Copy the published output into the final production image
FROM base AS final
WORKDIR /app
COPY --from=publish /publish .
ENTRYPOINT ["dotnet", "App-Name.dll"]
There are a few new things in this image we haven’t seen before. Stage 1 sets up the image that will be used for production (aliased as "base"). Stage 2 uses an sdk image (aliased as "build"), copies our project code into a working directory, restores NuGet packages, builds the code and publishes it to a directory named publish. Stage 3 copies the publish directory into the production image's working directory and defines the dotnet command to run once the container is running.
What was at first two different images are now combined into one using the multi-stage Dockerfile. The end result is a production image that can be used to run the container on your machine, on a server or in the cloud.
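As with the single-stage files, one docker build command runs all of the stages and produces only the final image. Since stage 1 exposes port 80, a run command might map a host port to it (the image name here is a placeholder):

```shell
# Build all stages; only the "final" stage ends up in the tagged image
docker build -t my-multistage-image .

# Run it, mapping host port 8080 to the container's port 80
docker run -d -p 8080:80 my-multistage-image
```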
In this article you've learned how to get started building custom ASP.NET Core Docker images that can be run as containers. To build a custom image you first start by adding instructions to a Dockerfile. Instructions are used to define the base image, environment variables, code that should be included, configuration, frameworks to use and more. Once the instructions are completed, the docker build command is used to create the image. From there, the image can be pushed to a container registry and pulled to a server to be run as a container.
In situations where you'd like to automate the process of building the code, publishing it, and creating the Docker image, multi-stage Dockerfiles can be used. They have several benefits including consolidation of multiple steps and a smaller final image size.
For more detailed instructions and tips on building ASP.NET Core containers, watch Dan's free on-demand webinar on the same topic, or watch his Docker for Web Developers course on Pluralsight.