Building an ASP.NET Core Application with Docker [DevOps Tutorial]

Rafael Carvalho
Senior Developer

Applications are being built, shipped and updated at an increasingly fast pace. This trend has generated interest in solutions that will help facilitate this complex process. The result is a flood of new methodologies and tools for DevOps engineers. In this article, I will focus on one of these tools: Docker. More specifically, Docker on Windows, along with a sample application in ASP.NET.

What is DevOps?

The AWS site describes DevOps as “the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity.”

In other words, DevOps is about merging the Development and Operations silos into one team, so engineers work across the entire application lifecycle.

What is Docker for Windows?

Docker performs operating system level virtualization, a process often referred to as containerization (hence the term “Docker containers”). It was initially developed for Linux, but it is now fully supported on macOS and Windows, as well as all major cloud service providers (including AWS, Azure and Google Cloud). Containers help you package an application, and all of the software required to run it, into a single container, which you can then run from a developer’s local development environment all the way to production.

Docker has become one of the darlings of the DevOps community because it enables true independence between applications, environments, infrastructure, and developers.

Docker Containers vs Virtual Machines

You may ask yourself: “If containers are just another virtualization strategy, why should I consider them if I am already using some kind of virtual machine?”

There are a few outstanding benefits to using containers instead of virtual machines. But before we talk about this, I think it’s important for us to understand the main differences between these virtualization types.

Docker containers vs virtual machines characteristics

Virtual machines (VMs) are an abstraction of physical hardware, turning one server (hardware) into many servers (virtualized). The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries, and libraries. VMs are usually slower to boot compared to the same OS installed on a bare-metal server.

Containers are an abstraction at the application layer that packages code and dependencies together. Multiple containers can run on the same machine and operating system, sharing the kernel with other containers, each running as isolated processes in the user space.

Key Benefits of Docker Containers

Composing Containers

You might have applications that are composed of different technology stacks, such as in a microservices architecture. You might also have cases in which you need to use a combination of Linux and Windows containers.

This is where Docker Compose comes in. It helps you create multiple isolated environments on a single host – very handy for development, where you can have all application dependencies running together as different containers on the same machine. When you re-run it, Compose only recreates the containers that have changed, helping to speed up development time.

You can also configure the order of containers, and their respective dependencies, so they are bootstrapped in the correct order.

Hands On

Now that we understand the concept and ideas behind containers, it’s time to get our hands dirty. I’ll guide you, step by step, through the process of setting up a container environment.

Not a Windows developer? Don’t worry!

I will be demoing all of this in a Windows environment, using an ASP.NET Core application and a Redis container. Don’t worry though, many of the concepts I will go through in this article apply equally to non-Windows developers, such as “must know” Docker commands, running your first container, composing containers and Orchestration tools. The examples serve to illustrate concepts that are valid across platforms.

How to Install Docker on Windows 10

Before we can continue any further, you will need to install Docker. The installation is simple. If you’re on Windows 10, like me, you will need the Professional (or Enterprise) edition, as Docker requires Hyper-V, which is not available on Windows 10 Home.

Getting Started with Docker on Windows

Open a terminal and execute docker to verify that everything was installed correctly. If it was, you should see the list of commands below:

Screenshot of terminal for docker installation in Windows

Docker Pillars

OK, we’re on our way. Before we get too deep, it’s important to go over the big picture: what I refer to as the Docker Pillars. Knowing these will help you understand how everything is interconnected.

Pillar 1: Daemon

The Daemon can be considered the brain of the whole operation. It is responsible for managing the lifecycle of containers and for interacting with the operating system, and it does all of the heavy lifting every time a command is executed.

Pillar 2: Client

The Client is the command-line interface you interact with. It is essentially an HTTP API wrapper: it exposes a set of commands, each of which is translated into an API call that the Daemon interprets and executes.
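
To make the client/daemon split concrete, here is a sketch of talking to the Daemon’s REST API directly, bypassing the client. The Unix socket path applies to Linux/macOS; on Windows the Daemon listens on a named pipe (npipe:////./pipe/docker_engine) instead, and this assumes a running Docker daemon:

```shell
# Ask the Docker daemon for its version via the Engine API directly
curl --unix-socket /var/run/docker.sock http://localhost/version

# The equivalent client command, which makes the same API call for you
docker version
```

Every docker command you run in this article goes through this same client-to-daemon API under the hood.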

Pillar 3: Registries

Registries are responsible for storing images. They can be public or private, and are available from different providers (Azure, for example, has its own container registry). By default, Docker is configured to look for images on Docker Hub.

Must-Know Commands for Windows

Any interaction with Docker is done via the command line, and there is a whole subset of commands and subcommands to manage the daemon and hub.

Below is a list of some of the key commands you need to know to get started.

a table showing the most important docker commands to know
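
As a plain-text companion to the table, here is my own summary of the essential commands and what each one does (run docker --help for the full list):

```shell
docker pull <image>       # download an image from a registry
docker images             # list images available locally
docker run <image>        # create and start a container from an image
docker ps                 # list running containers (add -a to include stopped ones)
docker stop <container>   # stop a running container
docker rm <container>     # remove a stopped container
docker rmi <image>        # remove a local image
docker logs <container>   # show a container's output
docker exec -it <container> <cmd>   # run a command inside a running container
```

We will use most of these in the next section.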

Your First Docker Container on Windows

Let’s start by running a simple container. Open a new command prompt/terminal window and execute the following commands in order:

docker pull hello-world
docker images
docker run hello-world

You should get a “Hello from Docker!” message after running this.

docker ps -a
docker rm [container-id]   (use the container ID shown in the output of the previous command)
docker rmi hello-world

Let’s see what we’ve accomplished with the above: we downloaded the hello-world image from Docker Hub (docker pull), listed the images available locally (docker images), created and ran a container from that image (docker run), listed all containers including stopped ones (docker ps -a), and finally cleaned up by removing the stopped container (docker rm) and the image itself (docker rmi).

Let’s make things a bit more interesting by setting up an ASP.NET Core project with Visual Studio, Redis Cache, Docker and Docker Compose.

Again, if you are not a Microsoft developer it doesn’t matter as we will be going through concepts that are broadly applicable to all software development.

Visual Studio Integration

I want to take you through a simple ASP.NET Core project running on Docker, so we can have a look at the concept of a ‘Dockerfile’.

Start by opening a new instance, and creating a new ASP.NET Core Web Application project.

Screenshot of Visual Studio ASP.NET Core Web Application project creation

After that, you will be prompted to choose the type of template you want to use to create your new application. Choose ‘Web Application (Model-View-Controller)’ and make sure the ‘Enable Docker Support’ option is checked.

Screenshot of MVC template selection for Docker project

When you choose ‘Enable Docker Support’, Visual Studio will create your new Web Application project with a Dockerfile. The Dockerfile is a declarative file that tells Docker how to build and run customized images. You will also notice that a command prompt/terminal will open up and a ‘docker pull’ will be executed to pull the microsoft/aspnetcore image. This is the base image that will be used in our new application and in our Dockerfile, which we will inspect in detail later on in this article.

A docker-compose.yml file will also be generated as part of the bootstrapping of our new solution. While this is not strictly necessary at first, please don’t remove it: Visual Studio uses this file to debug your application. We will be running the project as a single container instance, but later in this article we will configure Docker Compose to use a Redis container image as part of our solution.

If you look at the top bar of your Visual Studio instance, you will notice that the Debug button is displaying a ‘Docker’ label. Hit that button and your project will run in debug mode on top of a new container.

After a few seconds, a new browser tab should open up and you should be seeing something similar to this:

Screenshot of a Docker Demo in a browser tab

Open a new command prompt/terminal and execute docker ps. You should then see something similar to:

Screenshot of terminal running docker ps command

At this point, we have Visual Studio debugging an ASP.NET Core application inside a container. Debugging works just as you would expect when you run a project using IIS Express. This shows why Docker is gaining so much traction, and why big companies like Microsoft are investing heavily in supporting it.

If you execute ‘docker images’ in your command prompt/terminal, you will see that our new image was built and is ready to run on any Docker daemon. At this point, you could even push this image to a container registry.
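
Pushing to a registry is a matter of tagging the image with the registry’s address and then uploading it. As a sketch (the registry URL and image name here are hypothetical – substitute your own):

```shell
# Tag the local image for a private registry
docker tag dockerdemo myregistry.azurecr.io/dockerdemo:1.0

# Authenticate against the registry, then upload the image
docker login myregistry.azurecr.io
docker push myregistry.azurecr.io/dockerdemo:1.0
```

Once pushed, any machine with access to the registry can pull and run the exact same image.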

What Is a ‘dockerfile’?

If you inspect our new ASP.NET Core application project, you will see in the root of its directory a file named ‘Dockerfile’, without any extension. Despite appearances, this is not a YAML file; it is a declarative file in Docker’s own format that instructs the Docker daemon how to build a custom image, starting from a base image and followed by a set of instructions that we can configure.

Screenshot of a Dockerfile

Let’s inspect what a few of these instructions are doing:
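
The exact contents depend on your Visual Studio version, but the generated file looks roughly like this (a sketch based on the aspnetcore base images of that era; the project name DockerDemo is assumed and yours will differ):

```dockerfile
# Start from Microsoft's ASP.NET Core runtime base image
FROM microsoft/aspnetcore:2.0

# Build argument pointing at the published output (set by Visual Studio)
ARG source

# Set the working directory inside the container
WORKDIR /app

# Expose the port the application listens on
EXPOSE 80

# Copy the published application into the image
COPY ${source:-obj/Docker/publish} .

# Command executed when a container is started from this image
ENTRYPOINT ["dotnet", "DockerDemo.dll"]
```

Each instruction adds a layer to the image, and Docker caches layers so that unchanged steps are not rebuilt.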

Composing a Container Dependency to Our Application

Our application might be composed of several containers, and these containers might depend on other containers. To demonstrate how Docker Compose can help in such situations, let’s add a Redis container to our Docker Compose file and indicate that our ASP.NET Core application depends on it.

Here is my updated docker-compose.yml file:

Screenshot of Docker Compose file

By adding a new service, which I’ve named ‘redis’, we have instructed the docker-compose command to run both containers. The ‘depends_on’ property ensures that the dockerdemo container will only be started after our redis container is ready.
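
For reference, the updated file looks roughly like this (a sketch; the dockerdemo service and paths match my project, so yours will differ):

```yaml
version: '3'

services:
  dockerdemo:
    image: dockerdemo
    build:
      context: .
      dockerfile: DockerDemo/Dockerfile
    depends_on:
      - redis          # start the redis container before this one

  redis:
    image: redis       # official Redis image, pulled from Docker Hub
```

Note that the redis service needs no build section: it runs the official image as-is.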

If you run the project again from within Visual Studio and then execute ‘docker ps’, you will notice that there are now 2 running containers. The first run might take a while, as the redis image needs to be pulled from the registry.

Screenshot of terminal running docker ps command with docker containers

At this point, the containers can send network traffic among themselves, and a Redis connection pointing to ‘redis’ host would work just fine.
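At this point, the containers can send network traffic among themselves, and a Redis connection pointing to ‘redis’ host would work just fine.

As a sketch of that from the application side, using the popular StackExchange.Redis client package (the ‘redis’ host name is the compose service name defined above):

```csharp
using StackExchange.Redis;

// 'redis' resolves to the redis container on the compose network
var connection = ConnectionMultiplexer.Connect("redis:6379");
var db = connection.GetDatabase();

// Round-trip a value through the cache
db.StringSet("greeting", "Hello from Redis!");
var value = db.StringGet("greeting");
```

No IP addresses are hard-coded: Docker’s internal DNS resolves service names to the right container.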

You can still use Redis from a cloud provider for your production environment, and tweak which dependencies are composed for each environment you run your ‘docker-compose’ command in by using the ‘docker-compose.override.yml’ file. In addition, you could run a SQL Express container for your development environment to speed up the onboarding of new developers to your team.
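
As a sketch of that idea, a ‘docker-compose.override.yml’ for local development might look like this (service names and ports are assumptions matching the example above):

```yaml
# docker-compose.override.yml – applied automatically on top of
# docker-compose.yml when you run 'docker-compose up' locally
version: '3'

services:
  dockerdemo:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "8080:80"      # expose the app on localhost:8080 for development

  redis:
    ports:
      - "6379:6379"    # make redis reachable from the host for debugging
```

In production you would supply a different override (or none at all), keeping the base compose file environment-agnostic.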

So, is it necessary to use Docker Compose whenever we use Docker? Nope. But I believe it is important for you to know what it is and what you can achieve with it, whether you are putting together the parts that make up your application development environment, such as cache and database dependencies, or architecting a microservices solution composed of several smaller applications that work in conjunction.

Docker Compose is on your side whenever you need to build up independent containers that will interact with each other.

Orchestration Tools

In the next article in this series, we will be looking at Docker Orchestrators, which allow us to automate the deployment, scaling, and management of containerized applications. There are a few different options out there.

Want to get a head start? Here is a list of the most popular Docker Orchestrators:

The five most popular Docker orchestrators are Docker Swarm, Kubernetes, Amazon ECS, Azure Container Service, and Google Container Engine.


So there you go. Docker is a fast and consistent way to accelerate and automate the shipping of software. It saves developers from having to set up and configure multiple development environments each time they test or deploy code. That time can then be spent developing quality software instead. Like all great solutions, it is simple and intuitive.

Assuming I did a good job, this article should have gotten you interested in Docker while guiding you through those first steps. I would love to hear your thoughts on both the article and Docker itself. For example: how do you feel it is evolving and being used by the community?

Originally published on Apr 24, 2018. Last updated on Feb 6, 2023.
