What Is Docker for Linux and How to Get Started

Jakub Musko
Senior Software Engineer

Applications are being built, shipped and updated at an increasingly fast pace. It’s a trend that has generated interest in solutions to facilitate this complex process. The result has been a flood of new methodologies and tools into the DevOps space. In this article, I will focus on two of these tools: Docker and Docker Compose. More specifically, using them on Linux to build an API in Flask.

If you prefer working in the Windows environment, we’ve got you covered. Check out our Docker for Windows version of this article.

What is DevOps?

The AWS site describes DevOps as “the combination of cultural philosophies, practices, and tools that increases an organization’s ability to deliver applications and services at high velocity”.

In other words, DevOps is about merging the Development and Operations silos into one team, so engineers work across the entire application lifecycle. If you build it, you own it and you’re responsible for making sure your application works as expected in all environments.

What is Docker for Linux?

Docker performs operating system level virtualization (a process often referred to as containerization; hence the term Docker Containers). It was initially developed for Linux, but it is now fully supported on macOS and Windows, as well as all major cloud service providers (including AWS, Azure, Google Cloud). With Docker, you can package your application, and all of the software required to run it, into a single container. You can then run that container in your local development environment all the way to production.

Docker has become one of the darlings of the DevOps community because it enables true independence between applications, environments, infrastructure, and developers.

Not a Linux Developer? Don’t Worry!

I will be demoing all of this on a Linux environment, using a Python Flask application and a PostgreSQL container. Don’t worry though, many of the concepts I will go through in this article apply equally to development across all platforms. You can also switch to the Windows version of this article by clicking here.

Requirements

Before we go any further, you will need to install Docker and Docker Compose. The installation is simple, and step-by-step instructions for your platform can be found on the official Docker website. Once the Docker installation is complete, it's time to install Docker Compose. The process is even simpler than for Docker, and the official instructions are available here.

Let’s Go!

To verify the installation has been completed successfully, run the following commands in the terminal:

$ docker --version
$ docker-compose --version

If everything has been set up correctly, the commands will return the versions of the tools installed (the versions in your environment might differ slightly):

jakub:~/dev/docker-demo$ docker --version
Docker version 17.12.1-ce, build 7390fc6
jakub:~/dev/docker-demo$ docker-compose --version
docker-compose version 1.21.2, build a133471

Docker Internals

OK, we’re on our way! Before we get too deep, it’s useful to know a few Docker terms. Knowing these will help you understand how everything is interconnected.

Daemon

The Daemon can be considered the brain of the whole operation. It is responsible for managing the lifecycle of containers and interacting with the operating system, and it does all of the heavy lifting every time a command is executed.

Client

The Client is an HTTP API wrapper that exposes a set of commands interpreted by the Daemon.

Registries

The Registries are responsible for storing images. They can be public, or private, and are available with different providers (Azure has its own container registry). Docker is configured to look for images on Docker Hub by default. To see how they interact with each other, let’s run our first image:

$ docker run hello-world
jakub:~/dev/docker-demo$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9bb5a5d4561a: Pull complete
Digest: sha256:f5233545e43561214ca4891fd1157e1c3c563316ed8e237750d59bde73361e77
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

The above list explains what happened behind the scenes: the client (the docker command-line tool) contacted the daemon to run the hello-world image. Since the image wasn't available locally, it had to be downloaded from the registry (Docker Hub is the default). The daemon then created a container from the image and ran it, streaming the generated output to the client so it appeared in your terminal.
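To make the client/daemon split concrete, here is a minimal Python sketch of what the docker command-line tool does under the hood: it sends plain HTTP requests to the daemon's REST API over a Unix socket. The socket path /var/run/docker.sock is the common default rather than a guarantee on every system, and this is an illustration, not a replacement for the real client.

```python
import socket


def build_request(path, host="localhost"):
    """Build a raw HTTP/1.1 GET request for a Docker Engine API path."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")


def query_daemon(path, socket_path="/var/run/docker.sock"):
    """Send the request over the daemon's Unix socket, return the raw reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(socket_path)
        sock.sendall(build_request(path))
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks)
```

With the daemon running, query_daemon("/version") returns the raw HTTP reply, starting with a status line such as HTTP/1.1 200 OK, followed by the same version information the CLI renders for you.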

If you’re experiencing problems with the example above, e.g. you’re required to run the Docker client using sudo, there are a few post-installation steps, described here, that you might want to go through. In short, you’ll need to add your current user to the docker group, removing the need for elevated privileges. This can be done with the following commands:

sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker

The Docker Client

Now that you know how to run a docker image, let’s look at the basics of image management. You can see what images are currently downloaded using docker images.

jakub:~/dev/docker-demo$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED       SIZE
hello-world   latest   e38bc07ac18e   6 weeks ago   1.85kB

Right now, we only have the hello-world image we downloaded in the previous step. Let's download a Linux image and use it to execute custom commands. The image we're going to use is Alpine, a lightweight Docker image based on Alpine Linux. We'll use docker pull to explicitly download it from the image registry:

docker pull alpine
docker images
jakub:~/dev/docker-demo$ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
ff3a5c916c92: Pull complete
Digest: sha256:7df6db5aa61ae9480f52f0b3a06a140ab98d427f86d8d5de0bedab9b8df6b1c0
Status: Downloaded newer image for alpine:latest
jakub:~/dev/docker-demo$ docker images
REPOSITORY    TAG      IMAGE ID       CREATED        SIZE
hello-world   latest   e38bc07ac18e   6 weeks ago    1.85kB
alpine        latest   3fd9065eaf02   4 months ago   4.15MB

We now have two images at our disposal. Let’s run a command using the new image:

docker run alpine cat /etc/os-release
jakub:~/dev/docker-demo$ docker run alpine cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.7.0
PRETTY_NAME="Alpine Linux v3.7"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"

By printing the contents of the /etc/os-release file on the container's filesystem, we can see which version of Alpine it is running. docker run creates a new container and runs the command inside it until completion. If you want to run an interactive command inside the container, you'll need to pass the -i -t flags to the run command.

docker run -it alpine sh
jakub:~/dev/docker-demo$ docker run -i -t alpine sh
/ # env
HOSTNAME=7c56653a2e05
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
jakub:~/dev/docker-demo$

We now have an interactive shell inside the Docker container. This means that the container is running until we exit the shell. You can verify that by opening another terminal window and running docker ps in it.

CONTAINER ID   IMAGE    COMMAND   CREATED         STATUS         PORTS   NAMES
4cba0288c612   alpine   "sh"      5 seconds ago   Up 4 seconds           kind_nobel

You can confirm that it's the same container by comparing the CONTAINER ID against the HOSTNAME from the previous command. You can use that ID to connect to the already running container, and you will see that changes made in one shell session are visible in the other.

docker exec -it container_id sh
jakub:~/dev/docker-demo$ docker run -i -t alpine sh
/ # ls home/
/ # mkdir home/hello
/ # ls home/
hello
/ #

jakub:~/dev/docker-demo$ docker ps
CONTAINER ID   IMAGE    COMMAND   CREATED         STATUS         PORTS   NAMES
4cba0288c612   alpine   "sh"      6 minutes ago   Up 6 minutes           kind_nobel
jakub:~/dev/docker-demo$ docker exec -it 4cba0288c612 sh
/ # ls home/
hello

The changes will not persist between different runs of the same container though.

jakub:~/dev/docker-demo$ docker run -it alpine sh
/ # ls home/
/ # mkdir home/hello
/ # ls home/
hello
/ # exit
jakub:~/dev/docker-demo$ docker run -it alpine sh
/ # ls home/
/ # exit

You can list running containers with docker container ls. Adding the -a flag shows all containers, including previously run ones. To remove the stopped ones, use docker rm.

docker container ls -a
docker rm container_id/container_name
jakub:~/dev/docker-demo$ docker container ls -a
CONTAINER ID   IMAGE         COMMAND                 CREATED              STATUS                          PORTS   NAMES
fb7d54d3259c   alpine        "sh"                    About a minute ago   Exited (0) About a minute ago           dreamy_wozniak
8a25c328e100   alpine        "sh"                    About a minute ago   Exited (0) About a minute ago           boring_neumann
4cba0288c612   alpine        "sh"                    17 minutes ago       Exited (0) 3 seconds ago                kind_nobel
7c56653a2e05   alpine        "sh"                    19 minutes ago       Exited (0) 19 minutes ago               determined_bohr
31e357a13333   alpine        "cat /etc/os-release"   20 minutes ago       Exited (0) 20 minutes ago               admiring_mayer
339e5694c55d   hello-world   "/hello"                27 minutes ago       Exited (0) 27 minutes ago               amazing_panini
jakub:~/dev/docker-demo$ docker container rm fb7d54d3259c
fb7d54d3259c
jakub:~/dev/docker-demo$ docker container rm boring_neumann
boring_neumann
jakub:~/dev/docker-demo$ docker container ls -a
CONTAINER ID   IMAGE         COMMAND                 CREATED          STATUS                      PORTS   NAMES
4cba0288c612   alpine        "sh"                    17 minutes ago   Exited (0) 3 seconds ago            kind_nobel
7c56653a2e05   alpine        "sh"                    19 minutes ago   Exited (0) 19 minutes ago           determined_bohr
31e357a13333   alpine        "cat /etc/os-release"   20 minutes ago   Exited (0) 20 minutes ago           admiring_mayer
339e5694c55d   hello-world   "/hello"                27 minutes ago   Exited (0) 27 minutes ago           amazing_panini

Containers that have stopped running are preserved on disk by default. In some contexts, they can be used to debug an issue after the run has completed. To clean them up automatically, add the --rm flag to the run command. Additionally, you might have noticed that containers are given names such as boring_neumann in the example above. Unless you pass one explicitly with the --name flag, a random name is generated. If you know the name of a container, you can use it in place of the container ID in any command that requires one, so it's good practice to name your containers.

jakub:~/dev/docker-demo$ docker container ls -a
CONTAINER ID   IMAGE    COMMAND   CREATED   STATUS   PORTS   NAMES
jakub:~/dev/docker-demo$ docker run --name our_container alpine echo 'hello world'
hello world
jakub:~/dev/docker-demo$ docker container ls -a
CONTAINER ID   IMAGE    COMMAND                CREATED         STATUS                    PORTS   NAMES
55189f4487f6   alpine   "echo 'hello world'"   2 seconds ago   Exited (0) 1 second ago           our_container
jakub:~/dev/docker-demo$ docker run --name will_be_autoremoved --rm alpine echo 'hello world'
hello world
jakub:~/dev/docker-demo$ docker container ls -a
CONTAINER ID   IMAGE    COMMAND                CREATED          STATUS                      PORTS   NAMES
55189f4487f6   alpine   "echo 'hello world'"   29 seconds ago   Exited (0) 29 seconds ago           our_container
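As an aside, the generated names above (kind_nobel, boring_neumann, and friends) follow a pattern: an adjective joined by an underscore to the surname of a notable scientist or engineer. A toy version of such a generator, using small made-up sample lists rather than Docker's actual word lists, might look like:

```python
import random

# Illustrative samples only; Docker's real generator ships much longer lists.
ADJECTIVES = ["admiring", "amazing", "boring", "determined", "dreamy", "kind"]
SURNAMES = ["bohr", "mayer", "neumann", "nobel", "panini", "wozniak"]


def generate_name(rng=random):
    """Return a container-style name such as 'dreamy_wozniak'."""
    return f"{rng.choice(ADJECTIVES)}_{rng.choice(SURNAMES)}"
```

Every call produces a fresh adjective/surname pair, which is why each unnamed container in the transcripts above got a different moniker.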

Building Your First Docker Container on Linux

With the basics of the Docker client mastered, it's time to build a container that will host an API service with a single endpoint returning Hello, World!. The code is available on GitHub:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!\n'

Flask==1.0.2

In order to run the service, a couple of steps have to be completed:

  • The Python module dependencies need to be installed.
  • The FLASK_APP shell variable has to point to the python file with the code.
  • flask run can then be invoked to start the service.

Since there isn’t an existing image that does all that, we will need to create our own. An image is defined in a Dockerfile, which specifies the parent image and a set of instructions to be executed.

The Dockerfile for the service looks as follows:

FROM python:alpine3.6

EXPOSE 5000
ENV FLASK_ENV=development
ENV FLASK_APP=/api/api.py
CMD ["flask", "run", "--host=0.0.0.0"]

COPY ./api /api
RUN pip3 install -r /api/requirements.txt

There’s a lot to take in here so let’s go step by step.

FROM python:alpine3.6
  • Specify the base image in the name:tag format. In this case, it’s the Alpine distribution containing Python 3.6.
EXPOSE 5000
  • Have the container listen on port 5000. That does not mean this port will be available for communication outside the container – it has to be published separately. More on that soon.
ENV FLASK_ENV=development
ENV FLASK_APP=/api/api.py
  • Set environment variables consumed by the code.
CMD ["flask", "run", "--host=0.0.0.0"]
  • The default command executed when the container is running.
COPY ./api /api
  • Copy the source code directory from the host to the image.
RUN pip3 install -r /api/requirements.txt
  • Install the dependencies inside the container.

To build the newly defined image we will use docker build:

docker build -t api:latest .
jakub:~/dev/docker-demo/01-single-container$ docker build -t api:latest .
Sending build context to Docker daemon 4.608kB
Step 1/7 : FROM python:alpine3.6
alpine3.6: Pulling from library/python
605ce1bd3f31: Pull complete
55018be3009c: Pull complete
04cbc77bcb89: Pull complete
3a765a92b253: Pull complete
c704f41e2979: Pull complete
Digest: sha256:2e2b36d517371ae8e5954ddeb557dca0d236de14734b03bd5d4a53069ba4e637
Status: Downloaded newer image for python:alpine3.6
 ---> 08d365ef6f23
Step 2/7 : EXPOSE 5000
 ---> Running in 1be019e3540f
Removing intermediate container 1be019e3540f
 ---> 908124e3cbe3
Step 3/7 : ENV FLASK_ENV=development
 ---> Running in 749e0457771b
Removing intermediate container 749e0457771b
 ---> 1409475bda5e
Step 4/7 : ENV FLASK_APP=/api/api.py
 ---> Running in 52ccf914d98a
Removing intermediate container 52ccf914d98a
 ---> a4af46f27885
Step 5/7 : CMD ["flask", "run", "--host=0.0.0.0"]
 ---> Running in 42138eb88d7f
Removing intermediate container 42138eb88d7f
 ---> 6a5ec9dd6d94
Step 6/7 : COPY ./api /api
 ---> d144346d8ef9
Step 7/7 : RUN pip3 install -r /api/requirements.txt
 ---> Running in fb1c31e24689
Collecting Flask==1.0.2 (from -r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB)
Collecting Werkzeug>=0.14 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
Collecting Jinja2>=2.10 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
Collecting itsdangerous>=0.24 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz (46kB)
Collecting click>=5.1 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask==1.0.2->-r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz
Building wheels for collected packages: itsdangerous, MarkupSafe
 Running setup.py bdist_wheel for itsdangerous: started
 Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
 Stored in directory: /root/.cache/pip/wheels/2c/4a/61/5599631c1554768c6290b08c02c72d7317910374ca602ff1e5
 Running setup.py bdist_wheel for MarkupSafe: started
 Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
 Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c612fffe1c5a632146b16596f9e64676768661e4e46
Successfully built itsdangerous MarkupSafe
Installing collected packages: Werkzeug, MarkupSafe, Jinja2, itsdangerous, click, Flask
Successfully installed Flask-1.0.2 Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 click-6.7 itsdangerous-0.24
Removing intermediate container fb1c31e24689
 ---> b9c17651e402
Successfully built b9c17651e402
Successfully tagged api:latest

The -t flag lets you specify the name and tag for the new image. Once the image is built, it will appear on the list of your images:

docker images
jakub:~/dev/docker-demo/01-single-container$ docker images
REPOSITORY    TAG         IMAGE ID       CREATED              SIZE
api           latest      b9c17651e402   About a minute ago   95.4MB
python        alpine3.6   08d365ef6f23   5 weeks ago          84.9MB
hello-world   latest      e38bc07ac18e   6 weeks ago          1.85kB
alpine        latest      3fd9065eaf02   4 months ago         4.15MB

You can see it’s using the name and tag specified. The order of instructions in the Dockerfile might seem confusing at first: it might appear as if the application is run before its dependencies are installed. That’s not the case though; the entry command specified by CMD is not executed until the container is started. Additionally, ordering the commands this way takes advantage of the image build cache. Each build step is cached, so if any line in the Dockerfile changes, only that line and the ones following it are re-evaluated. Rebuilding the image without any changes results in every step being served from the cache.

docker build -t api:latest .
jakub:~/dev/docker-demo/01-single-container$ docker build -t api:latest .
Sending build context to Docker daemon 4.608kB
Step 1/7 : FROM python:alpine3.6
 ---> 08d365ef6f23
Step 2/7 : EXPOSE 5000
 ---> Using cache
 ---> 908124e3cbe3
Step 3/7 : ENV FLASK_ENV=development
 ---> Using cache
 ---> 1409475bda5e
Step 4/7 : ENV FLASK_APP=/api/api.py
 ---> Using cache
 ---> a4af46f27885
Step 5/7 : CMD ["flask", "run", "--host=0.0.0.0"]
 ---> Using cache
 ---> 6a5ec9dd6d94
Step 6/7 : COPY ./api /api
 ---> Using cache
 ---> d144346d8ef9
Step 7/7 : RUN pip3 install -r /api/requirements.txt
 ---> Using cache
 ---> b9c17651e402
Successfully built b9c17651e402
Successfully tagged api:latest
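The caching behavior can be modeled as a chain of hashes: each step's identity depends on its parent layer and the instruction text, so editing one line gives that step, and every step after it, a new identity. The sketch below is a simplified illustration of this idea, not Docker's actual cache-key algorithm (which, for COPY, also checksums the copied files):

```python
import hashlib


def layer_ids(instructions, base="python:alpine3.6"):
    """Compute a chained id per build step, mimicking layered image builds."""
    ids, parent = [], base
    for instruction in instructions:
        # Each layer id mixes the parent id with the instruction text.
        parent = hashlib.sha256(f"{parent}|{instruction}".encode()).hexdigest()[:12]
        ids.append(parent)
    return ids


before = layer_ids(["EXPOSE 5000", "COPY ./api /api", "RUN pip3 install -r /api/requirements.txt"])
after = layer_ids(["EXPOSE 5000", "COPY ./api-v2 /api", "RUN pip3 install -r /api/requirements.txt"])

# The unchanged first step keeps its id (cache hit); the edited step and
# everything after it get new ids (cache miss).
assert before[0] == after[0]
assert before[1] != after[1] and before[2] != after[2]
```

This is why Dockerfiles conventionally put slow, rarely-changing steps (like installing system packages) before frequently-changing ones (like copying source code).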

Running the Service Inside the Container

While running the container using docker run api does start the Flask service, it won't work as expected.

jakub:~/dev/docker-demo/01-single-container$ docker run --rm --name api api
 * Serving Flask app "/api/api.py" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 191-351-897

That’s because the image is configured to listen on port 5000, but the port hasn’t been forwarded to the host. In order to make the port available on the host, it has to be published:

docker run --rm --name api -p 8082:5000 api
jakub:~/dev/docker-demo/01-single-container$ docker run --rm --name api -p 8082:5000 api
 * Serving Flask app "/api/api.py" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 191-351-897
172.17.0.1 - - [29/May/2018 14:38:15] "GET / HTTP/1.1" 200 -

jakub:~/dev/docker-demo/01-single-container$ curl http://0.0.0.0:8082/
Hello, World!

This forwards the host's port 8082 to the container's port 5000. You can see a container's port forwarding configuration using docker port.

docker port container_id/container_name
jakub:~/dev/docker-demo/01-single-container$ docker port api
5000/tcp -> 0.0.0.0:8082
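If you ever need that mapping programmatically, the docker port output is easy to parse. This small hypothetical helper (not part of any Docker tooling) turns it into a dictionary:

```python
def parse_port_mappings(output):
    """Parse `docker port` lines like '5000/tcp -> 0.0.0.0:8082'."""
    mappings = {}
    for line in output.strip().splitlines():
        container_port, host_addr = (part.strip() for part in line.split("->"))
        mappings[container_port] = host_addr
    return mappings


# For the `docker port api` output above:
mapping = parse_port_mappings("5000/tcp -> 0.0.0.0:8082\n")
# mapping == {"5000/tcp": "0.0.0.0:8082"}
```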

So far we’ve been starting the container in the foreground and closing the terminal window would stop the container. If you want the container to be running in the background without eating up one of your terminals, you can run it in detached mode, with the -d flag.

docker run --rm --name api -p 8082:5000 -d api
docker ps
docker logs container_id/container_name
jakub:~/dev/docker-demo/01-single-container$ docker run --rm --name api -p 8082:5000 -d api
9db82080612b8314cf8935a75450cd2b0a244da3e21361edb90c0b3e10a4b14a
jakub:~/dev/docker-demo/01-single-container$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                    NAMES
9db82080612b   api     "flask run --host=0.…"   8 seconds ago   Up 7 seconds   0.0.0.0:8082->5000/tcp   api
jakub:~/dev/docker-demo/01-single-container$ docker logs api
 * Serving Flask app "/api/api.py" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 191-351-897

docker stop can then be used to stop the server and bring the container down.

docker stop container_id/container_name
docker ps
jakub:~/dev/docker-demo/01-single-container$ docker stop api
api
jakub:~/dev/docker-demo/01-single-container$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES

Working with Multiple Containers

Most web services depend on a database, so let’s add one to this project.

docker run --rm --name postgres_db -p 5435:5432 -e POSTGRES_PASSWORD=postgrespassword -d postgres
jakub:~/dev/docker-demo/01-single-container$ docker run --rm --name postgres_db -p 5435:5432 -e POSTGRES_PASSWORD=postgrespassword -d postgres
Unable to find image 'postgres:latest' locally
latest: Pulling from library/postgres
f2aa67a397c4: Pull complete
8218dd41bf94: Pull complete
e9b7fa2e6bd8: Pull complete
7288a45ee17f: Pull complete
0d0f8a67376c: Pull complete
972b115243de: Pull complete
d38528c83dd1: Pull complete
9be166d23dee: Pull complete
12015b5ceae7: Pull complete
363876c09ce9: Pull complete
b810ba8b2ac0: Pull complete
e1ee11d636cf: Pull complete
50d32813cba1: Pull complete
4f0109485c03: Pull complete
Digest: sha256:1acf72239c685322579be2116dc54f8a25fc4523882df35171229c9fee3b3b17
Status: Downloaded newer image for postgres:latest
fd2c38057d5f8db0b407736af70df940b11a86786af9af546caabc75eed58dcc

The default postgres image is downloaded and started in detached mode. The default postgres port, 5432, is forwarded to port 5435 on the host (-p flag), and the POSTGRES_PASSWORD environment variable (-e flag) sets the database password inside the container. Let's verify the database is running correctly.

docker logs --tail 20 postgres_db
jakub:~/dev/docker-demo/02-multiple-containers$ docker logs --tail 20 postgres_db
ALTER ROLE


/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

waiting for server to shut down...2018-05-29 14:43:42.323 UTC [43] LOG: received fast shutdown request
.2018-05-29 14:43:42.324 UTC [43] LOG: aborting any active transactions
2018-05-29 14:43:42.325 UTC [43] LOG: worker process: logical replication launcher (PID 50) exited with exit code 1
2018-05-29 14:43:42.325 UTC [45] LOG: shutting down
2018-05-29 14:43:42.379 UTC [43] LOG: database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

2018-05-29 14:43:42.434 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
2018-05-29 14:43:42.434 UTC [1] LOG: listening on IPv6 address "::", port 5432
2018-05-29 14:43:42.449 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2018-05-29 14:43:42.474 UTC [61] LOG: database system was shut down at 2018-05-29 14:43:42 UTC
2018-05-29 14:43:42.485 UTC [1] LOG: database system is ready to accept connections
PGPASSWORD=postgrespassword psql -h localhost -p 5435 -U postgres -c '\l'
jakub:~/dev/docker-demo/02-multiple-containers$ PGPASSWORD=postgrespassword psql -h localhost -p 5435 -U postgres -c '\l'
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(3 rows)

With the database running as expected, let’s update the application to connect to the database and define our database models. We’re going to use SQLAlchemy as the ORM and the Flask-SQLAlchemy library to make the integration easier.

from datetime import datetime
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres://postgres:postgrespassword@postgres_db:5432/postgres'
db = SQLAlchemy(app)

class RequestLog(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    timestamp = db.Column(db.DateTime, nullable=False)

db.drop_all()
db.create_all()

@app.route('/')
def hello_world():
    # Log the request.
    request = RequestLog(timestamp=datetime.utcnow())
    db.session.add(request)
    db.session.commit()

    return f'Hello, World! Your request ID is {request.id}\n'

Flask==1.0.2
Flask-SQLAlchemy==2.3.2
psycopg2==2.7.4

The updated web server logs the time of each request and saves it with a numeric ID. Let’s look at the changes in detail:

app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres://postgres:postgrespassword@postgres_db:5432/postgres'
db = SQLAlchemy(app)

When starting the server, we configure the database connection. It’s using the password passed earlier to the postgres image and it’s connecting to the postgres_db host, which is the other container.
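The connection string is a regular URL, and its components map directly onto the container setup; pulling it apart with Python's standard library makes that mapping explicit:

```python
from urllib.parse import urlsplit

url = urlsplit("postgres://postgres:postgrespassword@postgres_db:5432/postgres")

assert url.scheme == "postgres"             # database dialect
assert url.username == "postgres"           # database user
assert url.password == "postgrespassword"   # the value passed via POSTGRES_PASSWORD
assert url.hostname == "postgres_db"        # the name of the database container
assert url.port == 5432                     # postgres's port inside the container
assert url.path.lstrip("/") == "postgres"   # database name
```

Note that the hostname is the container's name, not localhost; resolving it requires the two containers to share a Docker network.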

class RequestLog(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    timestamp = db.Column(db.DateTime, nullable=False)

A simple request logging model – it stores an incremental ID of the request and the time of the request.

db.drop_all()
db.create_all()

When the service starts, the tables for our models are created in the database.

@app.route('/')
def hello_world():
    # Log the request.
    request = RequestLog(timestamp=datetime.utcnow())
    db.session.add(request)
    db.session.commit()

    return f'Hello, World! Your request ID is {request.id}\n'

Now every incoming request will be saved to the database, and the returned message will contain the request's database ID. The updated source code is available on GitHub. With the updates in place, the API image needs to be rebuilt.

docker build -t api:latest .
jakub:~/dev/docker-demo/02-multiple-containers$ docker build -t api:latest .
Sending build context to Docker daemon 5.12kB
Step 1/8 : FROM python:alpine3.6
 ---> 08d365ef6f23
Step 2/8 : EXPOSE 5000
 ---> Using cache
 ---> 908124e3cbe3
Step 3/8 : ENV FLASK_ENV=development
 ---> Using cache
 ---> 1409475bda5e
Step 4/8 : ENV FLASK_APP=/api/api.py
 ---> Using cache
 ---> a4af46f27885
Step 5/8 : CMD ["flask", "run", "--host=0.0.0.0"]
 ---> Using cache
 ---> 6a5ec9dd6d94
Step 6/8 : RUN apk update && apk add --virtual build-deps gcc python3-dev musl-dev && apk add postgresql-dev
 ---> Running in d6dfd875db6c
fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/community/x86_64/APKINDEX.tar.gz
v3.6.2-317-g2ee6af5577 [http://dl-cdn.alpinelinux.org/alpine/v3.6/main]
v3.6.2-305-gbd91af380d [http://dl-cdn.alpinelinux.org/alpine/v3.6/community]
OK: 8442 distinct packages available
(1/16) Installing binutils-libs (2.28-r3)
(2/16) Installing binutils (2.28-r3)
(3/16) Installing gmp (6.1.2-r0)
(4/16) Installing isl (0.17.1-r0)
(5/16) Installing libgomp (6.3.0-r4)
(6/16) Installing libatomic (6.3.0-r4)
(7/16) Installing pkgconf (1.3.7-r0)
(8/16) Installing libgcc (6.3.0-r4)
(9/16) Installing mpfr3 (3.1.5-r0)
(10/16) Installing mpc1 (1.0.3-r0)
(11/16) Installing libstdc++ (6.3.0-r4)
(12/16) Installing gcc (6.3.0-r4)
(13/16) Installing python3 (3.6.1-r3)
(14/16) Installing python3-dev (3.6.1-r3)
(15/16) Installing musl-dev (1.1.16-r14)
(16/16) Installing build-deps (0)
Executing busybox-1.26.2-r9.trigger
OK: 192 MiB in 51 packages
(1/10) Upgrading libressl2.5-libcrypto (2.5.5-r0 -> 2.5.5-r1)
(2/10) Upgrading libressl2.5-libssl (2.5.5-r0 -> 2.5.5-r1)
(3/10) Installing libressl2.5-libtls (2.5.5-r1)
(4/10) Installing libressl-dev (2.5.5-r1)
(5/10) Installing db (5.3.28-r0)
(6/10) Installing libsasl (2.1.26-r10)
(7/10) Installing libldap (2.4.44-r5)
(8/10) Installing libpq (9.6.9-r0)
(9/10) Installing postgresql-libs (9.6.9-r0)
(10/10) Installing postgresql-dev (9.6.9-r0)
Executing busybox-1.26.2-r9.trigger
OK: 217 MiB in 59 packages
Removing intermediate container d6dfd875db6c
 ---> 7fea1f903ce9
Step 7/8 : COPY ./api /api
 ---> f15cd36f4885
Step 8/8 : RUN pip3 install -r /api/requirements.txt
 ---> Running in c5519535071c
Collecting Flask==1.0.2 (from -r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB)
Collecting Flask-SQLAlchemy==2.3.2 (from -r /api/requirements.txt (line 2))
 Downloading https://files.pythonhosted.org/packages/a1/44/294fb7f6bf49cc7224417cd0637018db9fee0729b4fe166e43e2bbb1f1c8/Flask_SQLAlchemy-2.3.2-py2.py3-none-any.whl
Collecting psycopg2==2.7.4 (from -r /api/requirements.txt (line 3))
 Downloading https://files.pythonhosted.org/packages/74/83/51580322ed0e82cba7ad8e0af590b8fb2cf11bd5aaa1ed872661bd36f462/psycopg2-2.7.4.tar.gz (425kB)
Collecting Jinja2>=2.10 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
Collecting itsdangerous>=0.24 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz (46kB)
Collecting click>=5.1 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB)
Collecting Werkzeug>=0.14 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
Collecting SQLAlchemy>=0.8.0 (from Flask-SQLAlchemy==2.3.2->-r /api/requirements.txt (line 2))
 Downloading https://files.pythonhosted.org/packages/b4/9c/411a9bac1a471bed54ec447dc183aeed12a75c1b648307e18b56e3829363/SQLAlchemy-1.2.8.tar.gz (5.6MB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask==1.0.2->-r /api/requirements.txt (line 1))
 Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz
Building wheels for collected packages: psycopg2, itsdangerous, SQLAlchemy, MarkupSafe
 Running setup.py bdist_wheel for psycopg2: started
 Running setup.py bdist_wheel for psycopg2: finished with status 'done'
 Stored in directory: /root/.cache/pip/wheels/43/ff/71/a0b0d6dbf71f912b95cf18101bca206b40eed5086d8fdb4ed9
 Running setup.py bdist_wheel for itsdangerous: started
 Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
 Stored in directory: /root/.cache/pip/wheels/2c/4a/61/5599631c1554768c6290b08c02c72d7317910374ca602ff1e5
 Running setup.py bdist_wheel for SQLAlchemy: started
 Running setup.py bdist_wheel for SQLAlchemy: finished with status 'done'
 Stored in directory: /root/.cache/pip/wheels/df/fc/61/df2f43ec3f11f864554bdc006a866a3ffffa59740bcf3674ef
 Running setup.py bdist_wheel for MarkupSafe: started
 Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
 Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c612fffe1c5a632146b16596f9e64676768661e4e46
Successfully built psycopg2 itsdangerous SQLAlchemy MarkupSafe
Installing collected packages: MarkupSafe, Jinja2, itsdangerous, click, Werkzeug, Flask, SQLAlchemy, Flask-SQLAlchemy, psycopg2
Successfully installed Flask-1.0.2 Flask-SQLAlchemy-2.3.2 Jinja2-2.10 MarkupSafe-1.0 SQLAlchemy-1.2.8 Werkzeug-0.14.1 click-6.7 itsdangerous-0.24 psycopg2-2.7.4
Removing intermediate container c5519535071c
 ---> 0d12b12c1582
Successfully built 0d12b12c1582
97Successfully tagged api:latest
98

Using the build cache, only the last two steps of the Dockerfile had to be executed: the Python source files were copied over and the dependencies were installed. By default, Docker containers can communicate only with the host, not with each other; additional configuration is required to connect them. As long as the postgres_db container is started first, the API container can be run with a link to it, allowing it to resolve the database connection. With the link configured, we can see that the inter-container communication is working correctly.

docker run --rm --name api -p 8082:5000 --link postgres_db:postgres_db -d api
curl localhost:8082
docker logs api
docker stop api postgres_db
jakub:~/dev/docker-demo/02-multiple-containers$ docker run --rm --name api -p 8082:5000 --link postgres_db:postgres_db -d api
b112118717e1cc9599f2f2a7285f87f9914e293b5ee8b01defe7711d67257f5c
jakub:~/dev/docker-demo/02-multiple-containers$ curl localhost:8082
Hello, World! Your request ID is 1
jakub:~/dev/docker-demo/02-multiple-containers$ curl localhost:8082
Hello, World! Your request ID is 2
jakub:~/dev/docker-demo/02-multiple-containers$ curl localhost:8082
Hello, World! Your request ID is 3
jakub:~/dev/docker-demo/02-multiple-containers$ docker logs api
 * Serving Flask app "/api/api.py" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 221-683-573
172.17.0.1 - - [29/May/2018 14:52:25] "GET / HTTP/1.1" 200 -
172.17.0.1 - - [29/May/2018 14:52:26] "GET / HTTP/1.1" 200 -
172.17.0.1 - - [29/May/2018 14:52:27] "GET / HTTP/1.1" 200 -
jakub:~/dev/docker-demo/02-multiple-containers$ docker stop api postgres_db
api
postgres_db
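One detail worth spelling out: inside the API container, the link alias, not localhost, is the hostname the application connects to, and it targets the database container's own port 5432 rather than the 5435 published on the host. As a sketch (the credentials and database name here are illustrative assumptions, not necessarily the demo repository's exact code), the connection URI the Flask app might build looks like this:

```python
# Hypothetical sketch of how the Flask app could build its database URI.
# With `--link postgres_db:postgres_db`, the alias "postgres_db" resolves
# to the database container from inside the API container.
DB_USER = "postgres"              # default user of the official postgres image
DB_PASSWORD = "postgrespassword"  # matches POSTGRES_PASSWORD set earlier
DB_HOST = "postgres_db"           # the --link alias, NOT localhost
DB_PORT = 5432                    # the container's port, not the host's 5435

SQLALCHEMY_DATABASE_URI = (
    f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/postgres"
)
```

Port 5435 only matters on the host side; container-to-container traffic always uses the port the service actually listens on.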

It might seem that the advantages of using containers are outweighed by the cumbersome setup: you have to start the containers individually, in the correct order, and explicitly link them so they can work together.

That’s where Docker Compose comes in.

Meet Docker Compose

Docker Compose is a tool for running multi-container Docker applications. While it requires some additional configuration (in the form of a docker-compose.yaml file describing the application's services), multiple containers can then be built and run with a single command. Docker Compose is not a replacement for the Docker command-line client, but an abstraction layer on top of it. Our docker-compose.yaml file will contain the definitions of the API and database services.

version: '3.1'

services:

  postgres_db:
    container_name: postgres_db
    image: postgres
    ports:
      - 5435:5432
    environment:
      - POSTGRES_PASSWORD=postgrespassword
    healthcheck:
      test: exit 0

  api:
    container_name: api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8082:5000
    depends_on:
      - postgres_db

The version directive specifies which version of the Docker Compose syntax we're using. It's important to provide it, as there are non-backwards-compatible changes between versions; you can read more about this in the official documentation. The services section describes the containers we will run. The postgres_db definition should look familiar, as it contains the arguments we previously passed to docker run:

docker run --rm -p 5435:5432 -e POSTGRES_PASSWORD=postgrespassword --name postgres_db -d postgres

The advantage of storing them in the docker-compose.yaml file is that you won't have to remember them when you start the container. The postgres_db service uses a public image instead of a local Dockerfile. The opposite is the case for the API image, so we need to provide the build context (the path containing the Dockerfile) and the name of the Dockerfile to build the image from. The configuration also ensures that the API container is started after postgres_db, since the former depends on the latter.

With the build and run-time configuration specified in the docker-compose.yaml file, both containers can be built and started with a single command.

docker-compose build
docker-compose up -d
curl localhost:8082
docker-compose down
jakub:~/dev/docker-demo/03-using-compose$ docker-compose build
postgres_db uses an image, skipping
Building api
Step 1/8 : FROM python:alpine3.6
 ---> 08d365ef6f23
Step 2/8 : EXPOSE 5000
 ---> Using cache
 ---> 908124e3cbe3
Step 3/8 : ENV FLASK_ENV=development
 ---> Using cache
 ---> 1409475bda5e
Step 4/8 : ENV FLASK_APP=/api/api.py
 ---> Using cache
 ---> a4af46f27885
Step 5/8 : CMD ["flask", "run", "--host=0.0.0.0"]
 ---> Using cache
 ---> 6a5ec9dd6d94
Step 6/8 : RUN apk update && apk add --virtual build-deps gcc python3-dev musl-dev && apk add postgresql-dev
 ---> Using cache
 ---> 7fea1f903ce9
Step 7/8 : COPY ./api /api
 ---> 74cac33aee4b
Step 8/8 : RUN pip3 install -r /api/requirements.txt
 ---> Running in 5c2266489bf2
Collecting Flask==1.0.2 (from -r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB)
Collecting Flask-SQLAlchemy==2.3.2 (from -r /api/requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/a1/44/294fb7f6bf49cc7224417cd0637018db9fee0729b4fe166e43e2bbb1f1c8/Flask_SQLAlchemy-2.3.2-py2.py3-none-any.whl
Collecting psycopg2==2.7.4 (from -r /api/requirements.txt (line 3))
  Downloading https://files.pythonhosted.org/packages/74/83/51580322ed0e82cba7ad8e0af590b8fb2cf11bd5aaa1ed872661bd36f462/psycopg2-2.7.4.tar.gz (425kB)
Collecting click>=5.1 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB)
Collecting itsdangerous>=0.24 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz (46kB)
Collecting Jinja2>=2.10 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
Collecting Werkzeug>=0.14 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
Collecting SQLAlchemy>=0.8.0 (from Flask-SQLAlchemy==2.3.2->-r /api/requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/b4/9c/411a9bac1a471bed54ec447dc183aeed12a75c1b648307e18b56e3829363/SQLAlchemy-1.2.8.tar.gz (5.6MB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz
Building wheels for collected packages: psycopg2, itsdangerous, SQLAlchemy, MarkupSafe
  Running setup.py bdist_wheel for psycopg2: started
  Running setup.py bdist_wheel for psycopg2: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/43/ff/71/a0b0d6dbf71f912b95cf18101bca206b40eed5086d8fdb4ed9
  Running setup.py bdist_wheel for itsdangerous: started
  Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/2c/4a/61/5599631c1554768c6290b08c02c72d7317910374ca602ff1e5
  Running setup.py bdist_wheel for SQLAlchemy: started
  Running setup.py bdist_wheel for SQLAlchemy: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/df/fc/61/df2f43ec3f11f864554bdc006a866a3ffffa59740bcf3674ef
  Running setup.py bdist_wheel for MarkupSafe: started
  Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c612fffe1c5a632146b16596f9e64676768661e4e46
Successfully built psycopg2 itsdangerous SQLAlchemy MarkupSafe
Installing collected packages: click, itsdangerous, MarkupSafe, Jinja2, Werkzeug, Flask, SQLAlchemy, Flask-SQLAlchemy, psycopg2
Successfully installed Flask-1.0.2 Flask-SQLAlchemy-2.3.2 Jinja2-2.10 MarkupSafe-1.0 SQLAlchemy-1.2.8 Werkzeug-0.14.1 click-6.7 itsdangerous-0.24 psycopg2-2.7.4
Removing intermediate container 5c2266489bf2
 ---> 53add68d9400
Successfully built 53add68d9400
Successfully tagged 03-using-compose_api:latest
jakub:~/dev/docker-demo/03-using-compose$ docker-compose up -d
Creating network "03-using-compose_default" with the default driver
Creating postgres_db ... done
Creating api         ... done
jakub:~/dev/docker-demo/03-using-compose$ docker-compose ps
    Name                  Command                    State                     Ports
--------------------------------------------------------------------------------------------
api           flask run --host=0.0.0.0        Up                      0.0.0.0:8082->5000/tcp
postgres_db   docker-entrypoint.sh postgres   Up (health: starting)   0.0.0.0:5435->5432/tcp
jakub:~/dev/docker-demo/03-using-compose$ curl localhost:8082
Hello, World! Your request ID is 1
jakub:~/dev/docker-demo/03-using-compose$ curl localhost:8082
Hello, World! Your request ID is 2
jakub:~/dev/docker-demo/03-using-compose$ curl localhost:8082
Hello, World! Your request ID is 3
jakub:~/dev/docker-demo/03-using-compose$ docker-compose down
Stopping api         ... done
Stopping postgres_db ... done
Removing api         ... done
Removing postgres_db ... done
Removing network 03-using-compose_default

Docker Compose for Local Development

In the Dockerfile we copied the source code from the host machine into the API container, so any changes made locally are not picked up until the image is rebuilt. To avoid rebuilding the image every time the application code is updated, it's possible to mount a local directory inside the container, making modifications on the host immediately visible inside it.

That change, applied to the docker-compose.yaml file, would work great for development, but it's not a configuration that would be welcome in production: there, you want to rule out the ability to circumvent the release process and edit the application in situ. Fortunately, there's no need to duplicate the entire docker-compose.yaml file for each environment. A docker-compose.override.yaml file lets you compose two files, one as the base and the other layering modifications on top of it. In this case, the only modification we want locally is to mount the source code directory inside the container.

version: '3.1'

services:

  api:
    volumes:
      - ./api/:/api/

You can find the changes on GitHub. When running docker-compose now, the API service will use the configuration from the docker-compose.yaml file, with values from docker-compose.override.yaml taking precedence over duplicates. Once the API service is started, the Flask development server monitors the code for changes and restarts inside the container whenever one occurs. It's worth pointing out that changes to the compose or override files themselves only take effect once the containers are recreated.
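The precedence rule can be illustrated with a small merge function. This is a rough sketch of the idea, not Compose's actual algorithm (the real merge has per-key semantics, e.g. ports and volumes are appended rather than replaced):

```python
def merge_compose(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`; override values win.

    A simplified illustration of how docker-compose combines
    docker-compose.yaml with docker-compose.override.yaml.
    """
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # both sides define a mapping for this key -> merge deeply
            merged[key] = merge_compose(merged[key], value)
        else:
            # override wins outright
            merged[key] = value
    return merged


base = {"services": {"api": {"build": ".", "ports": ["8082:5000"]}}}
override = {"services": {"api": {"volumes": ["./api/:/api/"]}}}

api = merge_compose(base, override)["services"]["api"]
# api now carries the base build/ports config plus the mounted volume
```

To see the effective configuration Compose will actually use after merging, you can run docker-compose config.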

docker-compose up -d
docker logs -f api
curl localhost:8082
jakub:~/dev/docker-demo/04-compose-override$ docker-compose up -d
Creating network "04-compose-override_default" with the default driver
Building api
[...]
Successfully built d4f0d4739d3b
Successfully tagged 04-compose-override_api:latest
WARNING: Image for service api was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating postgres_db ... done
Creating api         ... done
jakub:~/dev/docker-demo/04-compose-override$ docker logs -f api
 * Serving Flask app "/api/api.py" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 229-906-818
172.18.0.1 - - [29/May/2018 15:14:47] "GET / HTTP/1.1" 200 -
172.18.0.1 - - [29/May/2018 15:14:48] "GET / HTTP/1.1" 200 -
172.18.0.1 - - [29/May/2018 15:14:48] "GET / HTTP/1.1" 200 -
 * Detected change in '/api/api.py', reloading
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 229-906-818
172.18.0.1 - - [29/May/2018 15:15:03] "GET / HTTP/1.1" 200 -
172.18.0.1 - - [29/May/2018 15:15:04] "GET / HTTP/1.1" 200 -
jakub:~/dev/docker-demo/04-compose-override$ curl http://0.0.0.0:8082/
Hello, World! Your request ID is 1
jakub:~/dev/docker-demo/04-compose-override$ curl http://0.0.0.0:8082/
Hello, World! Your request ID is 2
jakub:~/dev/docker-demo/04-compose-override$ curl http://0.0.0.0:8082/
Hello, World! Your request ID is 3
jakub:~/dev/docker-demo/04-compose-override$ sed -i 's/World/Docker/g' api/api.py
jakub:~/dev/docker-demo/04-compose-override$ curl http://0.0.0.0:8082/
Hello, Docker! Your request ID is 1
jakub:~/dev/docker-demo/04-compose-override$ curl http://0.0.0.0:8082/
Hello, Docker! Your request ID is 2

Where to Go From Here?

If you’ve enjoyed this experience with Docker for Linux, there are plenty of ways to further your knowledge. The official Docker website is a goldmine of information: you can familiarise yourself with the guides or learn, in depth, about the Docker client commands and config-file syntax. Once you’ve quenched your thirst for Docker knowledge, you can move on to exploring container orchestration frameworks, which work on top of container technologies such as Docker and help automate container deployment, scaling and management tasks. The frameworks you might want to read about are:

  • Kubernetes
  • Docker Swarm
  • Amazon ECS
  • Azure Container Service
  • Google Container Engine

Conclusion

So there you go. Docker is a fast and consistent way to accelerate and automate the shipping of software. It saves developers from having to set up multiple development environments each time they test and deploy code. That time can then be spent developing quality software instead.

Hopefully, this article has sparked your interest in Docker for Linux. I would love to hear where you take this new knowledge and what you think about Docker. So feel free to comment below.

This article is based on Rafael Carvalhos’ recent post ‘How to Get Started with Docker on Windows’. It has been adapted and expanded for the Linux platform by Jakub Musko.

If you are looking for more content for DevOps Engineers, we have an article about Continuous Integration with all the aspects of this software project practice.


Looking to hire?

Join our newsletter

Join thousands of subscribers already getting our original articles about software design and development. You will not receive any spam, just great content once a month.
