How to Build a Python Flask API App Using Docker on Linux
Docker has become a darling of the DevOps community because it enables true independence between applications, environments, infrastructure, and developers. The tool, first released in 2013, was initially developed for Linux, but is now fully supported on macOS and Windows, as well as all major cloud service providers (including AWS, Azure and Google Cloud).
In this article, I’ll focus on two of these tools, Docker and Docker Compose, and more specifically on using them on Linux to build an API in Flask. I will be demoing all of this on a Linux environment, but many of the concepts apply equally to development across all platforms.
If you prefer working in the Windows environment, we’ve got you covered. You can also switch to the Windows version of this article.
What Is Docker for Linux?
Docker is a simple, intuitive tool that performs operating system-level virtualization, a process also referred to as containerization. With Docker, you can package your application, and all of the software required to run it, into a single container. You can then run that container in your local development environment all the way to production.
Docker has many benefits, including:
- Enables developers to spend less time on configuration and more time building software.
- Lets you build independent containers that interact with each other seamlessly.
- Provides a fast, consistent way to accelerate and automate the shipping of software.
- Saves developers from having to set up and configure multiple development environments each time they test or deploy.
Requirements
Before we can continue any further, you will need to install Docker and Docker Compose. The installation is simple, and step-by-step instructions for your platform can be found on the official Docker website. Once the Docker installation is complete, it’s time to install Docker Compose. The process is even simpler than for Docker, and the official instructions are available here.
Let’s Go!
To verify the installation has been completed successfully, run the following commands in the terminal:
```
$ docker --version
$ docker-compose --version
```
If everything has been set up correctly, the commands will return the versions of the tools installed (the versions in your environment might differ slightly):
```
jakub:~/dev/docker-demo$ docker --version
Docker version 17.12.1-ce, build 7390fc6
jakub:~/dev/docker-demo$ docker-compose --version
docker-compose version 1.21.2, build a133471
```
Docker Internals
OK, we’re on our way! Before we get too deep, it’s useful to know a few Docker terms. Knowing these will help you understand how everything is interconnected.
Daemon
The Daemon can be considered the brain of the whole operation. The Daemon is responsible for managing the lifecycle of containers and interacting with the operating system. It does all of the heavy lifting every time a command is executed.
Client
The Client is an HTTP API wrapper that exposes a set of commands interpreted by the Daemon.
Registries
The Registries are responsible for storing images. They can be public, or private, and are available with different providers (Azure has its own container registry). Docker is configured to look for images on Docker Hub by default. To see how they interact with each other, let’s run our first image:
```
$ docker run hello-world
```
```
jakub:~/dev/docker-demo$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9bb5a5d4561a: Pull complete
Digest: sha256:f5233545e43561214ca4891fd1157e1c3c563316ed8e237750d59bde73361e77
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/
```
The output above explains what happened behind the scenes: the client (the docker command-line tool) contacted the daemon and asked it to run the hello-world image. Since the image wasn’t available locally, it had to be downloaded from the registry (Docker Hub is set as the default). The daemon then created a container from the image, ran it, and streamed the generated output to the client, making it appear in your terminal.
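The pull-if-missing behaviour described above can be sketched in a few lines of Python. This is a hypothetical toy model with made-up function names, not the Docker SDK; it only mirrors the decision the daemon makes:

```python
# Toy model of the client/daemon/registry flow: the daemon pulls an
# image from the registry only if it isn't already cached locally.
# All names here are illustrative, not real Docker APIs.
local_images = set()

def pull(image, registry='library'):
    """Simulate downloading an image from a registry."""
    local_images.add(image)
    return f"{image}: Pulling from {registry}/{image}"

def run(image):
    """Simulate `docker run`: pull if missing, then create a container."""
    events = []
    if image not in local_images:
        events.append(f"Unable to find image '{image}:latest' locally")
        events.append(pull(image))
    events.append(f"Created and started a container from '{image}'")
    return events

print(run('hello-world'))  # first run triggers a pull
print(run('hello-world'))  # second run uses the local copy
```

The second call skips the pull entirely, which is exactly why repeated `docker run hello-world` invocations start instantly.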
If you’re experiencing problems with the example above (e.g. you’re required to run the Docker client using sudo), there are a few post-installation steps you might want to go through, described here. In short, you’ll need to add your current user to the docker user group, removing the need for elevated privileges. This can be done with the following commands:
```
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
```
The Docker Client
Now that you know how to run a docker image, let’s look at the basics of image management. You can see what images are currently downloaded using docker images.
```
jakub:~/dev/docker-demo$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              e38bc07ac18e        6 weeks ago         1.85kB
```
Right now, we only have the hello-world image we downloaded in the previous step. Let’s download a Linux image and use it to execute custom commands. The image we’re going to use is Alpine, a lightweight Docker image based on Alpine Linux. We’re going to use docker pull to explicitly download the image from the image registry:
```
docker pull alpine
docker images
```
```
jakub:~/dev/docker-demo$ docker pull alpine
Using default tag: latest
latest: Pulling from library/alpine
ff3a5c916c92: Pull complete
Digest: sha256:7df6db5aa61ae9480f52f0b3a06a140ab98d427f86d8d5de0bedab9b8df6b1c0
Status: Downloaded newer image for alpine:latest
jakub:~/dev/docker-demo$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
hello-world         latest              e38bc07ac18e        6 weeks ago         1.85kB
alpine              latest              3fd9065eaf02        4 months ago        4.15MB
```
We now have two images at our disposal. Let’s run a command using the new image:
```
docker run alpine cat /etc/os-release
```
```
jakub:~/dev/docker-demo$ docker run alpine cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.7.0
PRETTY_NAME="Alpine Linux v3.7"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"
```
Printing the contents of the /etc/os-release file on the guest filesystem, we can see which version of Alpine it is running. Using docker run creates a new container and runs the command inside the container until completion. If you want to run an interactive command inside the container, you’ll need to pass the -i -t flags to the run command.
```
docker run -it alpine sh
```
```
jakub:~/dev/docker-demo$ docker run -i -t alpine sh
/ # env
HOSTNAME=7c56653a2e05
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
jakub:~/dev/docker-demo$
```
We now have an interactive shell inside the Docker container, which means the container keeps running until we exit the shell. You can verify that by opening another terminal window and running docker ps in it.
```
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
4cba0288c612        alpine              "sh"                5 seconds ago       Up 4 seconds                            kind_nobel
```
You can confirm that it’s the same container by comparing the CONTAINER ID against the HOSTNAME from the previous command. You can use that value to connect to an already-running container, and you will see that changes made in one shell session are visible in the other.
```
docker exec -it container_id sh
```
```
jakub:~/dev/docker-demo$ docker run -i -t alpine sh
/ # ls home/
/ # mkdir home/hello
/ # ls home/
hello
/ #
```
```
jakub:~/dev/docker-demo$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
4cba0288c612        alpine              "sh"                6 minutes ago       Up 6 minutes                            kind_nobel
jakub:~/dev/docker-demo$ docker exec -it 4cba0288c612 sh
/ # ls home/
hello
```
The changes will not persist between different runs of the same container though.
```
jakub:~/dev/docker-demo$ docker run -it alpine sh
/ # ls home/
/ # mkdir home/hello
/ # ls home/
hello
/ # exit
jakub:~/dev/docker-demo$ docker run -it alpine sh
/ # ls home/
/ # exit
```
You can see the running containers with docker container ls. By adding the -a flag, you can see all previously running containers as well. To remove the stopped ones, use docker rm.
```
docker container ls -a
docker rm container_id/container_name
```
```
jakub:~/dev/docker-demo$ docker container ls -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                          PORTS               NAMES
fb7d54d3259c        alpine              "sh"                     About a minute ago   Exited (0) About a minute ago                       dreamy_wozniak
8a25c328e100        alpine              "sh"                     About a minute ago   Exited (0) About a minute ago                       boring_neumann
4cba0288c612        alpine              "sh"                     17 minutes ago       Exited (0) 3 seconds ago                            kind_nobel
7c56653a2e05        alpine              "sh"                     19 minutes ago       Exited (0) 19 minutes ago                           determined_bohr
31e357a13333        alpine              "cat /etc/os-release"    20 minutes ago       Exited (0) 20 minutes ago                           admiring_mayer
339e5694c55d        hello-world         "/hello"                 27 minutes ago       Exited (0) 27 minutes ago                           amazing_panini
jakub:~/dev/docker-demo$ docker container rm fb7d54d3259c
fb7d54d3259c
jakub:~/dev/docker-demo$ docker container rm boring_neumann
boring_neumann
jakub:~/dev/docker-demo$ docker container ls -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                          PORTS               NAMES
4cba0288c612        alpine              "sh"                     17 minutes ago       Exited (0) 3 seconds ago                            kind_nobel
7c56653a2e05        alpine              "sh"                     19 minutes ago       Exited (0) 19 minutes ago                           determined_bohr
31e357a13333        alpine              "cat /etc/os-release"    20 minutes ago       Exited (0) 20 minutes ago                           admiring_mayer
339e5694c55d        hello-world         "/hello"                 27 minutes ago       Exited (0) 27 minutes ago                           amazing_panini
```
The containers that stopped running are preserved on disk by default. In some contexts, they can be used to debug an issue after the run has completed. To clean them up automatically, add the --rm flag to the run command. Additionally, you might have noticed that containers are given names such as boring_neumann in the example above. Unless you pass one explicitly with the --name flag, a random name is generated. If you know the name of a container, you can use it in place of the container ID in any command that requires one, so it’s good practice to name your containers.
```
jakub:~/dev/docker-demo$ docker container ls -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                      PORTS               NAMES
jakub:~/dev/docker-demo$ docker run --name our_container alpine echo 'hello world'
hello world
jakub:~/dev/docker-demo$ docker container ls -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                      PORTS               NAMES
55189f4487f6        alpine              "echo 'hello world'"   2 seconds ago       Exited (0) 1 second ago                         our_container
jakub:~/dev/docker-demo$ docker run --name will_be_autoremoved --rm alpine echo 'hello world'
hello world
jakub:~/dev/docker-demo$ docker container ls -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                      PORTS               NAMES
55189f4487f6        alpine              "echo 'hello world'"   29 seconds ago      Exited (0) 29 seconds ago                       our_container
```
Building Your First Docker Container on Linux
With the basics of the Docker client mastered, it’s time to build a container that will host an API service with a single endpoint returning Hello, World!. The code is available on GitHub:
```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!\n'
```
```
Flask==1.0.2
```
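Before containerizing anything, the endpoint can be sanity-checked with Flask’s built-in test client, which exercises the route without starting a server. The app is re-declared inline here so the snippet is self-contained:

```python
# Verify the route's response using Flask's test client, with no server
# and no container. The app definition mirrors the api.py file above.
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, World!\n'

with app.test_client() as client:
    response = client.get('/')

print(response.status_code)             # 200
print(response.get_data(as_text=True))  # Hello, World!
```

This is a quick local check only; the rest of the article runs the same app inside a container.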
In order to run the service, a couple of steps have to be completed:
- The Python module dependencies need to be installed.
- The FLASK_APP shell variable has to point to the Python file with the code; flask run can then be invoked to start the service.
Since there isn’t an existing image that does all that, we will need to create our own. All image definitions are contained in Dockerfiles, which specify the parent image and a set of instructions to be executed.
The Dockerfile for the service looks as follows:
```dockerfile
FROM python:alpine3.6

EXPOSE 5000
ENV FLASK_ENV=development
ENV FLASK_APP=/api/api.py
CMD ["flask", "run", "--host=0.0.0.0"]

COPY ./api /api
RUN pip3 install -r /api/requirements.txt
```
There’s a lot to take in here so let’s go step by step.
```dockerfile
FROM python:alpine3.6
```
- Specify the base image in the name:tag format. In this case, it’s the Alpine distribution containing Python 3.6.
```dockerfile
EXPOSE 5000
```
- Have the container listen on port 5000. That does not mean this port will be available for communication outside the container – it has to be published separately. More on that soon.
```dockerfile
ENV FLASK_ENV=development
ENV FLASK_APP=/api/api.py
```
- Set environmental variables consumed by the code.
```dockerfile
CMD ["flask", "run", "--host=0.0.0.0"]
```
- The default command executed when the container is running.
```dockerfile
COPY ./api /api
```
- Copy the source code directory from the host to the image.
```dockerfile
RUN pip3 install -r /api/requirements.txt
```
- Install the dependencies inside the container.
To build the newly defined image, we will use docker build:

```
docker build -t api:latest .
```
```
jakub:~/dev/docker-demo/01-single-container$ docker build -t api:latest .
Sending build context to Docker daemon  4.608kB
Step 1/7 : FROM python:alpine3.6
alpine3.6: Pulling from library/python
605ce1bd3f31: Pull complete
55018be3009c: Pull complete
04cbc77bcb89: Pull complete
3a765a92b253: Pull complete
c704f41e2979: Pull complete
Digest: sha256:2e2b36d517371ae8e5954ddeb557dca0d236de14734b03bd5d4a53069ba4e637
Status: Downloaded newer image for python:alpine3.6
 ---> 08d365ef6f23
Step 2/7 : EXPOSE 5000
 ---> Running in 1be019e3540f
Removing intermediate container 1be019e3540f
 ---> 908124e3cbe3
Step 3/7 : ENV FLASK_ENV=development
 ---> Running in 749e0457771b
Removing intermediate container 749e0457771b
 ---> 1409475bda5e
Step 4/7 : ENV FLASK_APP=/api/api.py
 ---> Running in 52ccf914d98a
Removing intermediate container 52ccf914d98a
 ---> a4af46f27885
Step 5/7 : CMD ["flask", "run", "--host=0.0.0.0"]
 ---> Running in 42138eb88d7f
Removing intermediate container 42138eb88d7f
 ---> 6a5ec9dd6d94
Step 6/7 : COPY ./api /api
 ---> d144346d8ef9
Step 7/7 : RUN pip3 install -r /api/requirements.txt
 ---> Running in fb1c31e24689
Collecting Flask==1.0.2 (from -r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB)
Collecting Werkzeug>=0.14 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
Collecting Jinja2>=2.10 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
Collecting itsdangerous>=0.24 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz (46kB)
Collecting click>=5.1 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz
Building wheels for collected packages: itsdangerous, MarkupSafe
  Running setup.py bdist_wheel for itsdangerous: started
  Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/2c/4a/61/5599631c1554768c6290b08c02c72d7317910374ca602ff1e5
  Running setup.py bdist_wheel for MarkupSafe: started
  Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c612fffe1c5a632146b16596f9e64676768661e4e46
Successfully built itsdangerous MarkupSafe
Installing collected packages: Werkzeug, MarkupSafe, Jinja2, itsdangerous, click, Flask
Successfully installed Flask-1.0.2 Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 click-6.7 itsdangerous-0.24
Removing intermediate container fb1c31e24689
 ---> b9c17651e402
Successfully built b9c17651e402
Successfully tagged api:latest
```
The -t flag allows you to specify the name and tag for the new image. Once the image is built, it will appear on the list of your images:
```
docker images
```
```
jakub:~/dev/docker-demo/01-single-container$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
api                 latest              b9c17651e402        About a minute ago   95.4MB
python              alpine3.6           08d365ef6f23        5 weeks ago          84.9MB
hello-world         latest              e38bc07ac18e        6 weeks ago          1.85kB
alpine              latest              3fd9065eaf02        4 months ago         4.15MB
```
You can see it’s using the name and tag specified. The order of instructions in the Dockerfile might seem confusing at first: it might appear as if the application is run before its dependencies are installed. That’s not the case, though; the entry command specified by CMD is not executed until the container is started. Additionally, ordering the commands this way takes advantage of the image build cache. Each build step is cached, so that if a line in the Dockerfile is changed, only it and the lines following it are re-evaluated. Rebuilding the image without any changes results in all steps being served from the cache.
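The invalidation rule can be illustrated with a toy model: treat each layer ID as a hash of the parent layer plus the instruction text, so changing one instruction changes that layer’s ID and, transitively, every later one. This is a simplified sketch, not Docker’s actual cache-key algorithm:

```python
# Toy model of build-cache invalidation: each layer's ID is derived
# from its parent layer and the instruction text, so a change to one
# step invalidates that step and every step after it.
import hashlib

def layer_ids(instructions):
    """Compute a chained ID per instruction; each depends on its parent."""
    parent, ids = 'scratch', []
    for instruction in instructions:
        parent = hashlib.sha256((parent + instruction).encode()).hexdigest()[:12]
        ids.append(parent)
    return ids

v1 = layer_ids(['FROM python:alpine3.6', 'COPY ./api /api', 'RUN pip3 install -r /api/requirements.txt'])
v2 = layer_ids(['FROM python:alpine3.6', 'COPY ./api2 /api', 'RUN pip3 install -r /api/requirements.txt'])

print(v1[0] == v2[0])  # True: unchanged step, cache hit
print(v1[1] == v2[1])  # False: this step changed
print(v1[2] == v2[2])  # False: identical text, but its parent changed
```

This is why the Dockerfile copies the source code and installs dependencies last: frequent code changes only invalidate the final layers.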
```
docker build -t api:latest .
```
```
jakub:~/dev/docker-demo/01-single-container$ docker build -t api:latest .
Sending build context to Docker daemon  4.608kB
Step 1/7 : FROM python:alpine3.6
 ---> 08d365ef6f23
Step 2/7 : EXPOSE 5000
 ---> Using cache
 ---> 908124e3cbe3
Step 3/7 : ENV FLASK_ENV=development
 ---> Using cache
 ---> 1409475bda5e
Step 4/7 : ENV FLASK_APP=/api/api.py
 ---> Using cache
 ---> a4af46f27885
Step 5/7 : CMD ["flask", "run", "--host=0.0.0.0"]
 ---> Using cache
 ---> 6a5ec9dd6d94
Step 6/7 : COPY ./api /api
 ---> Using cache
 ---> d144346d8ef9
Step 7/7 : RUN pip3 install -r /api/requirements.txt
 ---> Using cache
 ---> b9c17651e402
Successfully built b9c17651e402
Successfully tagged api:latest
```
Running the Service Inside the Container
While running the container using docker run api does start the Flask service, it won’t work as expected.
```
jakub:~/dev/docker-demo/01-single-container$ docker run --rm --name api api
 * Serving Flask app "/api/api.py" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 191-351-897
```
That’s because the image is configured to listen on port 5000, but the port hasn’t been forwarded to the host. In order to make the port available on the host, it has to be published:
```
docker run --rm --name api -p 8082:5000 api
```
```
jakub:~/dev/docker-demo/01-single-container$ docker run --rm --name api -p 8082:5000 api
 * Serving Flask app "/api/api.py" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 191-351-897
172.17.0.1 - - [29/May/2018 14:38:15] "GET / HTTP/1.1" 200 -
```
```
jakub:~/dev/docker-demo/01-single-container$ curl http://0.0.0.0:8082/
Hello, World!
```
This forwards the host’s port 8082 to the container’s port 5000. You can see the port forwarding configuration of a container using docker port.
```
docker port container_id/container_name
```
```
jakub:~/dev/docker-demo/01-single-container$ docker port api
5000/tcp -> 0.0.0.0:8082
```
So far, we’ve been starting the container in the foreground, so closing the terminal window would stop it. If you want the container to run in the background without occupying one of your terminals, you can run it in detached mode, with the -d flag.
```
docker run --rm --name api -p 8082:5000 -d api
docker ps
docker logs container_id/container_name
```
```
jakub:~/dev/docker-demo/01-single-container$ docker run --rm --name api -p 8082:5000 -d api
9db82080612b8314cf8935a75450cd2b0a244da3e21361edb90c0b3e10a4b14a
jakub:~/dev/docker-demo/01-single-container$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
9db82080612b        api                 "flask run --host=0.…"   8 seconds ago       Up 7 seconds        0.0.0.0:8082->5000/tcp   api
jakub:~/dev/docker-demo/01-single-container$ docker logs api
 * Serving Flask app "/api/api.py" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 191-351-897
```
docker stop can then be used to stop the server and bring the container down.
```
docker stop container_id/container_name
docker ps
```
```
jakub:~/dev/docker-demo/01-single-container$ docker stop api
api
jakub:~/dev/docker-demo/01-single-container$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```
Working with Multiple Containers
Most web services depend on a database, so let’s add one to this project.
```
docker run --rm --name postgres_db -p 5435:5432 -e POSTGRES_PASSWORD=postgrespassword -d postgres
```
```
jakub:~/dev/docker-demo/01-single-container$ docker run --rm --name postgres_db -p 5435:5432 -e POSTGRES_PASSWORD=postgrespassword -d postgres
Unable to find image 'postgres:latest' locally
latest: Pulling from library/postgres
f2aa67a397c4: Pull complete
8218dd41bf94: Pull complete
e9b7fa2e6bd8: Pull complete
7288a45ee17f: Pull complete
0d0f8a67376c: Pull complete
972b115243de: Pull complete
d38528c83dd1: Pull complete
9be166d23dee: Pull complete
12015b5ceae7: Pull complete
363876c09ce9: Pull complete
b810ba8b2ac0: Pull complete
e1ee11d636cf: Pull complete
50d32813cba1: Pull complete
4f0109485c03: Pull complete
Digest: sha256:1acf72239c685322579be2116dc54f8a25fc4523882df35171229c9fee3b3b17
Status: Downloaded newer image for postgres:latest
fd2c38057d5f8db0b407736af70df940b11a86786af9af546caabc75eed58dcc
```
The default postgres image is downloaded and started in detached mode. The default postgres port is forwarded to 5435 on the host (the -p flag), and the POSTGRES_PASSWORD environment variable is set inside the container (the -e flag), which is used as the database password. Let’s verify the database is running correctly.
```
docker logs --tail 20 postgres_db
```
```
jakub:~/dev/docker-demo/02-multiple-containers$ docker logs --tail 20 postgres_db
ALTER ROLE


/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*

waiting for server to shut down...2018-05-29 14:43:42.323 UTC [43] LOG:  received fast shutdown request
.2018-05-29 14:43:42.324 UTC [43] LOG:  aborting any active transactions
2018-05-29 14:43:42.325 UTC [43] LOG:  worker process: logical replication launcher (PID 50) exited with exit code 1
2018-05-29 14:43:42.325 UTC [45] LOG:  shutting down
2018-05-29 14:43:42.379 UTC [43] LOG:  database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

2018-05-29 14:43:42.434 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2018-05-29 14:43:42.434 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2018-05-29 14:43:42.449 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2018-05-29 14:43:42.474 UTC [61] LOG:  database system was shut down at 2018-05-29 14:43:42 UTC
2018-05-29 14:43:42.485 UTC [1] LOG:  database system is ready to accept connections
```
```
PGPASSWORD=postgrespassword psql -h localhost -p 5435 -U postgres -c '\l'
```
```
jakub:~/dev/docker-demo/02-multiple-containers$ PGPASSWORD=postgrespassword psql -h localhost -p 5435 -U postgres -c '\l'
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(3 rows)
```
With the database running as expected, let’s update the application to connect to the database and define our database models. We’re going to use SQLAlchemy as the ORM and the Flask-SQLAlchemy library to make the integration easier.
```python
from datetime import datetime
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres://postgres:postgrespassword@postgres_db:5432/postgres'
db = SQLAlchemy(app)

class RequestLog(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    timestamp = db.Column(db.DateTime, nullable=False)

db.drop_all()
db.create_all()

@app.route('/')
def hello_world():
    # Log the request.
    request = RequestLog(timestamp=datetime.utcnow())
    db.session.add(request)
    db.session.commit()

    return f'Hello, World! Your request ID is {request.id}\n'
```
```
Flask==1.0.2
Flask-SQLAlchemy==2.3.2
psycopg2==2.7.4
```
The updated web server logs the time of each request and saves it with a numeric ID. Let’s look at the changes in detail:
```python
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres://postgres:postgrespassword@postgres_db:5432/postgres'
db = SQLAlchemy(app)
```
When starting the server, we configure the database connection. It uses the password passed earlier to the postgres image, and it connects to the postgres_db host, which is the other container.
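The connection string follows the standard URL layout, scheme://user:password@host:port/database, and parsing it with the standard library makes each component explicit. Note that the port is 5432, the port inside the container network, not the 5435 we published to the host:

```python
# Break the SQLALCHEMY_DATABASE_URI from the code above into its parts
# to show how it maps onto the two-container setup.
from urllib.parse import urlsplit

uri = 'postgres://postgres:postgrespassword@postgres_db:5432/postgres'
parts = urlsplit(uri)

print(parts.hostname)           # postgres_db: the other container's name
print(parts.port)               # 5432: the in-container port, not host port 5435
print(parts.path.lstrip('/'))   # postgres: the database name
```

The host-side port 5435 only matters for tools running on the host, such as the psql check above; container-to-container traffic uses the container's own port.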
```python
class RequestLog(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    timestamp = db.Column(db.DateTime, nullable=False)
```
A simple request logging model – it stores an incremental ID of the request and the time of the request.
```python
db.drop_all()
db.create_all()
```
When starting the service, create tables on the database for our models.
```python
@app.route('/')
def hello_world():
    # Log the request.
    request = RequestLog(timestamp=datetime.utcnow())
    db.session.add(request)
    db.session.commit()

    return f'Hello, World! Your request ID is {request.id}\n'
```
Now every incoming request will be saved to the database, and the returned message will contain the request’s database ID. The updated source code is available on GitHub. With the updates in place, the API image needs to be rebuilt.
```
docker build -t api:latest .
```
```
jakub:~/dev/docker-demo/02-multiple-containers$ docker build -t api:latest .
Sending build context to Docker daemon  5.12kB
Step 1/8 : FROM python:alpine3.6
 ---> 08d365ef6f23
Step 2/8 : EXPOSE 5000
 ---> Using cache
 ---> 908124e3cbe3
Step 3/8 : ENV FLASK_ENV=development
 ---> Using cache
 ---> 1409475bda5e
Step 4/8 : ENV FLASK_APP=/api/api.py
 ---> Using cache
 ---> a4af46f27885
Step 5/8 : CMD ["flask", "run", "--host=0.0.0.0"]
 ---> Using cache
 ---> 6a5ec9dd6d94
Step 6/8 : RUN apk update && apk add --virtual build-deps gcc python3-dev musl-dev && apk add postgresql-dev
 ---> Running in d6dfd875db6c
fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.6/community/x86_64/APKINDEX.tar.gz
v3.6.2-317-g2ee6af5577 [http://dl-cdn.alpinelinux.org/alpine/v3.6/main]
v3.6.2-305-gbd91af380d [http://dl-cdn.alpinelinux.org/alpine/v3.6/community]
OK: 8442 distinct packages available
(1/16) Installing binutils-libs (2.28-r3)
(2/16) Installing binutils (2.28-r3)
(3/16) Installing gmp (6.1.2-r0)
(4/16) Installing isl (0.17.1-r0)
(5/16) Installing libgomp (6.3.0-r4)
(6/16) Installing libatomic (6.3.0-r4)
(7/16) Installing pkgconf (1.3.7-r0)
(8/16) Installing libgcc (6.3.0-r4)
(9/16) Installing mpfr3 (3.1.5-r0)
(10/16) Installing mpc1 (1.0.3-r0)
(11/16) Installing libstdc++ (6.3.0-r4)
(12/16) Installing gcc (6.3.0-r4)
(13/16) Installing python3 (3.6.1-r3)
(14/16) Installing python3-dev (3.6.1-r3)
(15/16) Installing musl-dev (1.1.16-r14)
(16/16) Installing build-deps (0)
Executing busybox-1.26.2-r9.trigger
OK: 192 MiB in 51 packages
(1/10) Upgrading libressl2.5-libcrypto (2.5.5-r0 -> 2.5.5-r1)
(2/10) Upgrading libressl2.5-libssl (2.5.5-r0 -> 2.5.5-r1)
(3/10) Installing libressl2.5-libtls (2.5.5-r1)
(4/10) Installing libressl-dev (2.5.5-r1)
(5/10) Installing db (5.3.28-r0)
(6/10) Installing libsasl (2.1.26-r10)
(7/10) Installing libldap (2.4.44-r5)
(8/10) Installing libpq (9.6.9-r0)
(9/10) Installing postgresql-libs (9.6.9-r0)
(10/10) Installing postgresql-dev (9.6.9-r0)
Executing busybox-1.26.2-r9.trigger
OK: 217 MiB in 59 packages
Removing intermediate container d6dfd875db6c
 ---> 7fea1f903ce9
Step 7/8 : COPY ./api /api
 ---> f15cd36f4885
Step 8/8 : RUN pip3 install -r /api/requirements.txt
 ---> Running in c5519535071c
Collecting Flask==1.0.2 (from -r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB)
Collecting Flask-SQLAlchemy==2.3.2 (from -r /api/requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/a1/44/294fb7f6bf49cc7224417cd0637018db9fee0729b4fe166e43e2bbb1f1c8/Flask_SQLAlchemy-2.3.2-py2.py3-none-any.whl
Collecting psycopg2==2.7.4 (from -r /api/requirements.txt (line 3))
  Downloading https://files.pythonhosted.org/packages/74/83/51580322ed0e82cba7ad8e0af590b8fb2cf11bd5aaa1ed872661bd36f462/psycopg2-2.7.4.tar.gz (425kB)
Collecting Jinja2>=2.10 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
Collecting itsdangerous>=0.24 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz (46kB)
Collecting click>=5.1 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB)
Collecting Werkzeug>=0.14 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
Collecting SQLAlchemy>=0.8.0 (from Flask-SQLAlchemy==2.3.2->-r /api/requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/b4/9c/411a9bac1a471bed54ec447dc183aeed12a75c1b648307e18b56e3829363/SQLAlchemy-1.2.8.tar.gz (5.6MB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz
Building wheels for collected packages: psycopg2, itsdangerous, SQLAlchemy, MarkupSafe
  Running setup.py bdist_wheel for psycopg2: started
  Running setup.py bdist_wheel for psycopg2: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/43/ff/71/a0b0d6dbf71f912b95cf18101bca206b40eed5086d8fdb4ed9
  Running setup.py bdist_wheel for itsdangerous: started
  Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/2c/4a/61/5599631c1554768c6290b08c02c72d7317910374ca602ff1e5
  Running setup.py bdist_wheel for SQLAlchemy: started
  Running setup.py bdist_wheel for SQLAlchemy: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/df/fc/61/df2f43ec3f11f864554bdc006a866a3ffffa59740bcf3674ef
  Running setup.py bdist_wheel for MarkupSafe: started
  Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c612fffe1c5a632146b16596f9e64676768661e4e46
Successfully built psycopg2 itsdangerous SQLAlchemy MarkupSafe
Installing collected packages: MarkupSafe, Jinja2, itsdangerous, click, Werkzeug, Flask, SQLAlchemy, Flask-SQLAlchemy, psycopg2
Successfully installed Flask-1.0.2 Flask-SQLAlchemy-2.3.2 Jinja2-2.10 MarkupSafe-1.0 SQLAlchemy-1.2.8 Werkzeug-0.14.1 click-6.7 itsdangerous-0.24 psycopg2-2.7.4
Removing intermediate container c5519535071c
 ---> 0d12b12c1582
Successfully built 0d12b12c1582
Successfully tagged api:latest
```
Thanks to the build cache, only the last two steps of the Dockerfile had to be executed: the Python source files were copied over and the dependencies were installed. By default, Docker containers can only communicate with the host, not with each other; allowing them to talk to each other requires additional configuration. As long as the postgres_db container is running first, the API can be started with a link to it, allowing it to resolve database connections. With the link configured, we can see that the inter-container communication works correctly.
```
docker run --rm --name api -p 8082:5000 --link postgres_db:postgres_db -d api
curl localhost:8082
docker logs api
docker stop api postgres_db
```
```
jakub:~/dev/docker-demo/02-multiple-containers$ docker run --rm --name api -p 8082:5000 --link postgres_db:postgres_db -d api
b112118717e1cc9599f2f2a7285f87f9914e293b5ee8b01defe7711d67257f5c
jakub:~/dev/docker-demo/02-multiple-containers$ curl localhost:8082
Hello, World! Your request ID is 1
jakub:~/dev/docker-demo/02-multiple-containers$ curl localhost:8082
Hello, World! Your request ID is 2
jakub:~/dev/docker-demo/02-multiple-containers$ curl localhost:8082
Hello, World! Your request ID is 3
jakub:~/dev/docker-demo/02-multiple-containers$ docker logs api
 * Serving Flask app "/api/api.py" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 221-683-573
172.17.0.1 - - [29/May/2018 14:52:25] "GET / HTTP/1.1" 200 -
172.17.0.1 - - [29/May/2018 14:52:26] "GET / HTTP/1.1" 200 -
172.17.0.1 - - [29/May/2018 14:52:27] "GET / HTTP/1.1" 200 -
jakub:~/dev/docker-demo/02-multiple-containers$ docker stop api postgres_db
api
postgres_db
```
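The api.py file itself isn't reproduced here, but a minimal sketch of what such an endpoint could look like helps make the output above concrete. Note this is an illustration, not the article's actual source: the real app persists its request counter in Postgres via Flask-SQLAlchemy, whereas this sketch uses a plain in-memory integer to stay self-contained.

```python
from flask import Flask

app = Flask(__name__)

# Illustrative stand-in for the Postgres-backed counter used in the article.
request_count = 0

@app.route("/")
def hello():
    """Return a greeting along with an incrementing request ID."""
    global request_count
    request_count += 1
    return f"Hello, World! Your request ID is {request_count}\n"

if __name__ == "__main__":
    # Matches the container's CMD: flask run --host=0.0.0.0
    app.run(host="0.0.0.0", port=5000)
```

Because the counter lives in process memory here, it resets whenever the dev server restarts; the Postgres-backed version survives restarts of the API container.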
It might seem that the advantages of using containers are outweighed by the cumbersome setup: the containers have to be started individually, in the correct order, and explicitly linked before they will work together.
That’s where Docker Compose comes in.
Meet Docker Compose
Docker Compose is a tool for running multi-container Docker applications. While it requires some additional configuration (in the form of a docker-compose.yaml file describing the application's services), multiple containers can then be built and run with a single command. Docker Compose is not a replacement for the Docker command-line client, but an abstraction layer on top of it. Our docker-compose.yaml file will contain the definitions of the API and the database services.
```yaml
version: '3.1'

services:

  postgres_db:
    container_name: postgres_db
    image: postgres
    ports:
      - 5435:5432
    environment:
      - POSTGRES_PASSWORD=postgrespassword
    healthcheck:
      test: exit 0

  api:
    container_name: api
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8082:5000
    depends_on:
      - postgres_db
```
The version directive specifies which version of the Docker Compose syntax we're using. It's important to provide it, as there are non-backwards-compatible changes between versions; you can read more about this in the official documentation. The services section describes the containers we will be running. The postgres_db definition should look familiar, as it contains the arguments that were previously passed to docker run:
```
docker run --rm -p 5435:5432 -e POSTGRES_PASSWORD=postgrespassword --name postgres_db -d postgres
```
The advantage of storing them in the docker-compose.yaml file is that you won't have to remember them when you start the container. The postgres_db service uses a public image instead of a local Dockerfile. The opposite is the case for the api service, so we provide the build context (the path to the Dockerfile) and the Dockerfile to build the image from. The configuration also ensures that the api container is started after postgres_db, since the former depends on the latter.
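It's worth noting that depends_on only controls start order: it does not wait until Postgres is actually ready to accept connections. A common workaround, which is not part of the article's code, is a small retry loop at application startup. In this sketch, connect is a placeholder for any zero-argument callable that raises on failure, e.g. a lambda wrapping SQLAlchemy's engine.connect():

```python
import time

def wait_for_db(connect, attempts=10, delay=1.0):
    """Retry `connect` until it succeeds or the attempts run out.

    `connect` is any zero-argument callable that raises on failure;
    the last failure is re-raised once all attempts are exhausted.
    """
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception:
            if attempt == attempts:
                raise
            time.sleep(delay)
```

Calling something like this before creating tables or serving requests makes the API tolerant of the database container starting slowly.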
With the services defined and their run-time configuration captured in the docker-compose.yaml file, both containers can be built and started with a single command.
```
docker-compose build
docker-compose up -d
curl localhost:8082
docker-compose down
```
```
jakub:~/dev/docker-demo/03-using-compose$ docker-compose build
postgres_db uses an image, skipping
Building api
Step 1/8 : FROM python:alpine3.6
 ---> 08d365ef6f23
Step 2/8 : EXPOSE 5000
 ---> Using cache
 ---> 908124e3cbe3
Step 3/8 : ENV FLASK_ENV=development
 ---> Using cache
 ---> 1409475bda5e
Step 4/8 : ENV FLASK_APP=/api/api.py
 ---> Using cache
 ---> a4af46f27885
Step 5/8 : CMD ["flask", "run", "--host=0.0.0.0"]
 ---> Using cache
 ---> 6a5ec9dd6d94
Step 6/8 : RUN apk update && apk add --virtual build-deps gcc python3-dev musl-dev && apk add postgresql-dev
 ---> Using cache
 ---> 7fea1f903ce9
Step 7/8 : COPY ./api /api
 ---> 74cac33aee4b
Step 8/8 : RUN pip3 install -r /api/requirements.txt
 ---> Running in 5c2266489bf2
Collecting Flask==1.0.2 (from -r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB)
Collecting Flask-SQLAlchemy==2.3.2 (from -r /api/requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/a1/44/294fb7f6bf49cc7224417cd0637018db9fee0729b4fe166e43e2bbb1f1c8/Flask_SQLAlchemy-2.3.2-py2.py3-none-any.whl
Collecting psycopg2==2.7.4 (from -r /api/requirements.txt (line 3))
  Downloading https://files.pythonhosted.org/packages/74/83/51580322ed0e82cba7ad8e0af590b8fb2cf11bd5aaa1ed872661bd36f462/psycopg2-2.7.4.tar.gz (425kB)
Collecting click>=5.1 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB)
Collecting itsdangerous>=0.24 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz (46kB)
Collecting Jinja2>=2.10 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
Collecting Werkzeug>=0.14 (from Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
Collecting SQLAlchemy>=0.8.0 (from Flask-SQLAlchemy==2.3.2->-r /api/requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/b4/9c/411a9bac1a471bed54ec447dc183aeed12a75c1b648307e18b56e3829363/SQLAlchemy-1.2.8.tar.gz (5.6MB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask==1.0.2->-r /api/requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz
Building wheels for collected packages: psycopg2, itsdangerous, SQLAlchemy, MarkupSafe
  Running setup.py bdist_wheel for psycopg2: started
  Running setup.py bdist_wheel for psycopg2: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/43/ff/71/a0b0d6dbf71f912b95cf18101bca206b40eed5086d8fdb4ed9
  Running setup.py bdist_wheel for itsdangerous: started
  Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/2c/4a/61/5599631c1554768c6290b08c02c72d7317910374ca602ff1e5
  Running setup.py bdist_wheel for SQLAlchemy: started
  Running setup.py bdist_wheel for SQLAlchemy: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/df/fc/61/df2f43ec3f11f864554bdc006a866a3ffffa59740bcf3674ef
  Running setup.py bdist_wheel for MarkupSafe: started
  Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c612fffe1c5a632146b16596f9e64676768661e4e46
Successfully built psycopg2 itsdangerous SQLAlchemy MarkupSafe
Installing collected packages: click, itsdangerous, MarkupSafe, Jinja2, Werkzeug, Flask, SQLAlchemy, Flask-SQLAlchemy, psycopg2
Successfully installed Flask-1.0.2 Flask-SQLAlchemy-2.3.2 Jinja2-2.10 MarkupSafe-1.0 SQLAlchemy-1.2.8 Werkzeug-0.14.1 click-6.7 itsdangerous-0.24 psycopg2-2.7.4
Removing intermediate container 5c2266489bf2
 ---> 53add68d9400
Successfully built 53add68d9400
Successfully tagged 03-using-compose_api:latest
jakub:~/dev/docker-demo/03-using-compose$ docker-compose up -d
Creating network "03-using-compose_default" with the default driver
Creating postgres_db ... done
Creating api ... done
jakub:~/dev/docker-demo/03-using-compose$ docker-compose ps
    Name                  Command                   State                   Ports
--------------------------------------------------------------------------------------------
api           flask run --host=0.0.0.0        Up                      0.0.0.0:8082->5000/tcp
postgres_db   docker-entrypoint.sh postgres   Up (health: starting)   0.0.0.0:5435->5432/tcp
jakub:~/dev/docker-demo/03-using-compose$ curl localhost:8082
Hello, World! Your request ID is 1
jakub:~/dev/docker-demo/03-using-compose$ curl localhost:8082
Hello, World! Your request ID is 2
jakub:~/dev/docker-demo/03-using-compose$ curl localhost:8082
Hello, World! Your request ID is 3
jakub:~/dev/docker-demo/03-using-compose$ docker-compose down
Stopping api ... done
Stopping postgres_db ... done
Removing api ... done
Removing postgres_db ... done
Removing network 03-using-compose_default
```
Docker Compose for Local Development
In the Dockerfile we copied the source code from the host machine into the API container, so any changes made locally are not picked up until the image is rebuilt. To avoid having to rebuild the image every time the application code is updated, it's possible to mount a local directory inside the container, so that modifications in one environment are immediately visible in the other.
That change, applied to the docker-compose.yaml file, would work great for development, but it's not a configuration that would be welcome in production. In production, you want to avoid having any way to circumvent the release process and edit a running application in situ. Fortunately, there's no need to duplicate the entire docker-compose.yaml file for each environment: a docker-compose.override.yaml file allows you to compose two files, one acting as the base and the other overlaying modifications on top of it. In this case, the only modification we want locally is to mount the source code directory inside the container.
```yaml
version: '3.1'

services:

  api:
    volumes:
      - ./api/:/api/
```
You can find the changes on GitHub. When running docker-compose now, the API service will use the configuration from the docker-compose.yaml file, with values from docker-compose.override.yaml taking precedence over duplicates. Once the API service is started, the Flask dev server monitors the code for changes and, whenever it detects any, restarts inside the container. It's worth pointing out that changes to the compose or override files themselves will still require the images to be rebuilt.
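The precedence rule can be pictured as a recursive dictionary merge in which the override's values win on conflicts. The sketch below is a simplification of Compose's actual merge logic, which additionally concatenates or merges some list-valued options (ports, volumes, and so on), and the sample dicts are illustrative:

```python
def merge_compose(base, override):
    """Recursively merge two compose-style dicts; override wins on conflicts.

    Simplified model: real docker-compose also merges some list-valued
    options (e.g. volumes by mount path) rather than replacing them.
    """
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_compose(merged[key], value)
        else:
            merged[key] = value
    return merged

# Illustrative fragments mirroring the two files used in this section.
base = {"services": {"api": {"build": {"context": "."}, "ports": ["8082:5000"]}}}
override = {"services": {"api": {"volumes": ["./api/:/api/"]}}}
```

Merging these two fragments leaves the build context and port mapping from the base file intact and adds the volume mount from the override.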
```
docker-compose up -d
docker logs -f api
curl localhost:8082
```
```
jakub:~/dev/docker-demo/04-compose-override$ docker-compose up -d
Creating network "04-compose-override_default" with the default driver
Building api
[...]
Successfully built d4f0d4739d3b
Successfully tagged 04-compose-override_api:latest
WARNING: Image for service api was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating postgres_db ... done
Creating api ... done
jakub:~/dev/docker-demo/04-compose-override$ docker logs -f api
 * Serving Flask app "/api/api.py" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 229-906-818
172.18.0.1 - - [29/May/2018 15:14:47] "GET / HTTP/1.1" 200 -
172.18.0.1 - - [29/May/2018 15:14:48] "GET / HTTP/1.1" 200 -
172.18.0.1 - - [29/May/2018 15:14:48] "GET / HTTP/1.1" 200 -
 * Detected change in '/api/api.py', reloading
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 229-906-818
172.18.0.1 - - [29/May/2018 15:15:03] "GET / HTTP/1.1" 200 -
172.18.0.1 - - [29/May/2018 15:15:04] "GET / HTTP/1.1" 200 -
```
```
jakub:~/dev/docker-demo/04-compose-override$ curl http://0.0.0.0:8082/
Hello, World! Your request ID is 1
jakub:~/dev/docker-demo/04-compose-override$ curl http://0.0.0.0:8082/
Hello, World! Your request ID is 2
jakub:~/dev/docker-demo/04-compose-override$ curl http://0.0.0.0:8082/
Hello, World! Your request ID is 3
jakub:~/dev/docker-demo/04-compose-override$ sed -i 's/World/Docker/g' api/api.py
jakub:~/dev/docker-demo/04-compose-override$ curl http://0.0.0.0:8082/
Hello, Docker! Your request ID is 1
jakub:~/dev/docker-demo/04-compose-override$ curl http://0.0.0.0:8082/
Hello, Docker! Your request ID is 2
```
Where to Go From Here?
If you’ve enjoyed this experience with Docker for Linux, there are plenty of ways to further your knowledge. The official Docker website is a goldmine of information: you can familiarise yourself with the guides, or learn in depth about the Docker client commands and config file syntax. Once you’ve quenched your thirst for Docker knowledge, you can move on to exploring the world of container orchestration frameworks, which work on top of container technologies such as Docker and help automate container deployment, scaling and management tasks. The frameworks you might want to read about are:
- Kubernetes
- Docker Swarm
- Amazon ECS
- Azure Container Service
- Google Container Engine
Conclusion
So there you go. Docker is a fast and consistent way to accelerate and automate the shipping of software. It saves developers from having to set up multiple development environments each time they test and deploy code. That time can then be spent developing quality software instead.
Hopefully, this article has sparked your interest in Docker for Linux. I would love to hear where you take this new knowledge and what you think about Docker. So feel free to comment below.