Containers are a win: Using Docker and docker-compose
Published 2018 Sep 20, 18:18
This is the second part in a multi-part series about the path to containerized services. You can read the prologue article for an overview of what the project behind this series set out to accomplish and a high-level look at how we got to the finish line.
This article assumes you have at least basic knowledge of Docker and docker-compose.
Docker good, docker-compose better
For small projects where you don’t need a collection of services that speak to each other, Docker is great. It’s all you really need to get a reliable app environment up and running and deployed. But most projects, especially those for businesses, are not isolated applications; they need to communicate with other services.
At some point you will almost definitely need to have at least one service
that needs to communicate with another service. Most commonly that would be an
application interacting with a database. So at a minimum you now have two
containers to run: your main application and your database. You could use
docker build and
docker run to launch both containers, but then you would
need to perform a little extra effort to ensure both containers can speak to
each other, especially since the IP address and exposed ports can change
depending on your configuration. Thankfully, you can bypass that manual effort with docker-compose.
Let’s look at a bare-bones Compose YAML file.
version: "3" services: web: build: ./ ports: - "4000" depends_on: - database database: image: database-service volumes: - "pgdb:/var/run/postgres"
The version tells
docker-compose which file format version we are using and therefore
which syntax applies. Under that is where the magic happens. The
services property holds a list of named services that you want
docker-compose to be able to build, run, and manage. In this example we have
web for our main application and
database for our database.
You can think of a service as a container. The Compose YAML is creating a
“container definition” that can be used to easily build, rebuild, and run your containers.
The web service uses the
build property to create the container image. This
means that invoking
docker-compose build or
docker-compose up --build will
trigger the full build directive based on the details of the build property.
In our example,
build: ./ is synonymous with
docker build ./. You can use an
extended build format to specify the exact details more precisely:
build:
  context: docker/custom/
  dockerfile: Dockerfile.custom
The context property refers to the build context, which is the root folder
that the build will be executed from. The
dockerfile property allows you to
specify custom Dockerfiles for builds. The above extended example is synonymous with
docker build -f <dockerfile> <context>.
In contrast, the
database service uses the
image property. This means that
“building” the service is synonymous with
docker pull. If the image exists
locally, it is used as-is. If the image does not exist locally, it will use
docker pull to obtain it. This is useful for services that you don’t change
frequently or don’t have control over. You wouldn’t, for example, want to
rebuild Postgres frequently because you aren’t modifying the Postgres build,
just the data that it stores, and that’s something you handle outside of the
build flow (explained in the “Volumes” section below).
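As a concrete sketch, the placeholder database-service image from the example could instead be a pinned public image (the tag here is illustrative):

database:
  image: postgres:10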
Exposing ports is easily done through the
ports property: just list which ports you want to
expose. In the example we have a list of only one port,
4000, but we could
expose multiple ports and even specify ranges (e.g. "4000-4005").
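As a sketch, a longer ports list might look like the following (the specific ports are illustrative). An entry with only a container port is published to an ephemeral host port, while the HOST:CONTAINER form publishes to a fixed host port:

ports:
  - "4000"
  - "8080:80"
  - "9000-9005"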
In the example, we use
depends_on to tell
docker-compose that our web service needs the
database to be running before it can be started. Notice that
we use the list format, because we can specify multiple dependencies. For
example, if we also were using memcached or Redis, we could create a service for
it and add that as another dependency. This is especially common for web
applications that use both a database and a caching service.
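For example, adding a Redis cache as a second dependency might look like this sketch (the cache service name and image tag are illustrative):

services:
  web:
    build: ./
    depends_on:
      - database
      - cache
  cache:
    image: redis:4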
While depends_on will determine the order in which services are
started, there is no guarantee placed on code execution within the container.
For example, the
database container could start up and attempt to run
postgres, but something stalls and it takes five minutes before the server is
actually running and available for incoming connections. In such a case,
docker-compose will not wait those five minutes before starting the web
container. This is because, according to
docker-compose, the container is up
and running so we’re good to move on to the next service. When creating a system
that requires interactions between external services such as this, retrying for
a connection should be the default behavior, but if your application can’t do
that then you will need to create a custom startup flow.
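One common custom flow is an entrypoint script that polls the database before launching the app. Here is a minimal sketch, assuming the web image has the Postgres pg_isready utility installed and that ./start-app (hypothetical) launches the application:

#!/bin/sh
# Keep polling until Postgres at the "database" service accepts connections.
until pg_isready -h database -p 5432; do
  echo "database is unavailable - retrying in 2 seconds"
  sleep 2
done
# Replace this shell with the application process.
exec ./start-app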
Volumes
Sometimes you want to persist data between builds/runs of your services. This is
typically achieved by mounting a volume, which means connecting a local file or
folder to a file or folder in your container. This is especially useful for
databases, and we can see the use of it with
volumes in the example.
The short format is
HOST:CONTAINER, so in our example we say that the local
pgdb folder should be mounted to
/var/run/postgres on the container.
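Newer revisions of the version 3 file format (3.2 and up) also support a long form for volume entries that spells out the same mount more explicitly; a sketch of the equivalent entry:

volumes:
  - type: bind
    source: ./pgdb
    target: /var/run/postgres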
pgdb will be available immediately to the
database service and
anything changed in
/var/run/postgres will be reflected in pgdb on the host.
Personally, I also use this when I’m programming in other environments at home.
If I need to program something in Python and I’m on my Windows desktop, I’ll
create a small
docker-compose file, even if it’s just a single service running
an environment with Python installed, and mount a volume. Now I can use editors
on my Windows machine (such as Atom or Visual Studio) to create and modify my
Python code files and have the changes reflected in a running container so that
I can immediately test and iterate on my development. It can also be a simple
method for dumping files from your container to your host, e.g. mounting a
folder with log data so that you can inspect and manipulate it from your host machine.
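A minimal sketch of such a setup, assuming the code lives in a local src folder and a hypothetical main.py entry point:

version: "3"
services:
  dev:
    image: python:3.7
    working_dir: /app
    volumes:
      - "./src:/app"
    command: python main.py

With this in place, docker-compose run dev executes the script in a one-off container against the current state of the mounted code, so every edit on the host is immediately testable.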
One thing that we did not need to do: expose the database container to the web
container. In this example, both containers are automatically connected using
their service name. For our web application we can use the hostname
database, for example, to connect to the database container;
docker-compose handles all the
messy details with that setup.
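For instance, a Postgres connection URL in the web application’s configuration might look like the following sketch, where the credentials and database name are placeholders and the hostname database resolves to the database service:

postgres://user:password@database:5432/appdb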
Coming up next is Automate All The Things! With Jenkins! You do not want to run repeatable tasks by hand. Not only do such tasks tend to be tedious and time-consuming, but they are also tremendously error-prone. Even when you’re running a simple script, things can go wrong, so it’s ideal to have a system that can run repeatable tasks based on triggers. Sometimes these triggers are events such as code commits, and other times they are simply scheduled to repeat at certain times. Although I’ll be talking about Jenkins specifically, the content should be useful regardless of which tool you end up using.