In our everyday life at Evermore, we have to support and maintain many different software products for our clients. Each project has its own needs and requirements, which usually means that we have to deal with applications based on different technology stacks, with different versions of tools and libraries. If you have to deal with the tech stacks of five or six different clients, it becomes really hard to juggle all the software versions needed to run every application.
Another pain point: every time a new colleague joins, one of us has to spend time, sometimes many hours, helping them set up a local environment for all the applications they will be assigned to. With each of us using a different operating system, this is even more complicated.
Every programmer has found themselves, at least once, in a situation where an issue is reproducible only in the production environment, but not locally. This is often caused by differences in the versions of the libraries and tools being used. This, and many other issues caused by running applications in non-isolated environments, is why we use Docker as our development environment.
Docker solves these problems in a very elegant way. It runs applications in isolated containers, which behave like lightweight VMs. Those of you familiar with VMs such as VirtualBox or VMware Workstation Player already know the RAM and CPU penalty a developer pays to run a single VM. And what if you need to run two of them, as in the case of Vagrant, whose idea is similar to Docker's?
Whereas a full VM virtualizes all the hardware, a container runs your application directly on the host operating system and hardware, but in a separate namespace. This way you get all the isolation, encapsulation and consistency benefits of a VM, but with very little overhead. You can read more about the Docker architecture here: https://docs.docker.com/engine/docker-overview/#the-docker-client.
From here on, I will assume that you are already familiar with Docker and the basic terminology, such as containers and images.
With the official release of Docker for Linux, and more recently for Mac and Windows, it is possible to completely containerize your development environment on those operating systems without using a virtual machine, as we had to with previous versions.
One of the cool things about Docker is that you can deploy a containerized application to different environments, such as test and production. That is outside the scope of this post, but you can find plenty of material on the topic. Using Docker as a development environment is another story.
The first thing you need to do, of course, is install Docker. You can do this by following the instructions for your operating system on Docker's website: https://www.docker.com/community-edition
In this post I will walk through an example with a simple application based on Rails and Postgres, but the same applies to everything else, regardless of the framework or language you use.
So let's start. The project has the standard file and directory layout of a Rails application. In the root of the project we place the first file required by Docker: the Dockerfile.
A Dockerfile could be created from scratch, but usually it is based on another image definition; in other words, it inherits from it. In most cases this is an operating system image like ubuntu:16.04 or alpine:3.4. In our example we will go a step further and base our Docker image on ruby:2.3.3, which is itself based on buildpack-deps:jessie. This way Docker hides from us all the complexity of installing Ruby and its dependencies. Why ruby:2.3.3? Because this is the exact Ruby version our application requires in production. All official Ruby images are listed here: https://hub.docker.com/_/ruby/
Docker uses the root user by default, which means that every file your application generates would be owned by root. To fix this, we create a dedicated user:
RUN useradd -u 1000 --create-home --home-dir /home/app_user --shell /bin/bash app_user
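The hard-coded `-u 1000` matches the UID of the first regular user on most Linux systems, which is what makes files written to mounted volumes show up as yours on the host. If your host UID differs, one option (a sketch; the `HOST_UID` argument name is my own, not from the original setup) is to make it a build argument:

```dockerfile
# Hypothetical variant: pass the host UID at build time so files
# created in mounted volumes are owned by you on the host.
ARG HOST_UID=1000
RUN useradd -u $HOST_UID --create-home --home-dir /home/app_user --shell /bin/bash app_user
```

You would then build with something like `docker build --build-arg HOST_UID=$(id -u) .`.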
Next we need to install system packages our application depends on, such as imagemagick:
RUN apt-get update -qq && \
    apt-get install -y build-essential nodejs imagemagick openjdk-7-jre unzip
As you noticed, we combine several commands into a single RUN. This is because each RUN creates a separate layer, which is cached. If the commands were split across several RUNs, changing one of them could leave another being served stale from the cache instead of re-executed. You can read more about this in the Dockerfile best practices: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run
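To make the caching pitfall concrete, compare these two variants: the first can install outdated packages because the `apt-get update` layer is reused from the cache, while the second always refreshes the package lists together with the install:

```dockerfile
# Anti-pattern: if only the install line changes, Docker reuses the
# cached "apt-get update" layer and may install outdated packages.
RUN apt-get update -qq
RUN apt-get install -y imagemagick

# Better: update and install in one layer, so changing the package
# list invalidates the cache for both commands at once.
RUN apt-get update -qq && apt-get install -y imagemagick
```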
With the following command we create the directory where the application will live. Later on, we will map it to a volume pointing at our working directory, i.e. the root of our project, which is also where our IDE is pointed.
RUN mkdir /var/www
Next, we switch to the newly created directory and copy in the files required to build the Rails application's dependencies.
WORKDIR /var/www
COPY Gemfile* /var/www
There is another command, called ADD, that is often used interchangeably. It basically does the same as COPY, but with slightly different behaviour: for example, it allows the source to be a URL or an archive.
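A quick sketch of what ADD can do that COPY cannot (the URL and archive names here are hypothetical):

```dockerfile
# ADD can fetch a remote file into the image...
ADD https://example.com/some-tool.tar.gz /tmp/some-tool.tar.gz

# ...and a *local* tar archive given to ADD is extracted automatically.
# (Remote URLs are NOT extracted, only copied.)
ADD vendor-libs.tar.gz /opt/vendor-libs/
```

Because of these extra behaviours, the official best practices recommend COPY whenever you just want to copy files.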
Now we add the command to the Dockerfile that starts the Rails dependency build process:
RUN bundle install
...followed by copying all of the project files into the same directory:
COPY . /var/www
With this line, our newly defined container is ready to be started. But as you may have noticed, we haven't said anything about the database yet. Here another tool provided by Docker comes into play: Docker Compose. Compose is a tool for defining and running multi-container Docker applications. Exactly! We will place the database in an additional container, and configure the web application container to connect to the database container through an internal network created by Docker.
By default, Docker Compose loads its configuration from a file called docker-compose.yml. We place this file in the same directory as the Dockerfile, i.e. the root of our Rails application.
The first line of this file specifies the Compose file version.
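In our case that is version 2 of the Compose file format:

```yaml
version: '2'
```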
Next we will define our containers (i.e. services) and the dependencies between them:
services:
  app:
    user: app_user
    build: .
    command: bundle exec rails s thin -p 3010 -b 0.0.0.0
    image: evermore/my-web-app
    container_name: my-web-app
    volumes:
      - .:/var/www
    ports:
      - "3010:3010"
    depends_on:
      - db
  db:
    image: postgres:9.6
    container_name: my-db
    volumes:
      - ../dumps:/var/dumps
    ports:
      - "5432:5432"
As you can see, we have two services: the first is our Rails application and the second is the database. depends_on specifies that the app service depends on the db service. It is important to note that depends_on will not wait for db to be "ready" before starting app; it only controls the startup order. You can read more about startup order here: https://docs.docker.com/compose/startup-order/
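If you need app to start only once Postgres actually accepts connections, one option is a healthcheck on the db service combined with a condition on depends_on. This is a sketch, assuming Compose file format version 2.1 or later, which is where health-based conditions were introduced:

```yaml
# requires: version: '2.1' (or later 2.x) at the top of the file
services:
  db:
    image: postgres:9.6
    healthcheck:
      # pg_isready reports whether the server is accepting connections
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  app:
    depends_on:
      db:
        condition: service_healthy
```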
Another cool thing is that Compose places both services on a shared internal network, where each service is reachable under its service name. So, in database.yml in our Rails application we can use the name of the database service as the database host name:
development:
  <<: *default
  database: my-db
  host: db
  username: postgres
image: postgres:9.6 is the name of the official Postgres Docker image, which can be found here: https://hub.docker.com/r/library/postgres/
ports: exposes the required ports to the host environment, so we can connect our IDE to the database and access the Rails web application from a browser on our host operating system.
volumes: mounts paths or named volumes. In the case of the app service, we mount the project root over the path where the application lives in the container, so we can edit the code and immediately see the changes. In the case of the db service, we mount a folder so we can import data dumps when needed.
Now we are ready to build our newly defined image by executing:
docker-compose build

It will take a while, but if the build process finishes without errors, you should have a new image, which you can see by executing:

docker images
If you want to delete the image and start over, you can run docker rmi and pass the image id:
docker rmi 969e9c9d5ea6
Once the build process is completed, we can start executing other commands against the new setup. We do this by specifying the service name, which in our case is app.
With the following commands, we will create a new database:
docker-compose run app rake db:create
docker-compose run app rake db:migrate
docker-compose run app rake db:seed
And finally, start the application by executing:

docker-compose up
If you now run docker ps, you should see both containers running under the names we gave them:
Below are some commands that I use in my everyday work:
To start a bash console in the web server container, use:
docker exec -ti my-web-app bash
...or to start the Rails console:
docker-compose run app rails c
Remember the volume we defined for the db container? We can use it to import a data dump into Postgres:
docker exec -i my-db pg_restore --verbose --no-acl --no-owner -U postgres -d my-db-name /var/dumps/my-db-dump
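Going the other way, creating a dump from the containerized database into the same mounted folder, might look like this (the dump file name is hypothetical; container and database names follow the example above):

```shell
# dump the database from the running db container in pg_restore's
# custom format, writing into the folder mounted at /var/dumps
docker exec -i my-db pg_dump -U postgres -Fc my-db-name > ../dumps/my-db-dump
```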
Web applications usually depend on external environment variables. You can pass them by adding an environment section to the docker-compose definition of the app service:
environment:
  - POSTGRES_HOST=db
  - POSTGRES_USER=postgres
  - POSTGRES_PASSWORD=postgres
  - POSTGRES_DB=my-db
Sometimes you may have an application that depends on another application defined in an external docker-compose file. In that case, one option is to define a common network and add it to the docker-compose.yml files of the applications you want to connect:
networks:
  default:
    external:
      name: shared-network
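Note that Compose does not create external networks itself; assuming the name above, you would create it once by hand before starting either project:

```shell
# create the shared network once; both compose projects can then join it
docker network create shared-network
```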
If you have a more complex start-up workflow than executing a single command, you can create a shell script, place it in the root of the project, and use it as the command:
command: bash -c "./docker-compose-entrypoint.sh"
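Such a script might, for example, clean up a stale Rails server pid file before booting. A sketch of what a hypothetical docker-compose-entrypoint.sh could contain:

```shell
#!/bin/bash
set -e

# a stale pid file from a previous run would prevent the server from booting
rm -f tmp/pids/server.pid

# exec replaces the shell, so the Rails server receives stop signals directly
exec bundle exec rails s -p 3010 -b 0.0.0.0
```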
And that's it. It really is that simple to dockerize your development environment and isolate it from the rest of the projects you have to maintain.
I hope that in the future more and more companies will package their stacks as Docker images, so that the on-boarding process for newcomers is reduced to a single docker run or docker-compose up command.
Similarly, I hope that more and more open source projects will be packaged as Docker images so instead of a long series of install instructions in the README, you just use docker run, and have the code working in minutes.
Below you can find the full versions of the Dockerfile and docker-compose.yml.
FROM ruby:2.3.3

RUN apt-get update -qq && \
    apt-get install -y build-essential nodejs imagemagick openjdk-7-jre unzip

RUN useradd -u 1000 --create-home --home-dir /home/app_user --shell /bin/bash app_user

RUN mkdir /var/www
WORKDIR /var/www

COPY Gemfile* /var/www
RUN bundle install

COPY . /var/www
version: '2'
services:
  app:
    user: app_user
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=""
      - POSTGRES_DB=my-db
    build: .
    command: bundle exec rails s -p 3010 -b 0.0.0.0
    image: evermore/my-web-app
    container_name: my-web-app
    volumes:
      - .:/var/www
    ports:
      - "3010:3010"
    depends_on:
      - db
  db:
    image: postgres:9.6
    container_name: my-webapp-db
    volumes:
      - ../dumps:/var/dumps
    ports:
      - "5432:5432"