Docker adoption has grown tremendously. According to recent New Relic customer data, the average number of containers per company increased 192% in the past year alone. It’s clear from the data and New Relic’s multi-year experience running Docker that container technology is not a passing fad. Practical experience, however, tells us that successfully getting Docker to production takes more than learning a new toolchain: it requires developers and operations teams to reconsider how services and applications are built, integrated, and deployed.

When integrating Docker into an existing software delivery pipeline, it’s easiest to start small: setting up a procedure for automatically building and storing images in an artifact repository is a good place to begin. This post covers questions to ask when choosing a continuous integration solution and how to version Docker images, publish them to Docker Hub, and collect data about them.

Standardizing image builds with continuous integration

In an article he wrote more than 15 years ago, Martin Fowler described continuous integration (CI) as a process for reducing risk in software development by building and integrating code frequently. Many of the best practices he described, from using a version control system to automating the build on a server when changes are committed, have since been widely adopted by software development teams.

[Chart: general process for automatically building Docker images from source control.]

Docker images deployed to production should not be built on developer workstations. Docker engine version differences, configuration drift, and conflicting versioning strategies quickly cause issues even in small teams. Automating the creation of Docker images with a CI server is a cornerstone of New Relic’s process for running Docker at scale with hundreds of services.

For teams new to Docker, the first step is deciding what CI solution to use to automate image builds.

How to choose a CI solution for building images

Many modern software-as-a-service (SaaS) CI systems are built using container technology, so it’s entirely possible that the job that builds a Docker image is itself running inside a Docker container.

This pattern also applies to the popular open-source CI project Jenkins: a plugin can dynamically provision worker machines as containers, or Jenkins itself can run in a container that is able to launch additional containers on the Docker host from inside it. Confused yet?

Regardless of the CI solution, the general idea is to define a task, triggered by an external event, that builds, tests, and publishes an image. The reality of getting this to work is more complicated. Specifically:

  • Does the CI server version of the Docker engine match the production environment Docker engine version? How easy is it to upgrade or use different versions of the Docker engine in the CI server?
  • What’s the ability of the CI solution to leverage the Docker cache for faster builds? Can the cache easily be cleared for debugging purposes? How easy is it to interact with and debug the CI environment directly if things go wrong?
  • Can Docker image builds be run in parallel? Is it straightforward to increase the number of parallel jobs?
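The engine-version question above can be enforced with a guard that fails a build job early. A minimal sketch, assuming the production engine version is pinned somewhere the CI job can read it; the pinned value and function name here are illustrative:

```shell
# Pinned production engine version (illustrative value).
expected_engine="1.12.1"

# Return success only if the reported engine version matches the pin.
engine_matches() {
  [ "$1" = "$expected_engine" ]
}

# In a real CI job this would be driven by the live daemon, for example:
#   engine_matches "$(docker version --format '{{.Server.Version}}')" || exit 1
```

Failing fast here is cheaper than debugging an image that builds cleanly on the CI engine but misbehaves in production.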

The number of CI options—including ones designed specifically for Docker—is growing quickly. A fairly comprehensive and updated list of continuous integration tools is maintained on GitHub.

Building and publishing Docker images

Building a Docker image inside of a CI job is similar to building it locally, with a few important differences. First, the process should be initiated from a commit to a specific branch of a source control repository (often master if using a GitHub-inspired branching strategy). That event kicks off a process that begins building the Docker image. Jenkins supports this event-driven flow using the “build when a change is pushed to GitHub” trigger.

Consistently versioning the image being created is critical to establishing the link between the Dockerfile in source control and the process that created the image. At New Relic, we use a combination of the date and time and the SHA of the source control commit (provided in this example by built-in environment variables from the CI server). Tagging is done with the -t option of the docker build command, run in the root directory of a project containing a Dockerfile, or with the docker tag command:

VERSION=$(date +%Y%m%d%H%M%S).git.$GIT_REVISION
docker build -t $IMAGE:$VERSION .
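The docker tag command mentioned above can also layer a mutable tag (such as a branch head) on top of the immutable version tag. A minimal sketch; the function name and the use of latest are illustrative, not part of the process described here:

```shell
# Give an already-built, version-tagged image an additional mutable tag.
# Tagging is cheap: it adds a name to an existing image, nothing is copied.
tag_latest() {
  image="$1"
  version="$2"
  docker tag "$image:$version" "$image:latest"
}
```

After a successful build, `tag_latest "$IMAGE" "$VERSION"` lets consumers follow the most recent build while the versioned tag stays immutable.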

If the Docker image cache is properly configured (how varies depending on the CI solution being used), builds execute much faster because unchanged image layers from previous builds don’t have to be rebuilt.

Next, if the docker build command succeeds (a non-zero process exit code will typically fail a CI job), the image is validated before being published to a registry. For a simple web-facing service, a health check of the running container using curl can validate that the build actually worked. The sleep command and curl retry options (helpfully suggested in a CircleCI post) give the container enough time to start.

docker run -d -p 3000:3000 $IMAGE:$VERSION; sleep 5
curl --retry-delay 3 --retry 10 -v http://localhost:3000

With the latest version of Docker, version 1.12.1, this check could potentially be implemented inside the Dockerfile using the new container health check functionality.
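If the image’s Dockerfile does define a health check, the CI job can poll the status Docker exposes instead of guessing at a sleep interval. A hedged sketch, assuming Docker 1.12+ and an image with a HEALTHCHECK; the function name and defaults are illustrative:

```shell
# Poll a container until its built-in health check reports "healthy",
# giving up after a fixed number of attempts.
wait_healthy() {
  cid="$1"
  tries="${2:-10}"
  interval="${3:-3}"
  while [ "$tries" -gt 0 ]; do
    state=$(docker inspect --format '{{.State.Health.Status}}' "$cid")
    [ "$state" = "healthy" ] && return 0
    tries=$((tries - 1))
    sleep "$interval"
  done
  return 1
}
```

For example, `wait_healthy "$(docker run -d $IMAGE:$VERSION)" || exit 1` would fail the CI job if the container never becomes healthy.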

If publishing to a public repository on Docker Hub, logging in and executing the docker push command will make the new image available to the world. Many SaaS-based CI solutions allow you to encrypt or store secure environment variables separately from source control; checking unencrypted sensitive data into source control should always be avoided.

docker push $IMAGE:$VERSION
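Putting the login and push together, a publishing step might look like the sketch below. DOCKER_USER and DOCKER_PASS are illustrative names for credentials stored as secure CI environment variables, not variables defined anywhere in this post:

```shell
# Log in to the registry with credentials from secure CI environment
# variables, then push the versioned image.
publish_image() {
  image="$1"
  version="$2"
  docker login -u "$DOCKER_USER" -p "$DOCKER_PASS" || return 1
  docker push "$image:$version"
}
```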

Beyond “Did it build?”: Tracking image bloat

Docker containers are often described as a lightweight alternative to virtual machines, but the file size of many Docker images can rival that of VM images. It’s useful to track how changes to a Dockerfile affect overall image size in order to reduce image bloat. Here’s a simple shell script that records the file size of the image built on the CI server and sends it to New Relic Insights for display in a dashboard:

echo "Getting image size..."
IMAGE_SIZE=$(docker run --entrypoint=/bin/sh $IMAGE:$VERSION -c 'du -s / 2>/dev/null | cut -f1')
echo "[{\"eventType\": \"imageSize\", \"image\": \"$IMAGE\", \"version\": \"$VERSION\", \"size\": $IMAGE_SIZE}]" > /tmp/insights.json

echo "Sending data to Insights..."
cat /tmp/insights.json | curl -d @- -X POST -H "Content-Type: application/json" -H "X-Insert-Key: $INSIGHTS_INSERT_KEY" https://insights-collector.newrelic.com/v1/accounts/$ACCOUNT_ID/events
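As an alternative to running the container and measuring it with du, Docker’s own metadata reports an image size directly. A minimal sketch (the helper names are illustrative), with a small conversion for dashboard-friendly numbers:

```shell
# Read the image size in bytes from Docker's metadata.
image_size_bytes() {
  docker inspect --format '{{.Size}}' "$1"
}

# Convert bytes to whole megabytes for readability in a dashboard.
bytes_to_mb() {
  echo $(( $1 / 1024 / 1024 ))
}
```

For example, `bytes_to_mb "$(image_size_bytes $IMAGE:$VERSION)"` yields a number suitable for the imageSize event above.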

Using an NRQL query for an example Docker image called “smithclay/gopher-dance-party-frontend”, it’s possible to see how the image size changes with different commits to source control in a custom CI Metrics dashboard:

[Dashboard: result of the NRQL query SELECT * FROM imageSize WHERE image = 'smithclay/gopher-dance-party-frontend' SINCE 7 days ago]

Collecting metrics about artifacts generated from a continuous integration job, like image size, is critical in understanding and improving the speed and reliability of the overall build pipeline.

CI as the path to Docker in production

As more development teams move away from large monoliths to smaller services with Docker, managing a growing number of images becomes a central part of the software delivery pipeline. Docker images become the primary artifact when delivering software.

Whether you’re using Docker with a bleeding-edge orchestrator or scripting deploys to a single host, having an automated process that creates images from source control, enforces versioning, and performs basic verifications is a step toward more advanced automation with Docker containers and, eventually, continuous delivery.

Thanks to New Relic Principal Cloud Architect Lee Atchison for his helpful feedback and suggestions on this post.

Clay Smith is a Developer Advocate at New Relic in San Francisco. He has previously worked as a senior software engineer at early-stage software companies, including founding the mobile engineering team at PagerDuty and shipping one of the first iOS apps written in Swift.
