If one session demonstrated the intense interest in Docker and containers at last month’s OSCON in Portland, it was Bridget Kromhout’s Docker in Production: Reality, Not Hype. The session was standing room only, and latecomers had to be turned away at the door. It was so popular that the OSCON organizers asked Bridget to repeat it the next day.
No wonder. Docker is hot, hot, hot, and Bridget brings deep experience with DevOps and containerization: She is an operations engineer, blogger, presenter, co-organizer of Minneapolis DevOps Days, and co-host of the Arrested DevOps podcast.
Bridget’s company, DramaFever, has been running Docker in production since October 2013, back when the Docker website clearly warned: “Don’t use this in production!” DramaFever streams video; it began with Korean soap operas and now offers additional video services, including docclub.com and shudder.com. It has 15,000 episodes from 70 content providers and reaches 20 million viewers. At peak load, the company handles tens of thousands of requests per second from a variety of endpoints; viewers often switch devices in the middle of a program.
To make sure its infrastructure could support all that and still provide a good user experience, DramaFever broke up its single, monolithic Python app into microservices. The team runs its services on Amazon Web Services, using Python for the main DramaFever website and Go for the microservices. DramaFever relies on Docker to provide a consistent development environment and repeatable deployments.
Along the way, Bridget and her team learned a number of lessons about using Docker in production. Here are five key ones that she shared in her OSCON session:
1. Beware registry overload
Docker didn’t offer a private registry when DramaFever started its container adventure, and the team wouldn’t have been comfortable relying on Docker’s hosted registry as a not-controlled-by-us single point of failure anyway. So DramaFever relied on a single registry server on its Jenkins instance. But when more than 20 or so instances came up at once, the registry would fall over. Now, DramaFever runs a private registry container backed by AWS S3 storage on every Amazon Elastic Compute Cloud (EC2) instance (and laptop!) that’s going to use Docker. This solution doesn’t require many resources and sidesteps the scaling problem.
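A per-host registry along these lines can be sketched with the registry image’s S3 storage driver. The bucket and region names below are hypothetical placeholders, and DramaFever’s actual wrapper scripts (which predate the v2 registry) would differ:

```shell
#!/bin/sh
# Hypothetical bucket and region; substitute your own.
REGISTRY_BUCKET="example-docker-registry"
REGISTRY_REGION="us-east-1"

# Start a local registry container backed by S3, so every host
# (EC2 instance or laptop) pushes and pulls through localhost:5000
# while the actual image data lives in one shared S3 bucket.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker run -d --name registry -p 5000:5000 \
    -e REGISTRY_STORAGE=s3 \
    -e REGISTRY_STORAGE_S3_BUCKET="$REGISTRY_BUCKET" \
    -e REGISTRY_STORAGE_S3_REGION="$REGISTRY_REGION" \
    registry:2
else
  echo "docker daemon not available; skipping registry start"
fi
```

Because every registry container reads and writes the same bucket, the hosts stay in sync without any one registry becoming a single point of failure.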
2. Build your base
To keep its base Docker images up to date, DramaFever’s ops team runs weekly “base builds.” These builds bake in infrequently changing dependencies, such as Ubuntu packages and Python requirements files. Other builds start from these base images and therefore run faster.
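A weekly base build can be sketched as a short CI job. The image name, tag scheme, and Dockerfile path here are assumptions for illustration, not DramaFever’s actual setup:

```shell
#!/bin/sh
# Hypothetical image name; a date-stamped tag makes weekly builds traceable.
BASE_IMAGE="example/base"
TAG="$(date +%Y-%m-%d)"

if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  # Bake slow-changing dependencies (OS packages, pip requirements)
  # into the base image.
  docker build -t "$BASE_IMAGE:$TAG" -f Dockerfile.base .
  # Downstream Dockerfiles start FROM example/base:latest, so their
  # builds only add the fast-changing application layers.
  docker tag "$BASE_IMAGE:$TAG" "$BASE_IMAGE:latest"
  docker push "$BASE_IMAGE:latest"
else
  echo "docker daemon not available; skipping base build"
fi
```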
3. Prevent drunk pushing
No human has the credentials to issue a “docker push” to production; these commands come only from the Jenkins server. This rule prevents anyone from making an ill-advised push at an hour when no one else is around.
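One way to back up a CI-only push policy is a wrapper that refuses to push unless it detects Jenkins, which sets BUILD_NUMBER in every job’s environment. This is a sketch, not DramaFever’s actual tooling:

```shell
# Refuse "docker push" unless running under Jenkins.
# Jenkins exports BUILD_NUMBER for every job; a laptop at 2 a.m. does not.
safe_push() {
  if [ -z "${BUILD_NUMBER:-}" ]; then
    echo "refusing to push $1: not running under Jenkins" >&2
    return 1
  fi
  docker push "$1"
}
```

In practice the stronger guarantee comes from not distributing production registry credentials to humans at all; a wrapper like this is just a belt-and-suspenders check.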
4. Clean up!
Containers and images can consume a lot of disk space. If the root of your Docker source repository fills up, “very, very bad things can happen, including bizarre disk corruption,” Bridget explained. She advised running a daily script to remove stopped containers and images tagged “&lt;none&gt;”.
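A daily cleanup along the lines Bridget described might look like the following; note that it removes all exited containers, not just old ones, so adjust the filters to taste:

```shell
# Daily cleanup: reclaim disk from stopped containers and dangling images.
docker_cleanup() {
  # Remove all exited containers.
  docker ps -aq -f status=exited | xargs -r docker rm
  # Remove untagged ("<none>") images left behind by rebuilds.
  docker images -q -f dangling=true | xargs -r docker rmi
}

# Only run when a docker daemon is actually reachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker_cleanup
fi
```

The `-r` flag to xargs (GNU) prevents running `docker rm` with no arguments when there is nothing to clean; on newer Docker versions, `docker system prune` covers much of the same ground.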
5. Watch your time
S3 really cares about your system clock. Unfortunately, boot2docker (an application that lets you run Docker on Windows and Mac OS X) pauses its virtual machine when your laptop goes to sleep, which skews the clock inside the VM. Any subsequent requests to the AWS API then fail with a RequestTimeTooSkewed error. To deal with this problem, all the DramaFever utilities sourced by all the wrapper scripts include this line of code:
boot2docker ssh sudo date --set "$(env TZ=UTC date '+%F %H:%M:%S')"
This is a known problem, and boot2docker is working on a fix.
The most important takeaway, though, is that while Docker may be great, it’s not magic. Containers offer some really cool and compelling advantages, but like any new technology, Docker requires due diligence to make sure it will work and meet your needs in your specific environment.