In 2020, Docker is the best medium for distributing and running most developer-facing software. It’s widely accepted that Docker is great for building and deploying the artifacts of your enterprise web app, but it’s less widely appreciated that the same goes for developer tools. Running tools in containers has many benefits over installing and running them the conventional way, and we should all be doing it more.
I install very few things on either my personal or work computer. I don’t have tools like
pip installed on the host, but I use them all the time. I have a Docker image for each, and I run them in containers with minimal privileges. I’m definitely not the only one doing this, but it’s not as popular as it should be. None of these tools actually needs full access to my computer to do its work, but that is normally how they’re run.
Here are some benefits of running these tools in Docker.
At this point in time, Docker is ubiquitous and you get cross-platform support for free, thanks to Docker Inc’s investments in that area (Docker for Mac & Windows). This is useful both for people developing/distributing tools, and for people working on a team that needs to share tooling. You have one Docker image and it will run pretty much everywhere. OS package managers can be great, but they’re very much not cross-platform. Things like
pip install will sometimes work cross-platform, but have other serious drawbacks.
While every platform has its own sandboxing mechanisms, running with Docker lets you specify runtime context and enforce a sandbox in a cross-platform way, which is useful when you expect anybody else to run the same command as you.
With `docker run`, you have to be explicit about privileges. A container is mostly sandboxed and unprivileged by default. It doesn’t have access to ambient environment variables. It doesn’t have access to the host system’s disk. A tool like
jq just needs to read stdin and print to stdout. It doesn’t need access to my shell’s environment variables (or, if it does, I explicitly pass those through to the container).
yarn should be fine operating on just the working directory, and maybe a cache directory. I don’t want it to have access to my ~/.aws directory (for obvious reasons).
Some tools do need access to things. I want my
aws CLI to be able to read ~/.aws, so I grant that explicitly. This makes running the tool more verbose but less magical.
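The difference in granted context is visible right in the commands. As a sketch, using the same images as my aliases later in this post:

```shell
# jq gets stdin and stdout, and nothing else: no env vars, no host disk
echo '{"user":{"arn":"arn:aws:iam::123456789012:user/me"}}' \
  | docker run -i --rm jess/jq jq -r .user.arn

# the aws CLI gets exactly the context it needs, granted explicitly
docker run --rm -v ~/.aws:/.aws -e AWS_PROFILE \
  mikesir87/aws-cli:1.18.11 aws sts get-caller-identity
```

Everything the tool can touch is spelled out on the command line; anything not mentioned is off limits.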
Running a program in a container is a lot like running it normally, but the user doesn’t need to jump through hoops to configure the system, build and install. The developer of the image jumps through those hoops and produces a runnable artifact with a simple interface. That interface is the same whether the tool was written in Python or Rust or C or anything else.
Downloading a pre-compiled binary is almost like this, except with worse odds. Maybe there’s a build for your architecture. If it was statically linked, you’re golden. Otherwise, use
ldd to reverse engineer which shared libraries you still need to install.
A Docker image “just works”. It comes bundled with what it needs to run.
If you think about it, it’s pretty strange to execute
pip install awscli. It’s immaterial to an end user that the tool is written in Python, and requiring them to set up and use Python tooling doesn’t make sense. I don’t mean to pick on
awscli in particular, but this is a poor mechanism for distributing non-library software. It leaves far too much to chance. It’s a clumsy and leaky interface for tool distribution. So is
npm install. So is telling somebody to install your tool by installing golang, and then running
go build. No, thanks. If I’m hacking on the project, then by all means. But don’t foist that on end users.
When collaborating, it’s important that people run the same versions of software to get consistent results. Version pinning is essential to that. Pinning dependency manifests is good, but it’s not enough: it only covers the one situation of installing things with a language package manager. It may not cover using the same linter version, or the same version of
terraform, or any libraries installed at the OS level. Invoking
docker run node:13.10.1, instead of whatever the user happens to have installed as
node, solves this problem in general. Having the ability to specify the versions at the point of use, rather than out-of-band as part of some other installation process, is also convenient and tidy.
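For instance, two interpreter versions can be pinned at the point of use, with neither installed on the host:

```shell
# each invocation names its version explicitly; the host's `node` is irrelevant
docker run --rm node:13.10.1 node --version   # v13.10.1
docker run --rm node:12.16.1 node --version
```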
It’s easy to run different versions of a tool side by side with Docker. Docker solves this more generally than things like virtualenv for Python, rvm for Ruby, etc. You specify what version of the tool to use when you’re invoking it, and it pins a whole lot more context than just the tool’s version, which is always preferable for reproducibility.
In one recent situation at work, we had a test case start failing when we upgraded our runtime from Python 3.6.5 to Python 3.6.8. Having the ability to easily run the tests with any version of Python made it easy to bisect and identify a change in 3.6.7 as the cause. This could have been debugged without Docker, but it was particularly natural and easy with Docker.
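A sketch of that kind of bisection (the pytest invocation is illustrative; substitute your own test command):

```shell
# run the same suite under successive interpreter versions to find the breaking one
for version in 3.6.5 3.6.6 3.6.7 3.6.8; do
  echo "=== Python $version ==="
  docker run --rm -v "$(pwd)":"$(pwd)" -w "$(pwd)" "python:$version" \
    python -m pytest tests/ || echo "failed on $version"
done
```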
Invoking a tool with
docker run should specify everything needed to reproducibly run it somewhere else. It’s running some specific version of the tool? Okay. It needs my AWS credentials? Okay. It needs some specific combination of environment variables set? Okay.
I cringe when I see a Makefile or build instructions saying to run
go. What version? What’s being assumed about my environment? Maybe this worked on your unique snowflake of a machine 18 months ago, but good luck with it now. (My laptop is a unique snowflake too. Everyone’s is, until we all figure out how to use NixOS.)
Running tools in Docker, there are few expectations of the runtime environment beyond having Docker installed. All the other requirements should be made explicit in the
docker run command. The command that you’re running locally will work the same on your colleague’s machine, and in any CI with minimal configuration (or none). This is absolutely critical, especially when working on a team. This is a far more robust approach than expecting (requiring) anybody’s system, or a CI slave, to be set up “just so”.
I have very few things installed on my host system beyond the base OS. There’s less to remember when setting up a new machine, fewer things to go wrong during upgrades, and fewer opportunities for conflicts over shared libraries.
I have bash aliases for a bunch of tools that I run all the time. These are just for my own convenience. For anything shared with other people, I’d use a project’s Makefile (see below).
```shell
alias aws='docker run --rm -v ~/.aws:/.aws -v "$(pwd)":"$(pwd)" -w "$(pwd)" -u 1000:1000 -e AWS_PROFILE mikesir87/aws-cli:1.18.11 aws'
alias jq='docker run -i --rm jess/jq jq'
alias terraform='docker run -it --rm -v ~/.aws:/.aws -v "$(pwd)":"$(pwd)" -w "$(pwd)" -u 1000:1000 hashicorp/terraform:0.12.23'
```
With these aliases, I can run `AWS_PROFILE=... aws sts get-caller-identity | jq -r .Arn` as if the tools were "really" installed.
Here’s zoom (video conferencing):
```shell
alias zoom='xhost +local:docker \
  && docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY \
    --device /dev/video0 --device /dev/snd:/dev/snd --device /dev/dri -v /dev/shm:/dev/shm \
    -v ~/.config/zoom/.zoom:/root/.zoom -v ~/.config/zoom/.config/zoomus.conf:/root/.config/zoomus.conf \
    jess/zoom-us'
```
Notice that [port 19421](https://medium.com/bugbountywriteup/zoom-zero-day-4-million-webcams-maybe-an-rce-just-get-them-to-visit-your-website-ac75c83f4ef5) remains stubbornly closed unless we explicitly let the container claim it on the host.
I do this with other stuff, too. Here’s Snes9x (can you imagine installing it?):
```shell
alias snes9x='docker run -it --rm -u 1000:1000 -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY \
  -v /run/dbus:/run/dbus -v /dev/shm:/dev/shm \
  --device /dev/snd --device /dev/dri --device /dev/input/js0 \
  -e PULSE_SERVER=unix:$XDG_RUNTIME_DIR/pulse/native -v $XDG_RUNTIME_DIR/pulse/native:$XDG_RUNTIME_DIR/pulse/native \
  --group-add $(getent group audio | cut -d: -f3) \
  -v ~/.config/snes9x:/.snes9x/ -v ~/Games/SNES:/SNES -v ~/.local/share:/.local/share \
  danniel/snes9x'
```
For things that are project-specific, or in a team setting, all useful commands should be codified in something like a Makefile. This wraps the complexity and verbosity of the
docker run incantations, makes it possible to share them easily, and makes them passably ergonomic.
When I’m writing an article for this site, I run
make hugo-watch and load http://localhost:1313 in a web browser:
```make
hugo = docker run --rm -u $$(id -u):$$(id -g) -v "$$(pwd)":/src -v "$$(pwd)"/output:/target $(2) klakegg/hugo:0.54.0-ext-alpine $(1)

hugo-watch:
	mkdir -p output
	$(call hugo, server, -it -p 1313:1313)
```
```make
prettier = docker run -i --rm -v "$$(pwd)":"$$(pwd)" -w "$$(pwd)" elnebuloso/prettier:1.19.1 $(1) "src/**/*.js"

format:
	$(call prettier)

format-check:
	$(call prettier, --check)
```
We would run `make format` to format the code and `make format-check` to check the style. It runs on my Linux box, it runs on my colleague's Mac, and it runs in any Docker-equipped CI. None of those machines need to have `node`, `npm`, or `prettier` installed. We completely trivialize the issues of versioning and of synchronizing our environments: the version is specified once, here in the Makefile, and it's obeyed everywhere. In a language like Python, where libraries are forced to fight to the death for control of transitive dependency versions, lifting a tool like `black` or `flake8` out of the project's requirements.txt, and into a self-contained Docker image, can be a big simplification.
```make
run_container = docker run -i --rm -u $$(id -u):$$(id -g) -v "$$(pwd)":"$$(pwd)" -w "$$(pwd)" $(3) $(1) $(2)
go = $(call run_container, golang:1.14.0-buster, $(1), -e GOCACHE=/tmp/.cache -v "$$(pwd)"/build/go:/go)

format:
	$(call go, gofmt)

test:
	$(call go, go test)

compile:
	$(call go, go build -o build/out)
```
I don't work much with Go, but these stubs give an idea of how it can work.
I like to keep Python projects scoped to their own directories, and I accomplish that by setting
PYTHONUSERBASE and running
pip install --user. It can look something like this:
```make
run_container = docker run -i --rm -u $$(id -u):$$(id -g) -v "$$(pwd)":"$$(pwd)" -w "$$(pwd)" $(3) $(1) $(2)
python = $(call run_container, python:3.8.2-alpine3.11, $(1), -e PYTHONUSERBASE="$$(pwd)"/vendor $(2))

dependencies:
	$(call python, pip install --user -r requirements.txt, -e XDG_CACHE_HOME=$(user_cache_dir) -v "$(user_cache_dir)":"$(user_cache_dir)")

repl:
	$(call python, python)

run:
	$(call python, python -m app.main)
```
where `user_cache_dir` is set elsewhere to `~/.cache` on Linux or `~/Library/Caches` on a Mac.
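For reference, here is one way that variable could be set (a sketch, assuming GNU Make):

```make
# pick the platform-appropriate cache directory on the host
ifeq ($(shell uname -s),Darwin)
user_cache_dir = $(HOME)/Library/Caches
else
user_cache_dir = $(HOME)/.cache
endif
```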
Above, I’ve tried to reference public images maintained by other people to make the examples easier. In sensitive use cases, it’s a good idea to keep your own set of images that you trust, whether they’re namespaced under you on Docker Hub, or using a registry that you pay for or run yourself.
When I volume in the working directory, I usually reuse the directory structure (
$(pwd):$(pwd)), just because it seems like the “natural” choice (in the mathematical sense). Many people volume their working directory to something like
/app, and that’s also fine. Remember to quote your directory paths, in case there are spaces in them. This comes up sometimes, for example, in Jenkins jobs with spaces in their names. In the spirit of “this should run everywhere”, it’s good practice to always quote.
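The failure mode is easy to demonstrate. With the quotes, a path containing a space survives intact (alpine:3.11 here is just a small image to echo the working directory):

```shell
mkdir -p "/tmp/my project" && cd "/tmp/my project"
# quoted: one -v argument and one -w argument, even with the space in the path
docker run --rm -v "$(pwd)":"$(pwd)" -w "$(pwd)" alpine:3.11 pwd   # /tmp/my project
# unquoted $(pwd) would split on the space and break the docker run invocation
```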
I always try to run with a non-root user, mostly because I want things written to a bind mount to be owned by my user. I often do
-u $(id -u):$(id -g) to be more flexible, but in my own bash aliases, I throw caution to the wind and hardcode
1000. I’ve looked a little into using user namespaces, but it seemed like a bigger investment of effort than it was worth for my use cases.
On Docker for Mac, the permissions seem to not be a concern: even files written to a bind mount by root in the container end up on the host system owned by your user. I don’t know if it’s the same story in Docker for Windows (never tried).
When you supply the
-u ... option, it’s not necessary for that user to have been added in the Docker image. Most of the time, I am running images that don’t have the user. This can create some strange situations. Your
HOME is blank, so many tools will want to write to
/.config, and they can’t: no write permission. Sometimes setting
-e HOME=/tmp is enough, just giving the tool a writable location. Sometimes you’ll want to mount a host directory in that spot so the cache/config/whatever gets persisted across container runs. Sometimes it’s even worthwhile to put a little
/etc/passwd file in the image or container, defining the user and giving it a
HOME (I think the only time I’ve needed to do this was to placate git/OpenSSH in a Terraform image).
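A sketch of the situation and the usual fix (alpine:3.11 stands in for any image with no matching user entry):

```shell
# no /etc/passwd entry for this uid, so HOME resolves to nothing useful
docker run --rm -u "$(id -u):$(id -g)" alpine:3.11 sh -c 'echo "HOME is: $HOME"'

# giving the tool any writable HOME is often all it takes
docker run --rm -u "$(id -u):$(id -g)" -e HOME=/tmp alpine:3.11 \
  sh -c 'mkdir -p "$HOME/.config" && echo ok'   # ok
```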
If you specify something like
-e AWS_PROFILE, with no value, it will pass through the value from the host environment, if there is one. This is useful for showing that the environment variable is accepted or supported or required, while leaving it up to the user to provide it.
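A quick demonstration of the pass-through behavior:

```shell
# the value comes from the host environment; the command line never contains it
AWS_PROFILE=staging docker run --rm -e AWS_PROFILE alpine:3.11 printenv AWS_PROFILE   # staging
```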
The venerable (and confusing)
-it. For a container to be able to read from stdin (e.g. piping to it), it needs to be run with the
-i argument. If you want to be able to interactively work in the container, pass
-t. Often you’ll need
-t if you want things like colored output, but
-t may error out in CI.
On a related note, if you want to be able to abort the container with
^C, you’ll need to pass
-t, though that may not be enough. If your process doesn’t know what to do with signals (e.g. SIGINT from
^C), that won’t work. In that case, additionally passing
--init, so the process doesn’t run as pid 1, may help.
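The three flags in a sketch (the second and third commands are interactive, so run them in a terminal):

```shell
# -i connects stdin, so piping works
echo hello | docker run -i --rm alpine:3.11 cat   # hello

# -t allocates a TTY for interactive use (and errors out when stdin isn't a terminal)
docker run -it --rm alpine:3.11 sh

# --init runs a tiny init as PID 1 that forwards signals, so ^C can stop the process
docker run -it --rm --init alpine:3.11 sleep 600
```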
Prefer small containers over big containers. I’ve seen a pattern of building an “everything-but-the-kitchen-sink” (“dev environment”) Docker image for a project. You end up with a big image, with lots of tools packed into it (and possibly app code – don’t do that for development, use a bind mount volume instead). Interaction with tools outside the image (e.g. a text editor) may be difficult. Then you live inside the container, running a shell, etc. This has some of the same benefits described in this article, but it’s not the same use pattern. It goes against the grain of the UNIX philosophy. If I want to introduce a load testing tool written in golang, I have to rebuild the image with all of the golang baggage added in. The big container is similar to running a VM. It’s certainly better than installing everything directly on the host system, but containers can be used more effectively.
Running individual tools in containers is more in concert with the UNIX philosophy: single purpose containers running single purpose tools, doing one thing well. As a result, it’s more flexible, composable, and powerful.
There is a slight startup delay running any command in Docker. This adds up if you’re running a lot of little commands. Every time you invoke
docker run you get hit with about a 1s startup penalty from creating the namespaces. Try it yourself:
time docker run --rm hello-world. Notice, for example, that running with
--net host shaves off a few hundred milliseconds. Using host networking is usually fine for running dev tools, but do you want to litter every
docker run with this and other potentially obscure options in the interest of making the command slightly faster? I don’t. It would be nice to have a simple way to say “I only want to bother with the mnt namespace”, and get some of that time back. As far as I know, there isn’t.
On a Mac, there is a major performance hit whenever you do disk IO in a bind mount (i.e. voluming a directory of the host system into the container). Working without bind mounts is extremely limiting. In my experience, the
`delegated`/`cached` mount options do not improve performance in any significant way (still worth turning on if you or your colleagues are using Macs). I don’t know whether the fault lies in macOS or Docker for Mac, but this can really make working with Docker unpleasant. If you’re using Docker on a Mac and you’ve never tried it on Linux, you owe it to yourself to try it on Linux.
The commands are very verbose. Wrap them in aliases or Makefiles or similar. It’s better to have the ability to look under the hood, and see exactly what context is being given to the tool, than to implicitly leak all context without the ability to inspect or restrict it.
It feels alien to run
make yarn args=... instead of
yarn .... It’s uglier and more awkward. However, it’s not much different for a project that’s already using a Makefile (or similar) to organize its maintenance commands.
Keeping a lot of Docker images around takes up a lot of disk space; I had to train myself to stop allocating small root partitions. It’s not ideal, but it’s also pretty benign: disk space is extremely cheap, and this is a tradeoff I’m happy to make. Corollary: downloading a lot of images requires a lot of network traffic.
Images become stale/unpatched. This is a valid concern, but in the use case I’m describing, I don’t consider it critical. Obviously: don’t take a stale image and serve web traffic with it. But I’m not likely to come to any harm if my
flake8 Docker image is running a version of Alpine or Debian that’s behind on security patches. With that said: all else being equal, it’s best to keep images patched and up to date.
Images are essentially black boxes. Downloading anything from Docker Hub can be dangerous, but it’s not inherently more dangerous than
curl | bash, downloading a binary from a GitHub release page, or even building from source any project that you’re not intimately familiar with. Do be aware that Docker tags are mutable, so if you’re not controlling the image, you should probably pin based on content hash. You should be comfortable building your own images, though. I don’t mind basing images off of official OS images, but depending on your
appetite for risk, you may prefer to build from scratch.
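When pinning by content hash, the digest behind a tag can be resolved and then used in place of the tag. A sketch:

```shell
# resolve the immutable digest behind a mutable tag
docker pull alpine:3.11
digest=$(docker inspect --format '{{index .RepoDigests 0}}' alpine:3.11)
echo "$digest"   # e.g. alpine@sha256:...

# then reference the image by digest instead of by tag;
# this reference can never silently change underneath you
docker run --rm "$digest" echo ok   # ok
```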
Running GUI-based software is hard, but possible.
Docker is an amazing tool. From what I’ve seen, this is still an underappreciated use case for it. My only real reservation with this approach is that the Docker for Mac file system performance is so bad that “cross-platform” is true in principle, but not as great as it should be in practice.
I’m always looking out for something simpler or lighter-weight than Docker, but bringing similar benefits, especially the sandboxing and cross-platform aspects. Maybe Nix? If you have advice, please let me know.