so

Gradle DockerContainer plugin

Volume 5, Issue 11; 29 Jun 2021

These days, I mostly use Gradle to manage builds and Docker to manage containers. So how do I manage Docker from Gradle? With this plugin.

I think this started with the XProc 3.0 Test Suite. The test suite needs to be able to interact with a web server, for example, to test the p:http-request step. I used to run a server on a public site for this purpose, but the version of Apache in that shared hosting environment changed and things broke and I decided to move the server to a local Docker container.

Win: the test suite now has complete, independent control over the server. Win: you don’t need an internet connection to run the test suite. Win: the whole thing runs a lot faster.

All that win, but with the added complexity that you have to be able to use Docker. I wanted to simplify that so I coded up a few Gradle tasks to manage the container.

And over the next few weeks, I copied that code into various other projects and improved it in various ways. Then I moved some of the repetitive bits into a plugin.

Last weekend, I published the plugin. It’s now available through the Gradle plugin repository. If you’re curious, the sources are on GitLab. (n.b. GitLab, not GitHub.)

There’s a complete example in the repository, but here are a few snippets to show how it works.

First, use the plugin:

plugins {
  id 'com.nwalsh.gradle.docker.container' version '0.0.3'
}

Then import the class:

import com.nwalsh.gradle.docker.DockerContainer

(In retrospect I’m not sure that extra .container hanging off the end of the plugin ID was the best choice, and the fact that it’s not in the package you import is an obvious inconsistency, but it’s too late now. You can’t (easily) change the plugin ID after you’ve published the plugin. Maybe I should add .container to the package name…)

At this point, you could just start using the API in your tasks, but I set up some variables to manage the container names and their IDs. There are three containers in this example (taken from my local harness for this weblog).

ext {
  c_postgres = "postgis_so"
  c_nginx = "nginx_so"
  c_nodejs = "nodejs_so"
  containers = [:]
}

The container names come from the docker-compose.yml file. One of the features of the plugin is that it lets you mostly work with the containers by name rather than (random) ID.
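A compose file for this setup might look roughly like the following sketch. The service and image names here are assumptions for illustration, not taken from the real file; the point is the container_name keys, which pin each container to a fixed name instead of a generated one:

```yaml
# docker/docker-compose.yml (hypothetical sketch)
services:
  postgres:
    image: postgis/postgis       # assumed image
    container_name: postgis_so   # fixed name the Gradle tasks refer to
  nginx:
    image: nginx
    container_name: nginx_so
  nodejs:
    image: node                  # assumed image
    container_name: nodejs_so
```

With container_name set, the plugin can reliably map those names back to the (random) IDs that Docker assigns at startup.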

I keep my Docker compose file in a docker subdirectory with its bits and bobs, so I configure all the compose commands to look there:

docker_container.configure {
  workingDir = "docker"
}

The dockerup task spins up the containers:

task dockerup() {
  doLast {
    if (!DockerContainer.allRunning([c_postgres, c_nginx, c_nodejs])) {
      DockerContainer.compose {
        command = "up"
        args = ["-d"]
      }
    }
  }

  doLast {
    containers = DockerContainer.containers()
  }
}

If any of the containers needs to be started, this task will run Docker compose to start them. After they’re started (whether the task needed to start them or not), it stores the name/ID mapping for them in the containers variable.

Now I can write tasks like this one:

task node_logs(dependsOn: ["dockerup"]) {
  doLast {
    DockerContainer.docker {
      command = "logs"
      args = [containers[c_nodejs], "-f"]
    }
  }
}

Running gradle node_logs will start the containers if necessary and then “tail” the NodeJS log without me ever having to care what the ID of that container is!

For completeness, here’s the dockerdown task that stops the containers:

task dockerdown() {
  doLast {
    if (DockerContainer.anyRunning([c_postgres, c_nginx, c_nodejs])) {
      DockerContainer.compose {
        command = "down"
      }
    }
  }

  doLast {
    containers = [:]
  }
}

You can use Gradle features to chain tasks together. In a project like the test suite, for example, you could make the test task depend on dockerup and be finalized by dockerdown so that it would start and stop the containers completely automatically.
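In Gradle syntax, that wiring might look like this. It’s a sketch that assumes the project already defines a test task alongside the dockerup and dockerdown tasks shown above:

```groovy
// Hypothetical wiring: start the containers before the tests run,
// and tear them down afterwards even if the tests fail.
test.dependsOn dockerup
test.finalizedBy dockerdown
```

Because dockerdown is a finalizer rather than an ordinary dependency, it runs whether the test task succeeds or not.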

In practice, I’m usually working in a short(ish) develop/test cycle so I don’t want to have to wait for the containers to start and stop on every run. I configure things so that every task that needs the containers depends on dockerup but I have to run dockerdown separately, by hand.