If you are using containers, there is a good chance you are already creating a container build environment to build your images. Wouldn’t it be awesome to leverage the same environment to test applications as well?

It turns out this is not only possible, but also easy and elegant, since the same environment builds and tests the application.

For that, we leverage Docker multi-stage builds. The idea is to use two stages¹:

  1. A first stage, based on a base image that contains the SDK and tools you need to build and test the application: libraries, compilers, test frameworks, reporting tools and such.
  2. A second stage: we extract the build artifacts from the first image and assemble a new image from another base image, one that typically contains just runtime libraries or is a barebones Linux distro (e.g. Alpine Linux). The resulting image is more lightweight and contains only the artifacts you actually want to distribute.

Multi-stage Dockerfile

A multi-stage build starts with a multi-stage Dockerfile. I have put a sample solution on GitHub, which builds a TypeScript solution in a Node container. Such a file looks like this:

# First stage: build and test
# Define the base image
FROM node:10-alpine as nodebuild
# Define where we put the files
WORKDIR /app
# Copy all files from the local host folder into the image
COPY . .
# Install dependencies, build the solution, run the tests,
# then report on coverage
RUN npm install && \
    npm run build && \
    npm run test && \
    npm run coverage

# Second stage: assemble the runtime image
# Define the base image
FROM node:10-alpine as noderun
# Define the work directory
WORKDIR /app
# Copy the binaries produced by the build stage
COPY --from=nodebuild /app/dist/src/ ./
# Copy the dependency manifest
COPY package*.json ./
# Install only production dependencies
RUN npm install --only=prod
EXPOSE 8000
# Define how to start the app
ENTRYPOINT node /app/index.js

The first stage copies everything from the current host folder into the build container, then builds the TypeScript solution using the npm commands. Finally, it runs the unit tests and reports on test coverage. The build fails at this stage if the tests fail or if coverage doesn’t meet the configured criteria: each npm script exits with a non-zero code on failure, which fails the RUN instruction and therefore the whole build.

The second stage copies the result of the build from the first stage (located in /app/dist/src) into a “root” folder (which we call /app). It then restores only the packages required for production, rather than everything including test harnesses and the like, and finally defines how to start the application when the container starts (the ENTRYPOINT instruction).

We trigger the full build with the following command:

docker build . -t demo

This yields the following logs:

Sending build context to Docker daemon    279kB
# [...removed for brevity]
Successfully built 867ce6fd7c5f
Successfully tagged demo:latest

The image is built, and we can do whatever we want with it.
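
For instance, we can check the size of the runtime image and start it locally. This is a quick sketch; the port mapping assumes the application listens on the port exposed in the Dockerfile (8000):

# Check the size of the runtime image; it should be noticeably smaller
# than the build stage, since it contains only production artifacts
docker images demo
# Start a container from it, mapping the exposed port to the host
docker run --rm -p 8000:8000 demo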

Leveraging multi-stage in continuous builds

Building and testing is nice, but we might also want to retrieve the test results and coverage reports, to publish and retain them. Docker allows you to target a specific stage of the build, a feature we can use for this purpose.

To do so, run the docker build command with the --target flag. In the example I built, the target is called nodebuild, so the command is:

docker build --target nodebuild . -t nodetest

Running that command yields the following result:

Sending build context to Docker daemon    279kB
Step 1/4 : FROM node:10-alpine as nodebuild
 ---> b95baba1cfdb
Step 2/4 : WORKDIR /app
 ---> Using cache
 ---> 694002e35d87
Step 3/4 : COPY . .
 ---> Using cache
 ---> b25bc03c81b1
Step 4/4 : RUN npm install &&     npm run build &&     npm run test &&     npm run coverage
 ---> Using cache
 ---> 5cdaa41da605
Successfully built 5cdaa41da605
Successfully tagged nodetest:latest

Docker stops the build process after this stage. We now need to retrieve our test results. To do so, we will create a new container from the image², copy the files out, then delete the container:

# Create the container from the image we just built and retrieve
# its id
id=$(docker create nodetest)
# npm run test and coverage spit their results in a folder called
# results, so we copy these files from container to local host
docker cp $id:/app/results ./results
# Delete the container
docker rm $id

From there, we can resume the full build by removing the --target flag and running the same docker build . -t demo command as earlier. This yields the following interesting result:

Step 1/11 : FROM node:10-alpine as nodebuild
 ---> b95baba1cfdb
Step 2/11 : WORKDIR /app
 ---> Using cache
 ---> 694002e35d87
Step 3/11 : COPY . .
 ---> Using cache
 ---> b25bc03c81b1
Step 4/11 : RUN npm install &&     npm run build &&     npm run test &&     npm run coverage
 ---> Using cache
 ---> 5cdaa41da605
Step 5/11 : FROM node:10-alpine as noderun
 ---> b95baba1cfdb
# [...] truncated for brevity

Note that the first 4 steps, which we just ran to assemble the “test” image, all return Using cache.

This is because Docker builds images in layers. Each step of the Dockerfile creates a new layer on top of the one just below: the base image is the first layer, the WORKDIR is the next one, then the COPY, and so on. All these layers are stored in the local build cache (unless pushed to a central registry).

Since the source code and the Dockerfile didn’t change between the runs, Docker does not actually rebuild the first layers, but reuses the ones we built before → this means our little endeavor does not even require building the image several times!
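
You can inspect those layers yourself with docker history (the IDs and sizes will differ on your machine):

# List the layers of the runtime image; each Dockerfile step maps to a layer
docker history demo
# The test image is the cached build stage itself, which is why
# the full build reports "Using cache" for those steps
docker history nodetest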

Run all that in Azure DevOps (or your favorite CD tool)

The build process is exactly the same as described above (a shell sketch of the whole sequence follows the list):

  1. Build the test image
  2. Run it (you don’t need to delete the container, the agent will see to that)
  3. Copy test results from the test container
  4. Build the actual run image. Docker will skip the steps already built and use the cached layers.
  5. Push to the registry.
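
In shell form, the sequence those pipeline tasks execute boils down to something like this (a sketch only: the registry name myregistry.azurecr.io is a placeholder, and the results path matches the example above):

# 1. Build the test image, stopping at the build-and-test stage
docker build --target nodebuild . -t nodetest
# 2. Create a container from it
id=$(docker create nodetest)
# 3. Copy the test and coverage results out of the container
docker cp $id:/app/results ./results
# 4. Build the run image; the first stage comes straight from the cache
docker build . -t myregistry.azurecr.io/demo:latest
# 5. Push the image to the registry
docker push myregistry.azurecr.io/demo:latest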

This is the overview:

[Image: Overview of build steps]

The build configuration uses the Docker task and adds the --target flag:

[Image: Build configuration for the test image]

The run command is done through the Docker task and starts a container from the image we just built:

[Image: Run command]

The copy command is done through the Docker task again and copies the test results from the container we just started:

[Image: Copy command]

The rest is basic Azure DevOps.

Note that at no point do you need to know what’s inside the image. In fact, for all we know this could be building a dotnet app or a Java app. As long as it writes the reports to the same folder, the build is exactly the same, and that is what makes this method so powerful: you can have one task group and apply it to all your repositories, repeating the same build steps independently of the technology being built, as the sketch below illustrates.
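
A minimal sketch of such a technology-agnostic step, assuming every repository’s test stage writes its reports to /app/results (the helper name extract_results is made up for illustration):

# Hypothetical helper: pull test results out of any image that follows
# the "reports go to /app/results" convention, whatever the stack inside
extract_results() {
  id=$(docker create "$1")
  docker cp "$id":/app/results ./results
  docker rm "$id"
}

# Works identically for a Node image, a dotnet image, a Java image...
extract_results nodetest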

Alternative

There is an alternative to that strategy that might be useful in your context: rather than running the tests as part of the build process, run them on the build image, outside of the Dockerfile.

The idea there is to have the build be a pure build. In this case, the Dockerfile would be tweaked accordingly:

FROM node:10-alpine as nodebuild
WORKDIR /app
COPY . .
# Remove npm run test and npm run coverage from here.
RUN npm install && \
    npm run build

FROM node:10-alpine as noderun
WORKDIR /app
COPY --from=nodebuild /app/dist/src/ ./
COPY package*.json ./
RUN npm install --only=prod
EXPOSE 8000
ENTRYPOINT node /app/index.js

Then the tests would be triggered by building only the first stage, then running the test command within a container of that image:

docker build --target nodebuild . -t nodebuild
docker run -it nodebuild /bin/sh -c 'npm run test && npm run coverage'
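
A nice side effect of this variant (a sketch, assuming the scripts still write their reports to /app/results as before): you can bind-mount the results folder from the host, which removes the need for the docker create / docker cp / docker rm sequence entirely:

# Mount the host's results folder into the container so the reports
# land directly on the build agent
docker run --rm -v "$PWD/results":/app/results nodebuild \
    /bin/sh -c 'npm run test && npm run coverage'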

This might be useful if you want your build to be a pure build, without the tests or coverage baked in. I personally prefer the first solution, since it keeps how you run the tests within the Dockerfile, and so the build steps can be the same regardless of what is in the container (dotnet, TypeScript, etc.) → it makes the continuous build uniform.

Notes

  1. There can be more than two stages: if, for example, part of the build is ASP.NET Core and part is TypeScript, you might have two build stages.

  2. Reminder: a container is an instance of an image. Put differently, a container is to an image what an object is to a class.