At Box, we are always looking to improve our software quality and development speed. Test-driven development and effective tests are important factors leading to engineering efficiency and productivity. As part of our efforts to encourage test-driven development, we have built several tools to facilitate writing and executing tests. Today, we are open-sourcing one of those projects: Makefile.test, a generic makefile for executing test executables.

Makefile.test can be used to run any type of test executable. It is not language specific, nor does it require any changes to your code. Parallel and serial execution, various platforms, and multiple make versions are supported. The executables can be organized in any desired way. The user only lists the test files; the rest is taken care of by Makefile.test.
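As an illustration, a project's test Makefile could look like the following sketch. The `TESTS` variable, the test file names, and the include path are assumptions about the conventional setup; consult the project's documentation for the exact details.

```make
# List the test executables relative to this Makefile (names are illustrative).
TESTS ?= \
	test_parser.sh \
	client/test_errors.sh

# Pull in the generic execution rules.
include Makefile.test
```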

Makefile.test does not contain any rules for compilation or other pre-processing steps. If your test executables are not scripts but, for example, compiled binaries, you will need to extend Makefile.test with additional rules. Even then, Makefile.test can be a good starting point for those scenarios.
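One possible way to layer a build step on top is sketched below. The target name, source file, and integration with the listed tests are hypothetical; the exact approach depends on the rules Makefile.test defines.

```make
TESTS ?= test_foo

# Hypothetical compile rule; Makefile.test itself ships no such rule, so the
# binary is built here before the included rules execute it as a test.
test_foo: test_foo.c
	$(CC) -o $@ $<

include Makefile.test
```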

Makefile.test runs on a single host, and therefore its parallelization is limited by the resources of one machine. If your test suite requires multiple hosts to run, ClusterRunner may be a better tool for your use case.

The need for Makefile.test arose once we discovered that several projects with similar test suites had reinvented different ways to execute their test scripts. Some used approaches somewhat similar to Makefile.test; others adopted completely different tools. Various projects had a bunch of test scripts, but there was no easy way to facilitate test execution, until now.

After analyzing the existing use cases, the following requirements for Makefile.test emerged:

1) We cannot assume anything about the test directory structure.

Some projects put all their test scripts in one directory under the project root while others have a nested directory structure for their tests.

For example:

└── test
    ├── …
    └── …

└── test
    ├── ClientTests
    │   ├── ErrorTests
    │   │   └── …
    │   └── …
    └── …
Makefile.test is usable in both circumstances.

2) Everyone has their favorite way to invoke make.

Make can be called in various ways. For example:

cd test && make
make -C test
make -f test/Makefile

Each test suite author may invoke make differently. Makefile.test works in all 3 scenarios.

3) Makefile.test should work in various Linux distributions (old and new) and on macOS.

In order to be useful, Makefile.test must be portable. It must function across various distributions with various versions of make and bash. In addition to Linux, macOS is a popular local development environment that must be supported. We have ensured portability with extensive automated tests in the Makefile.test project itself.

4) Support for running one or all tests in parallel or one by one.

During sanity tests or pull request verification, we want to execute tests as fast as possible, so Makefile.test should support arbitrary parallelization. On the other hand, during development or debugging, the programmer may want to isolate all side effects and execute only a subset of tests, one by one. Makefile.test supports both use cases with ease. Thankfully, make's jobserver mode is very powerful and satisfies all our needs.
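The parallel and serial behavior comes from plain make semantics, which the following self-contained sketch demonstrates with a throwaway makefile (this uses generic make features, not Makefile.test itself; the target names are made up):

```shell
#!/bin/sh
# Build a disposable makefile with two fake "tests" as targets.
tmp=$(mktemp -d)
printf 'all: t1 t2\nt1 t2:\n\t@echo running $@\n' > "$tmp/Makefile"

# Run every "test" with up to 2 parallel jobs, courtesy of make's jobserver.
par_out=$(make -s -C "$tmp" -j 2 all)

# Run a single "test" in isolation, serially.
one_out=$(make -s -C "$tmp" t1)

rm -rf "$tmp"
echo "$par_out"
echo "$one_out"
```

The same pattern applies to a Makefile.test-based suite: `-j N` fans the tests out, while naming an individual target restricts the run to that test.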

5) Cleanup after killing make.

Programmers usually kill make executions by sending various signals to them. When tests fail, they may hang instead of exiting cleanly; after a timeout expires, the relevant process group can be terminated to free up computing resources. Alternatively, during interactive execution of make, the user may interrupt the program with CTRL-C at any time. We had to make sure that in these scenarios, Makefile.test does not leave orphaned processes behind. Mistakes in process cleanup may leave unused process trees attached to init, consuming compute resources unnecessarily. We ensured that Makefile.test does process cleanup correctly and added extensive automated tests for it in the Makefile.test project itself.
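The underlying idea can be sketched as follows. This is an illustration of process-group cleanup in general, not Makefile.test's actual implementation: the test is started in its own process group (here via `setsid`, assumed available as on typical Linux systems), so one signal to the group reaches the entire tree, grandchildren included.

```shell
#!/bin/sh
tmp=$(mktemp -d)

# Stand-in for a hung test that spawned a child of its own. The grandchild
# records that it received SIGTERM. setsid puts the tree in a fresh group.
MARKER="$tmp/marker" setsid sh -c '
    sh -c "trap \"touch \$MARKER; exit\" TERM; sleep 100" &
    sleep 100
' &
pid=$!   # pid of the group leader, which equals the new process group id
sleep 1

kill -TERM -- "-$pid"   # a negative id signals every member of the group
sleep 1

[ -f "$tmp/marker" ] && result="whole tree terminated" || result="orphan escaped"
echo "$result"
rm -rf "$tmp"
```

Killing only `$pid` instead of `-$pid` would leave the grandchild sleeping, reparented to init, which is exactly the orphan scenario described above.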

With Makefile.test we have satisfied all our test execution requirements, and it is being rapidly adopted at Box by both existing and new projects. If you have similar use cases, please take a look at it. Feel free to send us issues and pull requests to help extend the project's capabilities according to your needs.

Special thanks to John Huffaker and Mohit Soni for their contributions!