How Long Should Unit Tests Take to Run?

by Hit Subscribe, October 3, 2018

When you first start writing unit tests, all is well: the few tests you have run in a matter of seconds. But after you've written hundreds or thousands of tests, running them can take long. Too long. But when is long too long? How long should your unit tests take to run? And how can we reduce that time without removing or skipping tests?


Why Do We Want Them to Be Fast?

This may seem obvious to most of you, but let's quickly recap why we want our tests to be fast. The idea behind unit tests is to write a test for a small piece of code and get the test to pass. You then move on to the next piece of code you want to write, with matching tests, of course. While you're coding, you want to keep running those tests to make sure you haven't broken anything.

This creates a small and continuous feedback cycle telling you if you're breaking things. This is not something you want to wait for. Waiting for that feedback will slow you down because you aren't writing code and because you will lose your concentration. You might check your social media or start surfing around randomly. Each time you lose focus, it takes some energy to get back to the task at hand.

How Fast Is Fast?

We've established that we want fast feedback and so we want our tests to run fast. But how fast? This will, in part, be a matter of personal preference. Some developers will be happy to run their extensive test suite in three minutes, while others want to go the more radical route and keep it under 60 seconds.

Michael Feathers, in Working Effectively With Legacy Code, states that a "unit test that takes 1/10th of a second to run is a slow unit test..." But even at that speed, the suite slows down as it grows: 3,000 tests at a tenth of a second each already take five minutes.

Mark Seemann takes another approach and says that your test suite "should run in 10 seconds or less."

I'd personally tolerate 60 seconds for a test suite.

But this all seems quite subjective. Is there a more objective way?

Yay for Science!

Seemann bases his ten seconds on scientific literature about attention span. But as I understand it (and I'm not a scientist), this is about how long people can focus on a task. When you're running your unit tests, you're not performing a task. You're waiting, staring at your screen. If you're lucky, you have a nice spinner or progress bar to look at.

This study on tolerable waiting time for websites mentions two seconds. The article points to several other research papers and the numbers range from two to 41 seconds. Some of these studies aren't about websites. For example, there is one from 1984 that is about computers in general. Another, from 1968, isn't even about computers. Both point to two seconds. Other studies will allow for 15 seconds if there is an indication of progress (like a progress bar).

A funny quote from Nielsen is that users have been "trained to endure so much suffering that it may be acceptable to increase the limit value to 15 seconds." However, keep in mind that this is a study from 1995-1996. I wouldn't be surprised if modern software developers expect things to be faster.

So let's take two seconds as an acceptable time to run your tests.

Don't Be Scared

When I started writing this article, I didn't expect to find such a low number. Granted, the research is geared towards websites and regular users, but it's still a bit frightening. In a .NET project of a reasonable size, compilation alone can take longer than two seconds. Then there might be some static analysis or coding style checks that are run as part of the post-compilation process. And if you have all your code under test, cramming all of this in two seconds seems unrealistic.

So maybe we should accept that the build will take longer than two seconds, but at least the tests should run in this timeframe? Even if we do that, the two-second rule will be hard to respect in many projects.

In light of this, maybe we should take the ten seconds Seemann mentioned?

If you were to ask me, I'd say you should try to get your feedback loop under 60 seconds, preferably 30. That includes compilation, analysis, and unit tests. But the lower you can go, the better.

How Can I Speed It Up?

To improve the run time of your tests, you have several options.

Test Categories

Many unit test libraries provide a way of putting tests into categories. You could easily put your slowest tests into one category and have them run less frequently. Most modern test runners will provide you with an option of seeing how long each test takes.
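As a minimal sketch of the idea, here's one way to tag slow tests so they only run on demand. This uses Python's unittest with an environment variable standing in for a real category feature (xUnit.net traits, NUnit categories, pytest markers); the test names and the RUN_SLOW flag are made up for illustration.

```python
import os
import unittest

# Treat "slow" as a category: these tests only run when RUN_SLOW=1 is set,
# so the default run stays inside the quick feedback loop.
RUN_SLOW = os.environ.get("RUN_SLOW") == "1"

class FastTests(unittest.TestCase):
    def test_price_calculation(self):
        # Cheap, in-memory logic: always part of every run.
        self.assertEqual(round(19.99 * 2, 2), 39.98)

class SlowTests(unittest.TestCase):
    @unittest.skipUnless(RUN_SLOW, "slow test; set RUN_SLOW=1 to include")
    def test_full_report_generation(self):
        # Would touch disk or the network in a real suite.
        self.assertTrue(True)
```

With pytest you'd achieve the same with `@pytest.mark.slow` and `pytest -m "not slow"`; in xUnit.net, with a `[Trait]` attribute and a trait filter on the runner.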

Refactor

Once you have identified slow tests, take the time to refactor them. This might require some refactoring of the production code too, but that should be an argument for, not against.
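One common refactoring of this kind, sketched below with hypothetical names: a function that depends on something slow (the real clock here, but the same applies to sleeps, disk, or network) gets that collaborator as a parameter, so the test can pass a fixed value and finish instantly.

```python
from datetime import datetime

# Before: hard to test quickly, because the function reads the real clock.
def is_expired_v1(expiry):
    return datetime.now() > expiry

# After: the clock is injectable. Production callers omit `now`;
# tests pass a fixed timestamp and run in microseconds.
def is_expired(expiry, now=None):
    now = now or datetime.now()
    return now > expiry

# The tests no longer depend on real time passing:
assert is_expired(datetime(2020, 1, 1), now=datetime(2021, 1, 1))
assert not is_expired(datetime(2022, 1, 1), now=datetime(2021, 1, 1))
```

The same move works in C# by injecting an `IClock`-style interface instead of calling `DateTime.Now` directly.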

Refactor Even More

Large applications that are developed with tests will have thousands of tests. Even if each test run is fast, the sheer number of tests will take a long time to run. Splitting this large application into microservices will improve this situation, as smaller projects will have fewer tests.

I realize this is a non-trivial effort, but I did want to mention it. It's especially true for legacy applications. At a certain moment, you need to stop adding more weight to an already heavy codebase.

Run Continuously

Tools like NCrunch will run your tests continuously, as you write them. They also have some smart ways of finding which tests are impacted when you change code. This means you will get faster feedback, because these tests are run first. And if that still isn't fast enough for you, you can even offload your tests to multiple other computers.

Parallelize

Consider parallelizing your tests. A modern test library and runner will provide support for running multiple tests at the same time. For example, xUnit.net enables it by default. Another interesting idea I've heard about is spinning up Docker containers to run your tests in parallel.
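To illustrate why this pays off, here's a toy sketch in Python; the real mechanics live in your test runner (xUnit.net's parallel test collections, or pytest-xdist), and `fake_test` just stands in for real test work. Eight 0.2-second tests take about 1.6 seconds serially, but roughly 0.4 seconds on four workers.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_test(name):
    time.sleep(0.2)  # stands in for the work a real test does
    return (name, "pass")

tests = [f"test_{i}" for i in range(8)]

start = time.perf_counter()
# Four workers run four fake tests at a time instead of one after another.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fake_test, tests))
elapsed = time.perf_counter() - start

print(f"{len(results)} tests in {elapsed:.2f}s")
```

The caveat is that parallel tests must not share mutable state (databases, files, statics); tests that do need isolating first.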

Improve Your Feedback Loop

All these ideas might not get your tests to run in under two seconds, but you should be able to reduce the time significantly. The main point is to make your feedback loop smaller so that you can develop faster. Unit tests are a first step towards a smaller feedback cycle, because you don't have to spin up the entire application to check your feature. The next step is to make your tests run as fast as you can. This is especially valuable in larger projects with lots of tests.

Slow test runs will increasingly frustrate you, make you lose your focus, demotivate you, and slow down the pace of your development. Don't just accept test runs that take multiple minutes.

This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.
