Why Sequential Automated Testing Is Inefficient

by Remco 14. September 2011 05:03


Over the last few years, test driven development (TDD) has become progressively more widely adopted and is now in many places considered to be a best practice within the field of software development.

It's been interesting to observe how the wider adoption of TDD has had an impact on codebases. Whereas we once suffered from not having enough automated tests over our code, we now seem to be encountering a new problem: having too many of them. A major side-effect of this is that the time it takes to run all tests from start to finish has been steadily increasing. While it's worth doing everything possible to keep test execution times to a minimum, we are constrained by the resources we have available (time) and the need to provide meaningful testing of all the various layers and modules of our application.

When testing software manually, we've usually had to be very selective about which areas of the application need to be tested. For example - in the case of a large and complex application, it may take several weeks of exhaustive effort to run the application through all of its use cases and ensure that no defects exist. This may not be feasible if a small change is made to the application 3 days before it is pushed into production. The logical approach is to ensure that during the 3-day window, the focus is on executing the highest-value tests that will give the most meaningful result within the shortest possible time.

Automated testing isn't much different to this. We always want to get feedback from our tests as early as possible. Perhaps it's possible to run all of your automated tests before you release to production, but what about when you drop a daily release to a test environment? Or when you need to check code into a VCS? Or when you've made a tiny speculative change? Every defect that slips further downstream will further slow down your development process. If your tests take 4 hours to run end-to-end, I'd be surprised if you consistently ran them all before checking your changes into a VCS.

The sad fact is that there are very few tools that can really help with this. The vast majority of available test runners have very little intelligence behind the way they run your tests. Most simply run the tests synchronously in a static sequence, which means that on average your vital failure feedback won't arrive until about halfway through the run.
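To put a rough number on that halfway claim, here's a small hypothetical simulation (not tied to any particular test runner): drop a single failing test somewhere into a fixed, unprioritised ordering, and on average it surfaces near the middle of the run.

```python
# Hypothetical simulation: one failing test hidden in a fixed,
# unprioritised sequence of 100 tests. Averaged over many shuffled
# orderings, the failure lands around the middle of the run.
import random

n_tests = 100
trials = 10_000
positions = []

for _ in range(trials):
    order = list(range(n_tests))
    random.shuffle(order)
    # Say test 0 is the one that fails; record where it ran.
    positions.append(order.index(0))

avg_position = sum(positions) / trials
print(f"average position of the failing test: {avg_position:.1f}")
# close to 49.5, i.e. roughly halfway through the 100-test run
```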



Sequential/Unoptimised Test Pipeline



An intelligent test runner (such as NCrunch) can make use of several techniques to ensure the highest value tests are run first. These techniques include:

  • Prioritising impacted tests
  • Running faster tests earlier
  • Focusing on already failing tests (as these are likely to be of more interest)
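The techniques above can be sketched as a simple sort over the test suite. To be clear, this is a hypothetical illustration and not NCrunch's actual algorithm - the `Test` fields and the scoring order are assumptions for the sake of the example:

```python
# Hypothetical sketch of test prioritisation: sort tests so that
# impacted tests, then already-failing tests, then fast tests run first.
from dataclasses import dataclass

@dataclass
class Test:
    name: str
    impacted: bool    # exercises code changed since the last run
    failing: bool     # failed on the previous run
    duration: float   # last observed execution time, in seconds

def priority(test: Test) -> tuple:
    # Python sorts tuples element by element; False sorts before True,
    # so negating the flags puts impacted/failing tests at the front.
    return (not test.impacted, not test.failing, test.duration)

def build_pipeline(tests: list) -> list:
    return sorted(tests, key=priority)

pipeline = build_pipeline([
    Test("slow_integration", impacted=False, failing=False, duration=300.0),
    Test("fast_unit", impacted=True, failing=False, duration=0.1),
    Test("regression_repro", impacted=True, failing=True, duration=2.5),
])
print([t.name for t in pipeline])
# ['regression_repro', 'fast_unit', 'slow_integration']
```

With this ordering, the test most likely to fail (impacted and already failing) runs first, and the five-minute integration test no longer stands between you and your feedback.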

When these techniques are applied, tests are far more likely to fail early in the pipeline rather than later.



Optimised Test Pipeline



The implications of this aren't small. Not only does earlier notification of failures save the time otherwise lost fixing issues raised late in the process, but there are substantial savings in developer effort, as developers no longer need to manually select high-value tests to run.

Where there is a reliable indication of the risk of test failure at any given point in the run, it also becomes possible to make informed decisions about whether to continue caring about the results of tests that sit late in the pipeline. For example, if your tests take 12 hours to run but there is a 97% certainty that any failure would occur within the first hour, you may be able to save yourself 11 hours of watching tests run.
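As a back-of-the-envelope check on those numbers (using the hypothetical 12-hour/97% figures from the example above):

```python
# Back-of-the-envelope arithmetic for the hypothetical scenario:
# a 12-hour run where 97% of failures surface within the first hour.
run_hours = 12.0
early_window_hours = 1.0
p_failure_in_window = 0.97

# Stop watching after the early window: this is what you get back...
hours_saved = run_hours - early_window_hours

# ...and this is the residual chance that a failure slips past unseen.
p_missed = 1.0 - p_failure_in_window

print(f"hours saved per run: {hours_saved}")           # 11.0
print(f"risk of a late, unseen failure: {p_missed:.2f}")  # 0.03
```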

As applications continue to grow in size and complexity, test execution times will continue to increase. Don't be afraid of long running pipelines of tests - just be smart about how you manage them!


About Me

I'm Remco Mulder, the developer of NCrunch and a code monkey at heart. I've spent the last decade consulting around Auckland and London, and I currently live in New Zealand. My interests include writing code, writing tests for code, and writing more code! Follow me on Twitter.