Concurrent Testing: Your Unit Tests Will Never Be the Same

by Hit Subscribe 19. January 2019 03:42

What do you picture when you think about concurrent testing? Do you scoff at the very idea of it? Are you willing to entertain the notion of concurrent testing, but are skeptical about how it can work in practice? Or are you comfortable with your idea of concurrent testing, but are looking for a way to add it to your current workflow? Let's examine what concurrent testing is and how it can improve your tests, workflow, and efficiency.


A Brief History Lesson

Assuming you have an idea of what testing software is (and if you don't, here's a resource so you can get up to speed), let's look back at the history of software testing and see how the idea of concurrent testing developed.

Back in the 90s, testing came late in the software development life cycle. First, project managers gathered the requirements. Then the architect and senior developers spent weeks or even months crafting the design. Software developers implemented that design. And then, finally, the developers handed the software off for testing. Testers manually ran it through scenarios designed to wring out as many bugs as possible before release. They checked for things like "Does this name field correctly handle a first name that's 256 characters long?" and "What happens if I press this button 75 times?" and "Will the software crash if I enter a password that contains punctuation only?"

As you can see from the illustration below, the process followed the "waterfall development" model, with testing happening toward the end. (If you're wondering about the name, the process was often depicted as one phase "falling" into another.)

[Figure: flow chart illustration of the waterfall methodology]

Concurrent Testing in Waterfall Development

Under the waterfall model, developers create tests as documents with a series of steps for a human tester to follow. Because humans run them, the tests are written to optimize the testers' time. The thinking is that if the tester has to log in to the software before using it, she might as well test some of the password scenarios too. And hey, if there was a last-minute change to a button, just add a step to the existing test.

The downside to this approach is that such a test can fail in many different ways, so a single failure could point to any of several underlying problems. Or worse, a failure at step two means the remaining 18 steps of the test can't be run at all.

In the testing phase, the size of your testing team determines how many tests can be run concurrently. In practice, the overall testing effort proceeds in fits and starts. Some days, each tester can complete multiple tests. Other days, diagnosing test failures results in only one or two tests being run. The feedback cycle from development to test result is measured in days. The idea of automated concurrent testing sounds like a pipe dream.

But there's a better way. If you're still writing and testing software following this model, I invite you to read on and consider a different approach.

An Alternative to Waterfall Development

In early February 2001, a group of software professionals met to discuss alternatives to the heavyweight software processes that reigned at the time. What emerged from that gathering came to be known as the Agile Manifesto. With it, the industry gained an alternative to the waterfall model of development. One of the manifesto's key values is favoring individuals and interactions over processes and tools. So let's examine what happens when developers and testers interact more.

Concurrent Testing in Agile Development

Agile development ushered in a shorter, more iterative development model than the waterfall approach that preceded it. In the spirit of agile, developers created smaller features that could be tested even though the rest of the software wasn't yet finished. Tests shrank and became more focused, and a test failure typically pointed to a single cause. Developers received feedback more quickly. And since they'd finished writing that code only a few days earlier, it was easier for them to troubleshoot a bug.

If you stood back from this process and squinted a bit, it would almost appear as if the development and testing were being done concurrently. In the context of the history I discussed here, it makes sense why this would be appealing:

  • Testers no longer had to wait for everything to be completed first.
  • Developers coming from a waterfall model could enjoy faster feedback.
  • Testers coming from a waterfall model no longer had to spend time running lengthy, torturous test scripts.
  • Test failures would no longer be greeted with groans as managers watched delivery deadlines slip. Instead, test failures actually helped drive the development of the software.

Automate All the Things!

One side effect of the advent of smaller, more focused tests is that running them became a mind-numbing exercise for a person to do. Handing Kelly a stack of tests comprising 20 variations on a single scenario borders on the inhumane. And while Kelly's busy running those tests, she can't test the new work the developers are finishing. We've improved the concurrence within the development cycle, but running the tests is still limited by the number of people we can throw at it. Should we just hire more people?

No. Better than more people, you know who's great at performing repetitive tasks over and over? A computer. Rather than asking Kelly to run those 20 different variations, a computer can run them. The computer can also do it much more quickly than Kelly can. That, in turn, frees up Kelly to do the more creative part of her job: exploring the software and looking for new bugs. The developers get even better test feedback since they can run the automated tests any time they want. They're free to refactor existing parts of the application knowing that the existing tests will catch any regression bugs.
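Those 20 variations translate naturally into a table-driven test. Here's a minimal sketch; the `validate_username` function and its rules are hypothetical, invented purely to illustrate the pattern:

```python
# A hypothetical validation rule: names must be 1-256 characters
# and can't consist of punctuation alone.
def validate_username(name):
    """Accept names of 1-256 characters that aren't purely punctuation."""
    if not 1 <= len(name) <= 256:
        return False
    if all(not ch.isalnum() for ch in name):
        return False
    return True

# The variations Kelly once ran by hand become a data table;
# the computer walks every case in milliseconds.
CASES = [
    ("alice", True),       # ordinary name
    ("a" * 256, True),     # exactly at the length limit
    ("a" * 257, False),    # one past the limit
    ("", False),           # empty input
    ("!!!", False),        # punctuation only
    ("bob42", True),       # letters and digits
]

def run_all():
    """Run every case and return the inputs that misbehaved."""
    return [value for value, expected in CASES
            if validate_username(value) != expected]

if __name__ == "__main__":
    failures = run_all()
    assert failures == [], failures
    print("all cases passed")
```

Adding a 21st variation is now a one-line change to the table rather than another pass through a manual script.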

Automated Concurrent Testing

What has this innovation done for our concurrent testing? Remember, the move to agile development vastly improved the concurrence of the development-testing cycle. With automated testing, we've improved the concurrence of running the tests themselves. We're no longer limited by the number of testers on the team. If it takes too long to run all the automated tests, just add more computers.

And what effect has this improvement had on the tests themselves? When you have a computer running dozens or even hundreds of tests, you quickly discover the pain of flaky tests. The ideal automated test is deterministic. The order in which it's run doesn't matter. The time of day in which it's run doesn't matter. Back when tests were scripts that humans manually performed, the tester could accommodate minor errors in the test steps. Computers can't. When the automated tests are flaky and non-deterministic, developers stop relying on them. They no longer use test failures as a launchpad for fixing that new bug. Instead, they mutter something along the lines of "Yeah, it does that sometimes. Just run them again."
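A common source of that flakiness is hidden dependence on the clock. This sketch shows the pattern; `greeting` is a hypothetical function invented for illustration, and the fix is simply letting the test inject the time:

```python
import datetime

def greeting(now=None):
    """Return a time-of-day greeting; the caller may inject the clock."""
    now = now or datetime.datetime.now()
    return "Good morning" if now.hour < 12 else "Good afternoon"

# Flaky: passes before noon, fails after.
# "Yeah, it does that sometimes. Just run them again."
# def test_greeting_flaky():
#     assert greeting() == "Good morning"

# Deterministic: the test controls the clock, so the order and
# time of day in which it runs no longer matter.
def test_greeting_morning():
    fixed = datetime.datetime(2019, 1, 19, 9, 0)
    assert greeting(fixed) == "Good morning"

def test_greeting_afternoon():
    fixed = datetime.datetime(2019, 1, 19, 15, 0)
    assert greeting(fixed) == "Good afternoon"
```

The same injection trick applies to other sources of non-determinism, such as random numbers and shared files or databases.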

Strive to make your automated tests smaller, more focused, and more deterministic. They'll become a valuable tool that frees both developers and testers to do more of the work they love.

Can We Do Even Better?

Thus far, we haven't discussed unit tests. Unit tests take automated concurrent tests even further. The software developer runs these tests while she's still developing the code. She may use these tests to help guide the implementation of the feature, or she may use them as a set of automated examples of what her code can handle.
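In practice, that looks something like the sketch below. The `password_strength` function and its scoring rules are hypothetical; the point is that the tests sit right beside the code and double as executable examples of what it handles:

```python
def password_strength(password):
    """Score a password 0-3 based on length, digits, and punctuation."""
    score = 0
    if len(password) >= 8:
        score += 1
    if any(ch.isdigit() for ch in password):
        score += 1
    if any(not ch.isalnum() for ch in password):
        score += 1
    return score

# The developer runs these while still writing the code above;
# each test documents one behavior the function promises.
def test_short_plain_password_scores_zero():
    assert password_strength("abc") == 0

def test_long_mixed_password_scores_three():
    assert password_strength("s3cret-sauce") == 3
```

Because the tests exist before the feature ships, they run concurrently with development itself rather than after it.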

Developers run automated unit tests concurrently with their development. Testers run automated regression tests concurrently with their exploratory testing. Development and testing work together concurrently to add new features and functionality. So we've ended up with a suite of tests in multiple layers of the testing pyramid.

How Far We've Come

As we've traced the history from the days of waterfall development to automated testing in this post, we've seen tests become smaller, more focused, and more deterministic. We've taken what was a tedious, manual process and transformed it into a valuable partner to developers as they write new code. We've shortened the time between a test uncovering an error and a developer being able to fix it.

Has the idea of automated concurrent testing stimulated some ideas for you? Take what we've examined and see if your tests can be improved. Once you've had a taste of concurrent testing, you won't want to go back.

This post was written by Eric Olsson. Eric was first introduced to programming in high school, and he's been fascinated by it ever since. Most of his professional experience has been working with C# and other .NET technologies. After being introduced to agile methodologies and practices such as test-driven development, he's had a particular interest in applying them to the code he writes.
