8 Things Making Your Unit Tests a Mess

by Hit Subscribe 11. January 2019 06:51

Don't worry, it happens to most of us. We're getting our TDD groove on and feeling confident that our tests ensure we're writing good code. But gradually, over time, we notice that something is off. Test coverage starts to drop. We begin to avoid changing the tests as much as possible. We feel less enthusiastic about running our unit test suite. And we start to wonder if these tests provide any value at all.

So what just happened? Well, it sounds like we've discovered a mess. A unit testing mess. And it's spreading through our application like wildfire.

But how did we get to this point? And what can we do to stop it? Today we'll talk about eight common things that, over time, make our unit tests a mess. We'll also discuss what we can do to stop the spread of this mess and get our unit testing house in order. Now let's get started.


1. Inconsistent Format

Inconsistencies in tests crop up innocently enough at first. You're not sure what standards you want to follow. New people join the team with their own ideas of how tests should be structured and named.

You don't think it's a big deal. Most of these inconsistencies don't make much of a difference, and they definitely don't decrease the value of the tests; they're just different styles. So eventually some unit test names use the given-when-then format, others start with "should," and another set simply mirrors the names of the methods under test.

In addition to naming, setup logic begins to wander to different parts of the test code. Sometimes the setup lives right in the test method. Other times, it's in a setup method. Occasionally it's extracted into a separate function. And then eventually it all moves to a helper class, and you have no idea which setup belongs to which test or what's really needed.

The more variation you have in your test structure, the more time you spend context-switching as you read and maintain the tests. And the more confusing your tests become, even to you and the other team members who wrote them.

Even if it doesn't seem like a big deal, you should come to a consensus on what your tests should look like. It will increase readability for your team and make everyone more efficient.
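
To make that concrete, here's a rough sketch of one possible convention, written in Java with JUnit 5 (the same idea carries straight over to NUnit or xUnit); ShoppingCart and the test name are made up purely for illustration:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A hypothetical ShoppingCart, defined here only so the example compiles.
class ShoppingCart {
    private double total = 0;
    void add(double itemPrice) { total += itemPrice; }
    double total() { return total; }
}

class ShoppingCartTest {
    // One agreed-upon convention, applied everywhere:
    // given_when_then names plus arrange/act/assert sections.
    @Test
    void givenAnEmptyCart_whenAnItemIsAdded_thenTheTotalEqualsItsPrice() {
        // Arrange
        ShoppingCart cart = new ShoppingCart();

        // Act
        cart.add(12.50);

        // Assert
        assertEquals(12.50, cart.total(), 0.001);
    }
}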

2. Giant Unit Tests

Every time we add functionality, we add a few tests. Of course, this is acceptable and even encouraged. But what happens when suddenly your test classes or files require a table of contents to find out what's going on?

Reportedly, one benefit of having a good unit test suite is that it lets new developers get up to speed on the behavior of your application quickly. However, if your unit tests read like Ulysses, it may be time to review and refactor.

Take a look at your long tests to see why they're long. Is there just way too much setup? That could be a sign of poor isolation in your production code. Are there too many asserts in every test? Look at how you can divide up the assertions so that they're easier to read and maintain.
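
As a sketch of what that split can look like, here's a small Java/JUnit 5 example where each test asserts a single behavior and says so in its name; OrderValidator is an invented class, not a recommendation:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertFalse;

// A hypothetical validator, used only for illustration.
class OrderValidator {
    boolean isValid(double amount) { return amount > 0; }
    String errorFor(double amount) { return amount > 0 ? null : "amount must be positive"; }
}

class OrderValidatorTest {
    // Instead of one giant test asserting every rule at once,
    // each test checks a single behavior and says so in its name.
    @Test
    void rejectsNonPositiveAmounts() {
        assertFalse(new OrderValidator().isValid(0));
    }

    @Test
    void reportsAnErrorMessageForNonPositiveAmounts() {
        assertEquals("amount must be positive", new OrderValidator().errorFor(0));
    }
}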

3. Too Many Mocks

Mocking all the things sure does make everything fast. But it also drains the value from your tests: you start to lose faith that they actually confirm anything. Worse, you could end up with a suite of tests that only shows your mocks work, not that your actual system works.

Having too many mocks often points to poor isolation of the code under test. Another cause could be that you're testing too much of the implementation and not enough of the behavior. Attempt to stick to mocking external dependencies, not everything else. There are always exceptions, but mocking shouldn't be the rule.
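
Here's a rough Java sketch using JUnit 5 and Mockito where only the external boundary gets mocked and the real domain logic still runs; PaymentGateway and CheckoutService are invented names for illustration:

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

// A hypothetical external dependency: the only thing we mock.
interface PaymentGateway {
    boolean charge(String account, double amount);
}

// Invented domain code under test; no mocks needed for this part.
class CheckoutService {
    private final PaymentGateway gateway;
    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean checkout(String account, double amount) {
        return amount > 0 && gateway.charge(account, amount);
    }
}

class CheckoutServiceTest {
    @Test
    void chargesTheGatewayForAValidOrder() {
        // Mock only the boundary; exercise the real domain logic.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("acct-1", 20.0)).thenReturn(true);

        boolean charged = new CheckoutService(gateway).checkout("acct-1", 20.0);

        assertTrue(charged);
        verify(gateway).charge("acct-1", 20.0);
    }
}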

4. Not Enough Mocks

On the other side of the mocking spectrum, we have test suites with not enough mocks. In this case, every test uses real components, including the file system, database, and even external dependencies. Not surprisingly, this can lead to slower performance and flakier tests.

We sometimes get here because we want to be thorough in our testing. A little too thorough. There are some integrations that should be validated and tested, but not from within our unit test suite. Your unit tests should stick to testing your domain. Stay within that boundary.
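
One common way to stay inside that boundary is to put the database or file system behind a small interface and hand the unit test an in-memory fake. Here's a rough Java/JUnit 5 sketch of that idea; CustomerRepository and GreetingService are made-up names:

import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A hypothetical boundary: the real implementation would hit a database.
interface CustomerRepository {
    String findName(int id);
}

// Invented domain logic under test.
class GreetingService {
    private final CustomerRepository repository;
    GreetingService(CustomerRepository repository) { this.repository = repository; }
    String greet(int id) { return "Hello, " + repository.findName(id) + "!"; }
}

class GreetingServiceTest {
    // A tiny in-memory fake keeps the test inside the domain boundary:
    // no database, no file system, no network.
    static class InMemoryCustomerRepository implements CustomerRepository {
        private final Map<Integer, String> names = new HashMap<>();
        InMemoryCustomerRepository() { names.put(1, "Ada"); }
        public String findName(int id) { return names.get(id); }
    }

    @Test
    void greetsTheCustomerByName() {
        GreetingService service = new GreetingService(new InMemoryCustomerRepository());
        assertEquals("Hello, Ada!", service.greet(1));
    }
}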

5. Flakiness

Yes, your tests pass. Most of the time. Well, yeah, every once in a while they fail for no reason. Then you just run them again. And most of the time they turn green with no issues.

Now you may be thinking that a flaky test or two is inevitable. Your application has a lot going on. And sometimes things fail. This is normal, right?

Though perhaps not unusual, flaky tests should never be considered normal. Whether they're due to not properly testing asynchronous methods, not accounting for randomization, or many other reasons, flaky tests reduce confidence in the test suite. They make failing builds a more accepted occurrence. And if one flaky test is considered OK, then another two or three will sprout up in no time.

If you find a flaky test, figure out the root cause and fix it. The longer flaky tests exist, the more damage they cause.
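
For example, a lot of time-related flakiness disappears once the code stops reading the wall clock directly. Here's a rough Java/JUnit 5 sketch of that idea with an injected Clock; the TrialPeriod class is invented for illustration:

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// A hypothetical class that depends on "now"; it takes a Clock so tests
// can pin time instead of racing the wall clock.
class TrialPeriod {
    private final Instant expiresAt;
    private final Clock clock;
    TrialPeriod(Instant expiresAt, Clock clock) {
        this.expiresAt = expiresAt;
        this.clock = clock;
    }
    boolean isExpired() { return clock.instant().isAfter(expiresAt); }
}

class TrialPeriodTest {
    @Test
    void trialIsExpiredOnceTheDeadlineHasPassed() {
        Instant deadline = Instant.parse("2019-01-01T00:00:00Z");
        // A fixed clock removes the nondeterminism behind many flaky tests.
        Clock fixedClock = Clock.fixed(deadline.plusSeconds(60), ZoneOffset.UTC);

        assertTrue(new TrialPeriod(deadline, fixedClock).isExpired());
    }
}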

6. Interdependencies

Another common cause of flakiness involves interdependencies between tests. If tests rely on previous tests to put the application into a particular state, you're going to run into issues. And no, the answer is not to force them to always run in a particular order. That simply masks the problem and encourages even more interdependencies in the future.

Make sure you can run each test in isolation. Any setup logic required by a test should be run with that test.
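
Here's a small Java/JUnit 5 sketch of that idea, where every test builds its own fresh fixture instead of leaning on shared state; the Counter class is made up for illustration:

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A hypothetical counter, used only to illustrate per-test setup.
class Counter {
    private int value = 0;
    void increment() { value++; }
    int value() { return value; }
}

class CounterTest {
    private Counter counter;

    // Each test gets a brand-new fixture, so no test depends on state
    // left behind by a previous one, and any execution order works.
    @BeforeEach
    void createFreshCounter() {
        counter = new Counter();
    }

    @Test
    void startsAtZero() {
        assertEquals(0, counter.value());
    }

    @Test
    void incrementAddsOne() {
        counter.increment();
        assertEquals(1, counter.value());
    }
}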

The good news is that once you rid yourself of these interdependencies, you can then start thinking about adding parallel execution with NCrunch. With parallel execution, all these independent tests can run in separate processes at the same time. And that's another way you can make your tests run more quickly.

7. Slow Performance

We all know that unit tests are meant to be fast. And they usually start out with run times in the seconds. But what happens along the way? They slow down. A bit too much setup here, not enough mocking there, and suddenly your once-speedy tests are bringing your development to a crawl.

The point of unit tests is getting fast and reliable feedback. But the slower the tests are, the more likely you'll skip running them frequently. You may also run just one test at a time to get a tiny level of confidence with your changes. And then get a big nasty surprise when you finally make time to run all the tests. Eventually, you may just let the CI server tell you if there's a problem.

If only there were an easy way to find out which tests are slowing you down.

But wait. There is! Take NCrunch for a spin and find out which of your unit tests are dragging you down the most. It has a great performance-monitoring feature that lets you see where your tests take the most time.

Once you know your slowest tests, work to improve them. This may mean refactoring the test. Or it may mean you need to consider what's going on in your production code.
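
As one possible complement to that, here's a hedged Java/JUnit 5 sketch that puts a time budget on a unit test and tags genuinely heavyweight tests so the fast suite can exclude them; PriceCalculator and the tag name are illustrative choices, not a standard:

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;
import static org.junit.jupiter.api.Assertions.assertEquals;

// PriceCalculator is a made-up class standing in for your domain code.
class PriceCalculator {
    double discounted(double price, double rate) { return price * (1 - rate); }
}

class PriceCalculatorTest {
    // A time budget on the test makes creeping slowness visible: the test
    // fails loudly instead of quietly dragging down the whole suite.
    @Test
    @Timeout(1) // seconds
    void appliesATenPercentDiscount() {
        assertEquals(90.0, new PriceCalculator().discounted(100.0, 0.10), 0.001);
    }

    // Anything that genuinely needs heavy setup gets tagged (the tag name
    // is arbitrary) so the fast suite can exclude it via a tag filter.
    @Tag("slow")
    @Test
    void recalculatesPricesForAFullCatalog() {
        // ...an expensive scenario would live here...
    }
}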

8. Bad Production Code

And that brings us to one more thing before we go. If your unit tests are a mess, it's time to take a hard look at your production code. If your production code is hard to test, then your unit tests will always be a mess.

Poor architecture can result in tests with excessive setup logic, too many mocks, and a lack of easily testable behaviors. Take some time to review your architecture and see if it still makes sense for your application as it stands today.
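
To show what "hard to test" often looks like in practice, here's a rough Java sketch contrasting a class that creates its own dependency with one that accepts it through the constructor; every type name here is invented:

// All types here are invented purely to illustrate the contrast.
class SmtpClient {
    SmtpClient(String host) { /* would connect to a real server */ }
    void send(String address, String body) { /* network call */ }
}

interface MailClient {
    void send(String address, String body);
}

// Hard to unit test: the dependency is created inside the method, so a
// test can't substitute it and ends up touching a real mail server.
class WelcomeEmailer {
    void send(String address) {
        new SmtpClient("smtp.example.com").send(address, "Welcome!");
    }
}

// Easier to test: the boundary is injected, so a unit test can pass a
// fake or mock while production wiring supplies the real client.
class TestableWelcomeEmailer {
    private final MailClient client;
    TestableWelcomeEmailer(MailClient client) { this.client = client; }
    void send(String address) { client.send(address, "Welcome!"); }
}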

It's Time to Clean House

There are a lot of reasons why our unit tests end up in a messy state. It usually happens gradually, and we don't notice until it feels too late. Fortunately, we can still do something to fix it. It's time to roll up our sleeves and get to work.

Let's start taking our test suite as seriously as our production code. Though your unit tests don't need to abide by all the rules of your production code, they should still be reviewed regularly. We need to put in the time to review and refactor our unit tests on an ongoing basis. And let's take advantage of tools like NCrunch to make our test suite clean again.

This post was written by Sylvia Fronczak. Sylvia is a software developer who has worked in various industries with various software methodologies. She’s currently focused on design practices that the whole team can own, understand, and evolve over time.
