Test-Driven Development: A First-Principles Explanation

by Hit Subscribe 4. September 2018 22:45

In 2018, just about every developer has at least heard of test-driven development. But that doesn't mean every team is doing it or that every project has an extensive suite of tests. It doesn't even mean everyone likes it or believes in it.

Join me as I start out with some basic facts that no one should be able to deny. From there, I'll work towards what TDD is and why we should use it.

What Is a First Principle?

Wikipedia defines a first principle as

a basic, foundational, self-evident proposition or assumption that cannot be deduced from any other proposition or assumption

This means we'll have to start from some basic truths that cannot be denied. Just telling people tests improve the design and reliability of your code won't cut it.

From these basic truths, I'll have to work my way towards explaining why test-driven development is a good idea.

What Basic Truths Can We Agree On?

I'll put some statements out there that I think we can all agree on:

  • Everyone makes mistakes.
  • Software can be or become complex.

Let's dig into those a bit.

We All Make Mistakes

The fact that everyone makes mistakes cannot be denied. It's a natural trait of human beings. We make mistakes because of several factors:

  • Maybe we didn't have all the facts.
  • We might ignore some facts because we're tired or unfocused.
  • Maybe we achieved what we wanted, but now that we see it in action, we realize we want something else.

There are plenty of reasons why you might see a certain decision or action as a mistake. The fact remains that we all make them. And for developers specifically, this means we'll make mistakes in the development of our software.

Let's put that statement aside for now, but remember: we all make mistakes while developing software.

Complexity in Software

The other statement I made was that software can be or become complex. Not all software is complex, and some software might start out simple but evolve into something complex. Also, what's complex for one person might not be for another. But at a certain point, a developer might experience an application's code as complex. This might be due to the amount of code, the language, the quality of the code, the paradigms used, the developer's experience, etc. We'll revisit this statement later.

So we have these two basic premises that we accept as truth. If you don't agree with me on these two things, you can stop reading. Everything that follows in this article, I will deduce from these two facts. If you don't accept them as facts, the rest of the article has no value. If you do accept my premises, let's move on.

We All Make Mistakes While Developing Software

Developers are only human, so we're bound to make mistakes as software developers. Software development is more than just writing code. It's reading logs, interacting with users and clients, brainstorming, thinking out solutions, and a lot more. In all these aspects of developing software, we could be making mistakes. I'll focus on one aspect: writing code.

If we can agree that we make mistakes as humans, we can agree we'll make mistakes as software developers. And from that follows that we will make mistakes while writing code.

This means that even if everything is clear and we know what we need to implement, we will still, somewhere along the way, make mistakes in how we implement it. This might be as small as writing a "+" (plus) where there should be a "-" (minus), or forgetting to increment a counter in a "for" loop. I know I've done both. But it could also be way more complex. You might write some difficult calculation or algorithm and implement it incorrectly, giving you a result you don't want.
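To make that concrete, here's a minimal sketch (the pricing helper and its values are my own invention, not from the article) of how a one-character sign mistake slips in, and how even a trivial automated check exposes it:

```python
# Hypothetical pricing helper: a classic one-character mistake is
# writing "+" where "-" belongs.
def total_with_discount_buggy(price, discount):
    return price + discount  # bug: adds the discount instead of subtracting

def total_with_discount(price, discount):
    return price - discount  # intended behavior

# Even a tiny automated check catches the slip immediately.
assert total_with_discount(100, 15) == 85
assert total_with_discount_buggy(100, 15) == 115  # the wrong result the bug produces
```

The compiler happily accepts both versions; only a check against an expected value tells them apart.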

So we're at the next step in our thought process: we all write bugs.

I think we can also assume that we don't want bugs. Bugs cause software to not do what we or the user wants, and they can cost money. So how can we avoid this?

Testing Your Software

Catching bugs sounds easy: run the software and check that it works as you intended. But here our two first truths come back into play: people make mistakes and software can be complex.

Because software is complex, your test might involve several steps that you have to reproduce perfectly. To make matters worse, you might not be able to perform some tests at certain moments. Logic that works with leap years is a great example of that.
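A minimal sketch of that leap-year case (the `is_leap_year` helper is hypothetical, though the Gregorian rule it encodes is standard): an automated test can exercise the February 29th code path on any day of the year, instead of waiting for one to come around.

```python
def is_leap_year(year):
    # Gregorian rule: every 4th year is a leap year, except centuries,
    # unless the century is also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# These checks run on demand; no need to wait for an actual leap year.
assert is_leap_year(2016)
assert not is_leap_year(2018)
assert not is_leap_year(1900)  # century exception
assert is_leap_year(2000)      # century divisible by 400
```

The century exceptions are exactly the cases a manual, calendar-bound test would be least likely to cover.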

Even if you can perform the same test over and over again, doing so manually leaves room for human error. You might perform the test incorrectly and see an outcome you didn't expect. That leaves you with a time-wasting next step: either hunting for a bug that isn't there or running the test again.

A logical next step in the software industry is to automate these tests. This allows us to put more time and effort into activities that are more valuable to the company, like adding new features. Also, most developers don't like doing the same activity over and over again.

Handling Complexity

Our second basic truth was that software can be complex. If your software is complex, the likelihood of making mistakes is greater. That's why the more complex the code is, the more it can benefit from test-driven development.

But it's also valid the other way around: test-driven development can help you manage the complexity. For this statement to be true, you'll need to introduce a third truth into your team and codebase: your automated tests should be small enough to comprehend. This means they'll usually test one thing and work in isolation. If you can make this a reality, your code should follow. In other words, your code will also be split up into smaller pieces that are easier to understand.
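As an illustration of "small enough to comprehend", here's a sketch (the parsing and validation helpers are invented for this example) where each test checks exactly one behavior in isolation:

```python
# Hypothetical units: keeping each test this small nudges the production
# code into equally small, independently testable pieces.
def parse_amount(text):
    # Accept a comma as the decimal separator, e.g. "3,50".
    return float(text.replace(",", "."))

def is_valid_amount(amount):
    return amount >= 0

# pytest-style tests: one behavior, one assertion each.
def test_parses_comma_as_decimal_separator():
    assert parse_amount("3,50") == 3.5

def test_rejects_negative_amounts():
    assert not is_valid_amount(-1)

# Invoked directly so the sketch is self-contained; normally a test
# runner such as pytest would discover and run these.
test_parses_comma_as_decimal_separator()
test_rejects_negative_amounts()
```

If parsing and validation lived in one tangled function, a test this small would be impossible to write, which is exactly the design pressure described above.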

The Scientific Method

Once you have a suite of automated tests, you now have a way of (almost) proving your code does what it should do. It's the scientific method of not assuming things but rather trying to prove them. In mathematics, this is done with proofs that derive the expected result. In other fields of science, we can perform experiments to (almost) prove some hypothesis.

I say almost prove because you can never actually prove a hypothesis right. You can only prove it wrong. Check out this great talk by Richard Feynman on the scientific method, especially starting at 3:47. In short, he states that you start with a hypothesis, deduce some consequences from it, and finally perform an experiment to measure those assumed consequences. If your measurements fit the hypothesis, your hypothesis is likely to be right.

We can say the same about software. You assume your code does what it should do, so you write a test like, "if I enter these variables, this should be the outcome." If the test fails, you've proven your code doesn't do what it should. If it passes, your code likely does what it should, but you haven't proven it, because your tests might not cover all situations. If a user later encounters a bug, you know you're missing a test. Write a test that reproduces the bug and you'll see your hypothesis was wrong: your code doesn't do what it's supposed to. But then you can fix the code, and your hypothesis becomes even more likely.
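A small sketch of that cycle (the `word_count` function and its bug are invented for illustration): the original tests passed, a user then reported that an empty string was counted as one word, and a regression test now pins down the fix.

```python
# A first implementation used text.split(" "), which returns [""] for an
# empty string and so counted one word. str.split() with no argument
# returns [] instead, which fixes the reported bug.
def word_count(text):
    return len(text.split())

# The original tests: the hypothesis "word_count is correct" survived these.
assert word_count("test driven development") == 3
assert word_count("hello") == 1

# The regression test written after the bug report; it fails against the
# old text.split(" ") version and now guards the fix.
assert word_count("") == 0
```

Each bug a user finds becomes a new experiment in the suite, so the hypothesis "this code is correct" gets harder and harder to falsify.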

And that's why we should use TDD. It's what gets us the closest we can get to proving our code does what we want it to do all of the time.


This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.
