London TDD Vs. Detroit TDD: You're Missing the Point

by Hit Subscribe · November 2, 2018

When it comes to test-driven development (TDD), you may have heard of London style and Detroit style (also known as Chicago style).

Maybe you haven't, and these terms are new to you. Maybe you have, and you're soul searching to figure out which style will work best on your project. Either way, studying both these styles is a noble effort—you can gain more insight into TDD as a whole. But if you only focus on one or the other, then you are missing the point.

So, what's the point? For the best results, don't adhere to either style; instead, incorporate parts of both. TDD was originally intended to test all the way from the top to the bottom in each and every test. You start with the simplest logic and evolve your way to more complex cases, and you refactor both your tests and your production code as you go. I will dive deeper into this later, but first I want to summarize the two main styles.

[Image: London TDD vs. Detroit TDD boxing gloves]

London Style: Seeing Through the Smog

First, let's all agree that when we talk about TDD, we are referring to the Red-Green-Refactor cadence that Kent Beck made popular. Within this, we first talk about London-style TDD. This style of test-driving is top-down: you start at the controller or HTTP level and test-drive your way to the bottom. London style started soon after Extreme Programming (XP) grew popular in the early 2000s.

In this style, you implement tests and build out your application entry points, such as your controller. You mock and stub out the bits underneath, committing to fleshing out their behavior later on. Class by class, you clear the smog, test-driving out behavior until you have reached the "bottom" of the use case.
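As a rough illustration, here is what a London-style first test might look like in Python. The `OrderController` and pricing-service names are invented for this sketch; the point is that the layer underneath is mocked and its real behavior deferred.

```python
# London-style sketch: test-drive the entry point first and mock
# the layer beneath it. All names here are invented for illustration.
from unittest.mock import Mock


class OrderController:
    """Entry point we test-drive first; pricing is fleshed out later."""

    def __init__(self, pricing_service):
        self.pricing_service = pricing_service

    def checkout(self, items):
        total = self.pricing_service.price(items)
        return {"status": "ok", "total": total}


def test_checkout_returns_priced_total():
    # The layer below is stubbed out; its behavior is a commitment
    # we will honor in a later test-driving session.
    pricing = Mock()
    pricing.price.return_value = 42
    controller = OrderController(pricing)

    result = controller.checkout(["apple", "banana"])

    assert result == {"status": "ok", "total": 42}
    pricing.price.assert_called_once_with(["apple", "banana"])
```

Each mocked collaborator becomes the next class you test-drive, clearing the smog one layer at a time.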

This has the advantage of letting you sort out how users will navigate your system and what the final output will be. It requires you to be upfront when thinking about your system's API, and it helps ensure you don't write extra fluff that only produces noise.

People commonly criticize London style for creating test-induced damage, making your system brittle. If you change the public method of even one class, the change ripples through the classes that use it, breaking multiple tests. This creates more work when you want to refactor, which then encourages you and your team to avoid making such changes.

London style's critics are correct about this, but they are still missing the point. More on that later.


Detroit Style: Populating a City

Detroit style, also known as Chicago style, is a bottom-up approach. It started in Detroit during Extreme Programming's inception at Chrysler's C3 project, hence the Detroit moniker. (Although I guess people like Chicago better because that name has become more popular.)

In this style, we start at the bottom, the "core logic" of the system. We start fleshing out what we feel are the most important or most complex bits of logic. We then build on top of that, populating the use case until it is completely tested.
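A minimal sketch of that bottom-up start, with invented names: the core calculator is test-driven first, with real objects and no mocks, and higher layers are built on top of it later.

```python
# Detroit-style sketch: start with the core logic and use real
# objects, no mocks. Names are invented for illustration.

class PriceCalculator:
    """Core logic, fleshed out first; higher layers come later."""

    def __init__(self, catalog):
        self.catalog = catalog  # plain dict of item name -> unit price

    def total(self, items):
        return sum(self.catalog[item] for item in items)


def test_total_sums_unit_prices():
    calculator = PriceCalculator({"apple": 3, "banana": 2})
    assert calculator.total(["apple", "banana", "apple"]) == 8
```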

The advantage here is a more evolvable and bug-free system. Because you have fewer mocks, you can change your code easily without breaking many tests. You also get more cohesive coverage since your higher-level classes combine already tested lower-level classes. This ensures they work together effectively.

The disadvantages are that you are deferring the API and therefore could produce logic that is way off base from what the user actually needs. It's also easy to over-design since you don't really know what the end result will look like.

Once again, these criticisms are appropriate. But once again, the critics are missing the point.

Why It Doesn't Matter

The original founders of TDD didn't have either of these styles in mind. Their intention was to test the whole application, from the API level on down, one simple piece of logic at a time. Healthy testing exercises the entire use case in every test, code path by code path.

Test-driving, as it was originally intended, has no "top" and no "bottom" to start from. You think about the end-to-end behavior, as in London style, but you mock only boundary classes, such as connections or other slow classes, because the tests need to be fast. You start by testing the simplest behavior you can think of and evolve to more complex logic as you go.
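Here is a sketch of that blend, with invented names: the test enters at the top of the module and runs through real collaborators, faking only the boundary (here, a repository that would otherwise hit a database).

```python
# Blended sketch: test from the entry point down through real
# collaborators; fake only the slow boundary so tests stay fast.
# All names are invented for illustration.

class InMemoryOrderRepository:
    """Stands in for the database boundary."""

    def __init__(self):
        self.saved = []

    def save(self, order):
        self.saved.append(order)


class PriceCalculator:
    """Real collaborator; not mocked."""

    def total(self, items, catalog):
        return sum(catalog[item] for item in items)


class OrderService:
    """Module entry point; only the repository is faked in tests."""

    def __init__(self, repository):
        self.repository = repository
        self.calculator = PriceCalculator()

    def place_order(self, items, catalog):
        total = self.calculator.total(items, catalog)
        order = {"items": items, "total": total}
        self.repository.save(order)
        return order


def test_place_order_prices_and_persists():
    repo = InMemoryOrderRepository()
    service = OrderService(repo)

    order = service.place_order(["apple"], {"apple": 3})

    assert order["total"] == 3
    assert repo.saved == [order]
```

Note that the pricing logic is exercised for real; only the boundary is replaced, so refactoring the internals doesn't break the test.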

When you do this, you avoid the test-induced damage of London style TDD while maintaining the benefit that comes from understanding the API first. You also mitigate Detroit style TDD's risk of over-design while maintaining an evolvable system with holistic coverage.

Quick Aside: What Do You Mean by API?

I want to be clear: I'm not advocating for testing at the HTTP level, or whatever your public-facing protocol is. I'm advocating for testing at the entry point to your code module, and this can differ from project to project. You may have a core library that is used by a web app and a messaging app. In this case, your modules would be the first classes touched by these apps within the library. This also means you are not always testing the "entire application"; you may just be testing a subsection of your application that has been broken down by feature. That's fine.

Why Your Objections Don't Matter

I can anticipate the protests that may come from reading all this. I heard many of them when I was consulting. It shocks me sometimes, since what I'm writing here is common to our industry, but I can see why there's still confusion: the entire technique of TDD is less than 20 years old! After all, it took over 30 years and a man who was committed as insane just to get doctors to wash their hands.

Allow me to refute a few of the more common complaints about this issue.

"My Behavior Has Too Many Dependencies to Test at Once"

Sometimes you have to deal with business logic that depends on a lot of moving parts. You may build out the dependency graph for the system under test and realize it goes seven levels deep and is almost as wide. All this setup bloats your test class, and you think, "Why don't I test only part of these classes?" Yeah, sure, just test part of them. And just take that dirt in your kitchen and sweep it under the rug.

The vast majority of the time, this is a sign of bad architecture or design. The system itself is not healthily aligned to its use cases. We don't want to hide that, we want to expose it, which is exactly what a healthy unit test can do. If your room is dirty, do you just throw all the clutter into a closet and shut the door? No, you take the time to clean it up, bit by bit, even though it's painful. You should do the same here. Creating worse tests won't fix the unhealthiness of your system.

"But My Logic Is Too Complicated"

Even in a well-architected system, some logic is necessarily complex. It may handle large amounts of data or run a complex algorithm that brings huge business value to your organization. When the logic is complex, testing the use case from the modular API on down can be much trickier than just testing a subset of it.

This is a fair objection. And in fact, feel free to write some tests to help you better understand the complex behavior. Then, once you understand, delete them. You don't need them anymore. They are not going to protect your system from bugs. Build a set of modular unit tests to ensure you cover the behavior.

Saying your logic is too complex to test at the API is a cop-out. Let's say you are trying to find the right light levels at various places in your house. So you take some bulbs, wire them up, and create a circuit board to turn them off and on. You go to different rooms in your house and wire the lights differently until the ambiance in each room is just right. Then you find that carrying these wires around and switching them gets cumbersome. You could just test one light at a time to see whether it goes on or off.

But here's the thing: this will never let you see the effects of the full combination of lights. For that exact reason, we don't do this—instead we shove these wires into our walls, hiding all of the messy details, and expose some light switches so we still get to test the light level in each room as a whole, turning on or off individual switches.

It's the same with software. If you are carrying around the setup and tear-down wiring of your API entry points in every test, things are going to get complex and messy. However, as with normal code, there are patterns to make this easier in tests. The foremost one is to hide the wires in the walls of fixture objects. These objects expose "light switches," or methods, that your tests can turn on or off.
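As a sketch of what that can look like (all names invented), a fixture object owns the messy wiring and exposes "light switch" methods for the tests to flip:

```python
# Fixture-object sketch: hide the setup wiring in one place and
# expose small "light switch" methods. Names are invented.

class CheckoutFixture:
    """Owns the full wiring of the module under test."""

    def __init__(self):
        self.catalog = {}
        self.discount = 0

    # --- light switches the tests can flip ---
    def with_item(self, name, price):
        self.catalog[name] = price
        return self

    def with_discount(self, percent):
        self.discount = percent
        return self

    # --- the behavior, reached through the module's API ---
    def checkout(self, items):
        total = sum(self.catalog[item] for item in items)
        return total * (100 - self.discount) // 100


def test_discount_applies_to_whole_basket():
    fixture = CheckoutFixture().with_item("apple", 50).with_discount(10)
    assert fixture.checkout(["apple", "apple"]) == 90
```

The test reads as a sentence of switches; the wiring stays in the walls.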

With these patterns, you can make even complex logic testable at the API or module level.

"Hey, That's Not a Unit!"

I now come to the last complaint—the one that, to me, is the most inane, but also the most common: "These aren't unit tests!" People get too embroiled in the definition of "unit test" and think that unit means class.

I am here to burst that bubble: the "unit" was always meant to be the test, testing behavior in isolation. Good tests are independent of one another, setting up and tearing down their own contexts. The "unit" was never meant to be the system under test. That's all I have to say about this.
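Well, almost all. To make it concrete, here is a minimal sketch using Python's built-in `unittest`: each test is its own unit, building a fresh context in `setUp` so the tests stay independent of one another.

```python
# "The unit is the test" sketch: each test gets a fresh context,
# so the tests can run in any order without leaking state.
import unittest


class CounterTest(unittest.TestCase):
    def setUp(self):
        # Fresh context per test; nothing carries over between tests.
        self.counter = {"value": 0}

    def test_starts_at_zero(self):
        self.assertEqual(self.counter["value"], 0)

    def test_increment(self):
        self.counter["value"] += 1
        self.assertEqual(self.counter["value"], 1)
```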

All Is One, One Is All

Saying London TDD is better or Detroit TDD is better is like saying Kung Fu is better than Judo. It doesn't make sense; they're just different.

And beyond that, it doesn't matter which is "better." As useful as it may be to understand these schools of thought, picking one over the other isn't the point. In martial arts, the goals are to defend yourself and attain a clear mind. If you pick only one school and stick with it, you are not test-driving as it was intended. Test holistically, from the observable entry point all the way to the "bottom" of the module, blending London and Detroit into something more. Testing internals won't get you there; it will only detract from your goal: healthy test-driving. This is astoundingly uncommon advice, but: test only at your module boundaries. Get as close to your API as you can while keeping your tests fast. Those tests still count as units.

This post was written by Mark Henke. Mark has spent over 10 years architecting systems that talk to other systems, doing DevOps before it was cool, and matching software to its business function. Every developer is a leader of something on their team, and he wants to help them see that.
