
6 Misconceptions About TDD – Part 6. You Can Have Both Reliability and Low Coupling

The final part of our guide to the TDD cycle – this time, we explore the problem of reliability and low coupling.


This is the last part of our article series about the TDD cycle.

Introduction

In the literature about the TDD cycle, you can find a differentiation between classical TDD and mockist TDD. In this post, you’ll learn how the implications of these two variants differ.

When writing an extensive test suite (for example in a TDD cycle, but not only there), you should be aware of these two traits:

  • reliability of tests – the extent to which they reflect the real implementation,
  • coupling between tests and implementation – the extent to which the tests depend on code that isn’t under test.

Of course, we want our tests to be as reliable as possible and as loosely coupled as possible. As it turns out, you can’t have both. These two traits are contradictory. In other words, the more reliable your tests are, the more coupled they are.

Example

Here’s a contrived example. It’s written in Ruby, but should be understandable by non-Ruby devs too:
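A minimal sketch of what such an example could look like – an Event that updates a Warehouse (the class names come from the “Unit tests” section below; the method names and details are hypothetical):

  class Warehouse
    def initialize
      @stock = Hash.new(0)
    end

    # Registers the given quantity of a product.
    def add(product, quantity)
      @stock[product] += quantity
    end

    # Removes stock, failing loudly when there isn't enough of it.
    def remove(product, quantity)
      raise ArgumentError, "not enough stock" if @stock[product] < quantity
      @stock[product] -= quantity
    end

    def quantity_of(product)
      @stock[product]
    end
  end

  class Event
    def initialize(product, quantity)
      @product = product
      @quantity = quantity
    end

    # Applies the event to the given warehouse.
    def apply_to(warehouse)
      warehouse.add(@product, @quantity)
    end
  end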

The reliability camp would test it like this:
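Again as a hedged sketch (RSpec syntax, matching the hypothetical classes above) – the point is that the real Warehouse is used as the collaborator:

  RSpec.describe Event do
    it "adds stock to the warehouse" do
      warehouse = Warehouse.new              # real collaborator, no mock
      event = Event.new("chair", 5)

      event.apply_to(warehouse)

      # We assert on the observable outcome of the whole collaboration.
      expect(warehouse.quantity_of("chair")).to eq(5)
    end
  end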

And the non-coupling camp would test it like this:
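Here, still as a sketch under the same assumptions, the Warehouse is replaced with a verifying double and the test checks only the messages sent to it:

  RSpec.describe Event do
    it "tells the warehouse to add stock" do
      warehouse = instance_double(Warehouse)
      allow(warehouse).to receive(:add)
      event = Event.new("chair", 5)

      event.apply_to(warehouse)

      # We assert on the interaction, not on the warehouse's state.
      expect(warehouse).to have_received(:add).with("chair", 5)
    end
  end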

It’s not like one approach is universally better than the other. It all depends on many factors. However, our team is divided on which camp we join by default.

By the way, this dichotomy is pretty much in line with the classical TDD vs. mockist TDD distinction. In those terms, the reliability camp would be classical TDD, while the non-coupling camp would be mockist TDD. If you’re not familiar with those terms, you can go ahead and google them – I recommend this article.

Reliability

In this camp, the goal is to use as much real implementation as possible. Even in unit tests, when the SUT (system under test) is collaborating with other units, we’d rather use real ones than mock them.

The reason is exactly this – reliability. The code we’re about to test is supposed to work with real collaborators, not fake ones. Using real ones means you see precisely how your code would behave in production. Using mocks has its advantages, but it always involves pretending and guessing.

You have to pre-program mocks – what if the real implementation works differently than you think, or changes later?

In other words, it’s best to avoid speculation in our tests.
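A tiny illustration of that speculation, reusing the hypothetical classes from the sketch above – a pre-programmed answer can silently diverge from what the real collaborator does:

  it "passes even though the real behaviour differs" do
    warehouse = instance_double(Warehouse)
    # The stub promises that removing stock always succeeds...
    allow(warehouse).to receive(:remove).and_return(true)

    # ...while the real Warehouse#remove raises ArgumentError when the stock
    # is insufficient, so this expectation holds in the test but not in production.
    expect(warehouse.remove("chair", 1_000)).to eq(true)
  end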

Third-party collaborations

You may wonder how we tackle costly third-party collaborations like the database or the network. We don’t dogmatically avoid them at all costs. Sometimes we use real collaborators here too:

  • On a small scale, tests against a real database, even if 1000x slower, might still take a negligible amount of time.
  • Real network requests tell you how the code behaves against the real API, not the fake one you guesstimated. If tests fail because there’s no network, that’s exactly how your app would behave!

However, there are some reasons for NOT using the real network/db code.

On a large scale, using real db or network adapters is time-consuming and unpredictable. We then transition from all-real tests to some-real tests – we use fixtures or factories and fake the API. At the same time, we reduce the number of real requests, but we don’t remove them entirely. A subset of tests which use real adapters serves us as acceptance (end-to-end) tests.
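For example, a hand-rolled fake adapter can stand in for the network while the rest of the code stays real. This is only a sketch – FakePaymentApi and Order are hypothetical names used for illustration:

  class FakePaymentApi
    attr_reader :charges  # lets tests inspect what would have been charged

    def initialize
      @charges = []
    end

    # Pretends to charge the card; no real HTTP request is made.
    def charge(amount)
      @charges << amount
      { status: "ok" }
    end
  end

  class Order
    def initialize(total:, payment_api:)
      @total = total
      @payment_api = payment_api
      @paid = false
    end

    def place
      @paid = @payment_api.charge(@total)[:status] == "ok"
    end

    def paid?
      @paid
    end
  end

  RSpec.describe "placing an order" do
    it "marks the order as paid" do
      order = Order.new(total: 100, payment_api: FakePaymentApi.new)

      order.place

      expect(order).to be_paid
    end
  end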

Disclaimer: in real projects, we usually have a very limited number of real end-to-end tests.

Unit tests

This approach is opinionated on how to picture the “unit” in “unit tests”. Or rather – how NOT to picture it.

From our point of view, “a unit” usually doesn’t mean “a class,” but rather “a group of classes”. In our example, Event and Warehouse together make up such a group. What we care about is a reliable outcome of the collaboration between multiple classes (possibly from all the layers).

For the same reason, we often aim for higher-level tests. The boundary between unit tests and integration tests (or even end-to-end tests) blurs.

It’s not important to test units and integration separately if we can achieve the same result in one set of tests. Because the most reliable tests go through all the layers, we start from end-to-end tests. We call this approach “outside-in”. Even if tests are not end-to-end, they still go through multiple layers. Typically, we test against the public interface of the outermost layer of a unit.
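As a sketch of such a test, it exercises only the public interface of the outermost object and lets the inner collaborators run for real. ReceiveDelivery is a hypothetical service object wrapping the Event and Warehouse from the sketch above:

  class ReceiveDelivery
    def initialize(warehouse)
      @warehouse = warehouse
    end

    # The outermost public interface; internally it uses Event and Warehouse.
    def call(product:, quantity:)
      Event.new(product, quantity).apply_to(@warehouse)
    end
  end

  RSpec.describe ReceiveDelivery do
    it "updates the stock level" do
      warehouse = Warehouse.new

      # No mocks – Event and Warehouse do their real work underneath.
      ReceiveDelivery.new(warehouse).call(product: "chair", quantity: 5)

      expect(warehouse.quantity_of("chair")).to eq(5)
    end
  end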

When to join this camp: if you want your codebase prepared for constant internal refactoring.

Advantages:

  • Less code. We don’t automatically write mocks for every collaborator. Also, sometimes we don’t test each layer separately, but rather all together or in groups. Sometimes there’s no need for dependency injection.
  • Simpler code. Because some interfaces exist only to allow for fake collaborators, we might not need as many interfaces. Fewer interfaces mean less inversion of flow, which in turn makes the code easier to reason about.
  • No need to keep mocks in sync (because there are few mocks).
  • Freedom to refactor internally. Because the program flow is tested in an integration manner, you can change the internals without changing the test code – the tests still serve as a safety net. Tests shouldn’t be fragile.

On the flip side, aiming for real implementations and higher-level tests is exactly why this approach goes together with coupling. The other camp tries to answer this problem.

Non-coupling

The goal is to gain the ability to change or remove one part of the code (together with its tests) without having to change the tests in other parts.

This approach still doesn’t define what makes a good “unit,” but there’s a clear separation between units (smaller pieces) and integration (their collaboration). You can picture the integration between units as a separate unit. We check separately whether a unit itself works and whether the integration works. In other words, we know which part of the app is broken based on which tests failed (unit or integration).

This means that real collaborators can’t be used directly in tests. Instead, their mock (fake) counterparts live in the test code. These mocks provide simplified behavior. In the case of costly third parties, they typically contain “in memory” implementations.
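A sketch of such an “in memory” counterpart (InMemoryEventStore is a hypothetical name) – it exposes the same interface a real, database-backed adapter would, but keeps everything in a plain array:

  class InMemoryEventStore
    def initialize
      @events = []
    end

    # Same method names as the real store would expose.
    def append(event)
      @events << event
    end

    def all
      @events.dup
    end
  end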

You are free to change unit A, even its interface, and there’s no need to change other tests (say, unit B’s). Of course, the mocks in test B are now out of sync, but this doesn’t change the fact that unit B works well as long as the contract described by the mocks is fulfilled. To understand this approach, some people need to change their perspective:

  • We don’t think this way: B uses A
  • Instead, we think like this: B relies on an interface that happens to be fulfilled by A. It can happen that A doesn’t fulfill this interface, but that doesn’t mean B doesn’t work – it just can’t use A in this case.

You think of your dependencies as distant and stable parts of the app. You want your tests to prove that the SUT works under given conditions – the test doubles express those conditions. It’s the SUT that defines them, not the other way around. You believe that freezing the public API of a unit is enough of a proof.
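In Ruby, RSpec’s verifying doubles are one way to express that contract – the double stands for “anything that fulfils Warehouse’s interface,” and the test fails fast if the stubbed method stops existing on the real class. The classes are still the hypothetical ones from the earlier sketch:

  RSpec.describe Event do
    it "works with anything that fulfils the Warehouse interface" do
      # instance_double breaks the test if Warehouse#add disappears or changes
      # arity, which keeps B's assumptions about A's interface honest.
      warehouse = instance_double(Warehouse, add: nil)

      Event.new("chair", 5).apply_to(warehouse)

      expect(warehouse).to have_received(:add).with("chair", 5)
    end
  end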

This approach usually goes together with a modular architecture. If we want our units to behave like separate modules, we probably should do the same with our unit tests.

When to join this camp: if you want your tests to support independent units that can be treated as separate modules and moved around like microservices.

Advantages:

  • Freedom to refactor externally (at the scale of modules). You can change, remove, or move one unit (together with its tests) and there will be no need to change other tests.
  • Hermetic tests. While working on one unit, you don’t need to worry about other units.
  • You don’t need the implementation of your unit’s dependencies in order to work on it.

TDD cycle – Conclusion

There’s no simple answer as to which path you should follow. Please remember that you can’t have both high reliability and low coupling. Oops, sorry, you actually can – if you write two test suites, one from each of the worlds. But that just isn’t practical. You should simply ask yourself which is more desirable in your project and implement it.

 


The agenda of the article series about the TDD cycle “6 Misconceptions about TDD” is the following:

  1. TDD brings little business value and isn’t worth it
  2. We all understand the key laws of TDD in the same way
  3. TDD cycle can be neglected
  4. There is one right granularity of steps in TDD
  5. Mocks, mocks everywhere!
  6. Tests loosely coupled with code are reliable