Testing down low

In the last post, I discussed high-level testing of a simple distributed service. Now let’s look at the bottom of the stack.

What are good candidates for testing at this level? To name a few:

  • Basic functional/algorithmic correctness
  • Unit/module cohesion
  • Simple performance and scale benchmarks

In the example from the initial post on this topic, I introduced a load balancer component with an algorithm designed to promote fairness. The best way to verify, tune, and track regressions in this algorithm as it evolves is through low-level tests. A good set of tests at this level could, for instance, model various load situations by passing raw data values to the algorithm and validating that the outputs are as expected: exactly where the algorithm is deterministic, or through statistical means where randomness is involved. The result is a highly precise statement of quality for the component under test. Of course, this is not an end-to-end test of whether the load balancer really works on a production server, but that is not the point. Remember, validating that the whole system works is the job of the higher-level tests.
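To make this concrete, here is a minimal sketch in Python of what such a test might look like. Everything in it is hypothetical: pick_backend stands in for the balancer’s selection algorithm, and the fixed seed, trial count, and 1% tolerance are invented for illustration.

    import collections
    import random

    def pick_backend(backends, weights, rng):
        # Hypothetical stand-in for the balancer's selection algorithm:
        # a weighted random choice over the available backends.
        return rng.choices(backends, weights=weights, k=1)[0]

    def test_fair_distribution():
        rng = random.Random(42)   # fixed seed keeps the test deterministic
        backends = ["a", "b", "c"]
        weights = [1, 1, 2]       # backend "c" should get ~half the traffic
        trials = 100_000

        counts = collections.Counter(
            pick_backend(backends, weights, rng) for _ in range(trials)
        )

        # Validate statistically rather than exactly: each backend's
        # observed share must land within 1% of its expected share.
        total_weight = sum(weights)
        for backend, weight in zip(backends, weights):
            expected = weight / total_weight
            observed = counts[backend] / trials
            assert abs(observed - expected) < 0.01, (backend, observed, expected)

The shape of the assertion is the point: the test does not demand an exact sequence of picks, only that the observed distribution lands within a tolerance of the expected one.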

What are the problems with growing the scope of your low-level tests? After all, this gives us more coverage and hence more confidence, right? No, not right. The pain of scope creep cuts both ways. Just as high-level tests become less effective when they reach too far down, low-level tests suffer similar problems as they slither upward.

The best low-level tests are fast (enough), have few dependencies, and produce clear answers to a few core questions: Is this algorithm correct? Did this refactor break anything? Are there any regressions in this new build?
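To keep the regression question cheap to answer, a coarse benchmark can live right alongside the functional tests. Again, this is a hypothetical sketch: balance stands in for the real algorithm, and the 50 ms budget is an invented number that would need tuning for real hardware.

    import time

    def balance(requests):
        # Hypothetical stand-in for the algorithm under test.
        return sorted(requests)

    def test_balance_stays_fast():
        requests = list(range(100_000))

        start = time.perf_counter()
        balance(requests)
        elapsed = time.perf_counter() - start

        # A deliberately generous budget: loose enough to pass on a slow
        # machine, tight enough to catch an accidental quadratic blowup.
        assert elapsed < 0.05, f"balance() took {elapsed:.3f}s"

This is crude compared to a dedicated benchmark harness, but it is fast enough to run on every build.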

If the tests are not fast to run, they will not be run often enough to be useful. If you can’t get quick feedback from these tests, the burden shifts to the higher-level tests, which become more prone to hitting blocking bugs that should have already been filtered out. If the tests have too many dependencies, they will be harder to maintain and will discourage well-meaning engineers from making useful coverage updates as more product code is added.

To quell the urge to overwork your low-level tests, it helps to recognize that these tests do not exist in a vacuum. They are simply the first step in a larger feedback loop of product quality. When defects are discovered higher up in the stack or later in the cycle, it is appropriate to ask, “Could this have been found more quickly and closer to the source?” Proper attention and follow-up here will ensure that you are building a well-functioning suite of tests at every level.
