The (oft-misunderstood) dependency inversion principle states that abstractions should not depend on details, and instead details should depend on abstractions. As an example, let us first consider a violation of this guideline:
```csharp
void DoSomethingInteresting()
{
    ConfigFile cfg = ConfigFile.Load(@"C:\file.xml");
    if (cfg.ReticulationEnabled)
    {
        this.ReticulateSplines(cfg.SpeedFactor);
    }
}
```
The so-called “abstraction” around configuration, `ConfigFile`, is very much tied to the detail of the backing store, an XML file in this case. To fix this, we need to really go all in with the abstraction and remove such concretions from our purview. It seems that we actually desire a generalized configuration store; this could be a file if necessary, but the caller should not care. From this store, we would provide a way to load the configuration settings. A dependency inverted attempt might look more like this:
```csharp
// In the entry point, configure the real file-backed config store.
IConfigStore store = new FileConfigStore(@"C:\file.xml");

// The preconfigured store must now be passed in.
void DoSomethingInteresting(IConfigStore store)
{
    // Load() hides the details of how config is actually retrieved.
    ConfigSettings cfg = store.Load();
    if (cfg.ReticulationEnabled)
    {
        this.ReticulateSplines(cfg.SpeedFactor);
    }
}
```
The difference is subtle but powerful. The portion of the code responsible for acting on the configuration can now be completely insulated from the low-level configuration mechanism. Assuming the rest of the components and modules are as disciplined as the code snippet above, a complete rework of the configuration system (e.g. using a cloud database) would be a relatively localized, if not trivial, change. We also gain a testability advantage if we take the opportunity to create a simple test double such as `MemoryConfigStore`.
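To make the testability point concrete, here is a minimal sketch of what such a `MemoryConfigStore` might look like, assuming the `IConfigStore` and `ConfigSettings` shapes implied by the snippet above:

```csharp
// A minimal in-memory test double for IConfigStore (a sketch; assumes
// ConfigSettings is a simple data object as the snippet above implies).
public class MemoryConfigStore : IConfigStore
{
    private readonly ConfigSettings settings;

    public MemoryConfigStore(ConfigSettings settings)
    {
        this.settings = settings;
    }

    // "Loads" the preconfigured settings; no file system involved.
    public ConfigSettings Load() => this.settings;
}
```

A test can now exercise `DoSomethingInteresting` with whatever settings it needs and never touch the disk.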
So there we have it — dependency inversion and proper abstractions to the rescue. Except… there are always situations where abstraction can only go so far. There are those pesky modules where the core responsibility is literally to interact with the file system — think of a backup utility. It may please the dependents to hide such test-unfriendly logic behind an opaque interface, but it does little to ease the module maintainer’s daily grind. Is there any hope for a better way?
Enter the simulator — the more obscure third leg of the Ports-Adapters-Simulators design triad. A simulator is a “fake” implementation of a problematic dependency, contractually verified to give the same results as the real thing. Arlo Belshee goes into some detail on this topic, showing a simulator as an alternative to mock-centric designs.
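What does “contractually verified” look like in practice? Typically it is an abstract test fixture that every implementation must pass. Here is a hedged sketch using the configuration example from above and xUnit; `ConfigStoreContract`, `CreateStore`, and `WriteTempConfigFile` are hypothetical names, and the settable properties on `ConfigSettings` are assumed:

```csharp
using System;
using Xunit;

// Every IConfigStore implementation must pass the same contract tests.
public abstract class ConfigStoreContract
{
    // Each subclass supplies a store preloaded with the given settings.
    protected abstract IConfigStore CreateStore(ConfigSettings settings);

    [Fact]
    public void Load_ReturnsTheStoredSettings()
    {
        var expected = new ConfigSettings { ReticulationEnabled = true, SpeedFactor = 3 };
        IConfigStore store = this.CreateStore(expected);

        ConfigSettings actual = store.Load();

        Assert.Equal(expected.ReticulationEnabled, actual.ReticulationEnabled);
        Assert.Equal(expected.SpeedFactor, actual.SpeedFactor);
    }
}

// The simulator runs the contract tests...
public class MemoryConfigStoreTests : ConfigStoreContract
{
    protected override IConfigStore CreateStore(ConfigSettings settings) =>
        new MemoryConfigStore(settings);
}

// ...and so does the real adapter, demonstrating the two behave the same.
public class FileConfigStoreTests : ConfigStoreContract
{
    protected override IConfigStore CreateStore(ConfigSettings settings) =>
        new FileConfigStore(WriteTempConfigFile(settings));

    // Hypothetical helper: persist the settings to a temp XML file.
    // (Elided here; the real adapter's file format is not shown in the post.)
    private static string WriteTempConfigFile(ConfigSettings settings) =>
        throw new NotImplementedException();
}
```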
Time and time again in the projects I have worked on, the file system ultimately emerges as an unavoidable dependency in a critical component. In the past I have followed the aforementioned strategies, pushing this logic as far to the edges as I could. Yet I always had two annoying issues: (a) the “edge logic” received little or no explicit testing compared to the otherwise faithfully TDD’d code in the center, and (b) the abstractions needed to hide the “edge logic” started to grow independently in distinct areas, resulting in several similar-but-not-the-same solutions (`Func<Stream>` to wrap a `File.Open` call, `IXyzStore` load/save interfaces like the above, and so on).
Recently, when faced with similar challenges, I decided a simulator was worth a try. Having little experience here, I had a feeling that the prospect would be expensive but eventually provide a small return. It turns out I was wrong on both counts — the simulator was actually very straightforward and it had a large positive impact on the design and quality. Why was this so? As a consequence of building a simulator, you end up with a “contract” for how to interact with an external dependency. This gives the benefit of centralizing a lot of scattered logic into one fully tested module, but it can also help you constrain the nature of the interactions. Unlike, say, the system-provided file APIs, which have to support hundreds of diverse use cases, your file system contract and simulator may (and in fact, should) provide exactly what is required by the application and no more. This meant we had just one way to do common tricky operations (e.g. open a file for async reading) and zero ways to do things that were disallowed or unnecessary. One notable example in this project: the only way to get a reference to a file was by enumerating it from the parent directory, which meant in practice there was little or no chance of a `FileNotFoundException` — eliminating a whole class of issues.
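To illustrate that kind of constraint, a deliberately narrow file system port might look something like the sketch below. The member shapes here are illustrative assumptions, not the actual sample code (and plain strings are used for brevity; the sample goes further, as discussed later). The point is that there is no “open by path” operation at all:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

// A deliberately narrow file system contract (illustrative sketch).
// There is no OpenFile(path) method anywhere: a file reference can only be
// obtained through its parent directory, so a dangling path never enters
// the application's vocabulary.
public interface IFileSystem
{
    IDirectory GetDirectory(string path);
}

public interface IDirectory
{
    // Existing files can only be reached by enumeration...
    IEnumerable<IFile> EnumerateFiles();

    // ...and new files are handed back at the moment of creation.
    IFile CreateFile(string name);
}

public interface IFile
{
    string Name { get; }

    // Exactly one sanctioned way to do the tricky operation the
    // application needs (async reading, per the example above).
    Task<Stream> OpenReadAsync();
}
```

With a shape like this, the simulator can be little more than an in-memory map of directories to files, while the contract tests pin down the behavior both implementations must share.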
To give a sense for the mechanics of building a simulator and how simple it can be, I have created a FileSystemSample project on GitHub. Take note of the following design choices:
- No primitive obsession. Basically every file system API in the world uses raw strings for paths. Given the simulator is in your domain, however, you can choose the terms of engagement and rewrite these rules. In this case, I chose a few custom value objects instead to represent bare file names and fully qualified paths (`PathPart` and `FullPath`, respectively); see the sketch after this list.
- Wrap as much as you need but no more. The sample simulator acts as a replacement for `DirectoryInfo`, `FileInfo`, and the like. However, I chose not to provide a replacement for `Stream` since it affords several conveniences, such as being able to use existing classes like `StreamReader` and `StreamWriter`. Depending on the context, this may or may not be acceptable.
- Use the abstract test pattern. Contract tests and abstract tests go hand in hand. In this example, `FileSystemContract` defines all the tests that `FakeFileSystem` (the simulator) and `RealFileSystem` must pass to be functional; there are only minor differences between the overridden test classes (most notably, the runtime type of the `IFileSystem` reference).
- This is just a start. A fully featured file system contract/simulator may include events for tracking file access, interface segregation to distinguish between readers and writers, and so forth. Again, it comes down to the application requirements and the likely impact on testability and quality.
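As promised, here is a sketch of the value-object idea from the first bullet. The validation and comparison rules below are assumptions for illustration; the actual `PathPart` in the sample may differ:

```csharp
using System;
using System.IO;

// Sketch of a file name value object in the spirit of PathPart (the
// validation rules here are illustrative assumptions). Because invalid
// names are rejected at construction, every PathPart in the system is
// well-formed by definition.
public readonly struct PathPart : IEquatable<PathPart>
{
    private readonly string value;

    public PathPart(string value)
    {
        if (string.IsNullOrEmpty(value) ||
            value.IndexOfAny(Path.GetInvalidFileNameChars()) >= 0)
        {
            throw new ArgumentException("Invalid file name.", nameof(value));
        }

        this.value = value;
    }

    public bool Equals(PathPart other) =>
        string.Equals(this.value, other.value, StringComparison.OrdinalIgnoreCase);

    public override bool Equals(object obj) => obj is PathPart part && this.Equals(part);

    public override int GetHashCode() =>
        StringComparer.OrdinalIgnoreCase.GetHashCode(this.value ?? string.Empty);

    public override string ToString() => this.value;
}
```

A `FullPath` might follow the same approach, composing a root with a sequence of `PathPart` segments.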
If you too have suffered with painful, poorly-tested external coordination code, consider beefing up your test double game with a strategically designed simulator. It could certainly cost less and pay more than you might think.