How many binaries do you need for a .NET project? How many is too many? Does it really matter?
I’ll admit I have a strong aversion to the phenomenon I’ve dubbed “binary explosion” — tens (or hundreds!) of tiny assemblies. In each of the dozen or so cases I’ve seen, there was no good reason to end up with that many separate libraries; it was always a consequence of unchecked organic growth. How can we defuse this binary bomb? Let’s start with some guidelines about when a binary boundary makes sense.
If a component has a separate deployment/versioning lifecycle, then it belongs in a separate assembly. This case is pretty self-explanatory and will never run afoul of my usual binary parsimony. If you can (or must) update A without updating B, then you need to split them into discrete units (e.g. “A.dll” and “B.dll”).
Use separate assemblies for public and private dependencies. If you are building a server component which has an associated client library, obviously you don’t want to ship the back-end logic to the front-end consumers. It makes perfect sense to have “Contoso.Server.dll” and “Contoso.Client.dll”. It does not, however, make sense to have 10 client DLLs that all must be present for an app to work properly (“Contoso.Client.Common.dll”, “Contoso.Client.Data.dll”, “Contoso.Client.Net.dll”, …). There are very few legitimate use cases that need this level of binary separation. It just causes annoyance for the poor user who has to remember to add a half-dozen DLL references or else “Hello World” fails to run. Look to the .NET Framework itself for inspiration and be amazed at the number of useful apps you can write by bringing in just “System.dll” and “System.Core.dll”.
Entry points generally need a distinct binary. This would apply to things like an EXE for a console app, an assembly containing an Azure role entry point, and so forth. However, even here there are some options. Maybe you don’t need a full-fledged MyAppTool.exe if you can build a PowerShell cmdlet. This might in fact be more attractive than a vanilla EXE to a savvy user who needs a first-class scripting experience.
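To make the cmdlet option concrete, here is a minimal sketch. The command name “Get-ContosoStatus” and its parameter are hypothetical; the only real requirement is a reference to the System.Management.Automation assembly.

```csharp
using System.Management.Automation;

// A minimal cmdlet sketch: the attribute registers the verb-noun pair
// ("Get-ContosoStatus"), and PowerShell handles parsing, help, and pipelines.
[Cmdlet(VerbsCommon.Get, "ContosoStatus")]
public class GetContosoStatusCommand : Cmdlet
{
    // An optional positional parameter; hypothetical, for illustration only.
    [Parameter(Position = 0)]
    public string Name { get; set; }

    protected override void ProcessRecord()
    {
        // Write objects to the pipeline rather than printing text, so callers
        // get first-class scripting composition (filtering, formatting) for free.
        WriteObject($"Status for {Name ?? "all"}: OK");
    }
}
```

Compared with a vanilla EXE, there is no argument-parsing boilerplate to write, and the output composes with the rest of the user’s scripts.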
If you need strong separation between certain dependencies, a binary boundary could be valid. GUI interaction logic is a common case, but really this could apply to any sufficiently complex external dependency. For example, if parts of your system depend on Azure Storage, you may want these service bindings to live in a separate library from the “core” DLL (which perhaps contains only framework dependencies unencumbered by externals). Doing this right means defining good abstractions in your core library (e.g. “interface IBlobStore”) which the externally focused library can fill out with the concrete details (e.g. “class AzureBlobStore : IBlobStore”).
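A sketch of that split, assuming the Azure.Storage.Blobs client library (its `BlobContainerClient` type and the container/blob names below are illustrative):

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs; // external dependency: referenced ONLY by the binding library

// Lives in the core library (framework dependencies only):
public interface IBlobStore
{
    Task UploadAsync(string name, Stream content);
    Task<Stream> DownloadAsync(string name);
}

// Lives in the externally focused library, filling out the concrete details.
public sealed class AzureBlobStore : IBlobStore
{
    private readonly BlobContainerClient container;

    public AzureBlobStore(BlobContainerClient container)
    {
        this.container = container;
    }

    public Task UploadAsync(string name, Stream content) =>
        this.container.GetBlobClient(name).UploadAsync(content, overwrite: true);

    public Task<Stream> DownloadAsync(string name) =>
        this.container.GetBlobClient(name).OpenReadAsync();
}
```

The core library’s consumers program against `IBlobStore` and never load the Azure assemblies; only deployments that actually use Azure Storage need the binding DLL.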
Finally, test code generally goes in a separate library from product code. Almost everyone does this already but I include this for completeness. I do want to stress that you must not go crazy here. It is hardly useful or practical to have separate test libraries for each feature under test (sadly enough, I have seen this happen). Namespaces are great for logical separation and test grouping, as needed.
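The namespace-based grouping above might look like this sketch — one test assembly, with namespaces (illustrative names) standing in for the per-feature DLLs you would otherwise accumulate:

```csharp
// A single test assembly (e.g. "Contoso.Tests.dll") covering many features.
// Namespaces provide the logical separation; no extra binaries required.
namespace Contoso.Tests.Storage
{
    public class BlobStoreTests
    {
        // [Fact] public void UploadRoundTripsContent() { /* ... */ }
    }
}

namespace Contoso.Tests.Net
{
    public class HttpHandlerTests
    {
        // [Fact] public void RetriesOnTimeout() { /* ... */ }
    }
}
```

Most test runners can filter by namespace or class name anyway, so the per-feature grouping costs nothing at run time.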
One last note about performance: there is a fixed cost associated with every assembly, especially in compilation and runtime loading. It is much faster to compile one relatively large assembly than 100 medium-sized ones. One potential drawback of the monolithic approach is recompilation cost — finer-grained assemblies mean fewer things to rebuild when a small, isolated change occurs. In my experience, though, I haven’t really seen this “benefit” shine so brightly in contrast to the many clear drawbacks of binary explosion.