Maintainable software: why should I care, and once I do, how do I recognize it when I see it?
No matter what, the code you just shipped becomes obsolete the moment you roll it out to production. If your product is successful, improvements will be added and defects will be fixed throughout its 5–10 year lifetime (research shows that maintenance consumes 40 to 80 percent of the total cost). Therefore we can say: code is written primarily to be read and changed by others.
Most development shops spend the least amount of energy on preparing their software for maintenance, even though this is probably the most critical (and expensive) phase. As a consequence, roughly 30 percent of maintenance time is spent on “understanding the existing product”. Missing documentation and unreadable, insufficiently designed code will hit production, all under the flag of the agile movement, and make the lives of dozens of developers miserable for years to come.
Many researchers agree that about half of all software bugs are due to missing requirements clarification and documentation, and for most shops, rediscovering requirements while working on features is just business as usual. This observation is the main reason the agile movement invites you to embrace change (and, consequently, to design software that can adapt as new requirements are discovered).
Here are a few aspects I consider important for achieving maintainable software:
Domain model as the ubiquitous domain language
Quite a mouthful of an expression, huh?
Think about it for a moment: how many times have you needed to rework a feature you just shipped because it turned out to be built on top of a business process that product management and engineering understood differently? It might also have happened because the documentation was unclear… or entirely missing.
A ubiquitous domain language guards against such cases by defining the one true language that describes the business domain and its processes (the entities and the actions they are involved in) and is spoken by product managers, operations, developers and customers. Development revolves around this one true language, whose beauty lies in its simplicity and unambiguity. Simplicity is achieved through encapsulation, while unambiguity comes from dedicated types (aggregates and value objects). Domain-driven design provides good guidelines for building a framework-agnostic core library (using Entities, Value Objects, Aggregates, and Repositories).
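To make this concrete, here is a minimal TypeScript sketch of what such dedicated types can look like; the ordering domain, the Money and Order types and the OrderRepository interface are hypothetical examples, not taken from any particular codebase.

```typescript
// A value object: identified only by its value, immutable and self-validating.
class Money {
  constructor(readonly amount: number, readonly currency: string) {
    if (amount < 0) throw new Error("Money cannot be negative");
  }
  add(other: Money): Money {
    if (other.currency !== this.currency) throw new Error("Currency mismatch");
    return new Money(this.amount + other.amount, this.currency);
  }
}

// An aggregate root: an entity with an identity that guards its own invariants.
class Order {
  private readonly lines: Money[] = [];
  constructor(readonly id: string) {}

  addLine(price: Money): void {
    this.lines.push(price);
  }

  total(): Money {
    // The sketch starts summing from zero in a fixed currency for simplicity.
    return this.lines.reduce((sum, price) => sum.add(price), new Money(0, "EUR"));
  }
}

// A repository: the only door through which the rest of the system loads and stores Orders.
interface OrderRepository {
  findById(id: string): Promise<Order | undefined>;
  save(order: Order): Promise<void>;
}
```

Because the types carry the domain vocabulary themselves, a sentence like “an order totals its lines” reads the same in a planning meeting and in the code.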
Changeability
We have all heard about “spaghetti code” or the “big ball of mud” (a software system that lacks a perceivable architecture, undesirable but common in practice)… but what does it look and smell like? More importantly: how can I avoid ending up the proud owner of a big ball of mud?
A loosely coupled, component-based architecture leads to maintainable software that is easy to change (hence the prefix “soft” in the word software). Each component has a single responsibility, with well-defined input and output requirements, mirroring the real-world handoffs between teams. Components delegate their subtasks to other components via well-defined agreements. A component can be changed at any time without the delegator noticing, as long as the new component fulfills the previously agreed pact (some programming languages provide the concept of an interface to enforce this agreement). Faults stay contained within the boundaries of a component.
Sounds a lot like the definition of the (now overhyped) microservices? Hell yeah! These concepts have been around since the early days of computing; they were merely hijacked by the microservice movement and are by no means unique to that domain.
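To make the “agreed pact” tangible, here is a minimal TypeScript sketch of a component boundary expressed as an interface; the PaymentGateway contract and its implementations are invented purely for illustration.

```typescript
// The pact: the delegator only ever knows this contract, never a concrete class.
interface PaymentGateway {
  charge(customerId: string, amountCents: number): Promise<boolean>;
}

// One implementation of the pact...
class StripeGateway implements PaymentGateway {
  async charge(customerId: string, amountCents: number): Promise<boolean> {
    console.log(`charging ${customerId} ${amountCents} via external provider`);
    return true;
  }
}

// ...which can be swapped for another without the delegator noticing.
class InMemoryGateway implements PaymentGateway {
  readonly charges: Array<{ customerId: string; amountCents: number }> = [];
  async charge(customerId: string, amountCents: number): Promise<boolean> {
    this.charges.push({ customerId, amountCents });
    return true;
  }
}

// The delegating component depends only on the interface.
class CheckoutService {
  constructor(private readonly gateway: PaymentGateway) {}

  async checkout(customerId: string, amountCents: number): Promise<string> {
    const ok = await this.gateway.charge(customerId, amountCents);
    return ok ? "confirmed" : "payment-failed";
  }
}
```

Because CheckoutService only knows the interface, the concrete gateway can be replaced, or faked in tests, without touching the delegator.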
Maintainability is inversely proportional to the number of public classes, dependencies, microservices (you name it). So you want to keep it simple. I mean really simple. Key approaches to that are refactoring and metaprogramming. Not to mention a lifelong ban on code duplication.
Testability
Why would you care? We have all been testing our code in production for decades and we are quite happy with that approach, aren’t we?
Well, the cost of handling a defect in the development phase is 2–3 times lower than during acceptance testing and at least 10–15 times lower than in production. And that is only the first iteration of the software, which will be followed by 5–10 years of modifications and improvements.
These are compelling reasons to test components automatically and in isolation during the development phase (also known as unit and integration tests).
Not being able to test a component in isolation is a sign that it violates the single responsibility principle. If a component has clear inputs and outputs, its collaborators can easily be mocked and its behavior asserted as part of the automated test suite.
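As a rough illustration of testing in isolation, here is a TypeScript sketch that uses a hand-written fake instead of a mocking framework; the MailSender contract and WelcomeService are hypothetical names chosen for the example.

```typescript
import assert from "node:assert";

// The contract the component under test depends on.
interface MailSender {
  send(to: string, body: string): Promise<void>;
}

// Component under test: one responsibility, dependency injected via the contract.
class WelcomeService {
  constructor(private readonly mail: MailSender) {}

  async welcome(email: string): Promise<void> {
    await this.mail.send(email, "Welcome aboard!");
  }
}

// A hand-written fake standing in for the real mail gateway.
class FakeMailSender implements MailSender {
  readonly sent: Array<{ to: string; body: string }> = [];
  async send(to: string, body: string): Promise<void> {
    this.sent.push({ to, body });
  }
}

// The test: exercise the component in isolation and assert its observable behavior.
async function testWelcomeSendsOneMail(): Promise<void> {
  const fake = new FakeMailSender();
  const service = new WelcomeService(fake);

  await service.welcome("ada@example.com");

  assert.strictEqual(fake.sent.length, 1);
  assert.strictEqual(fake.sent[0].to, "ada@example.com");
}

testWelcomeSendsOneMail().then(() => console.log("ok"));
```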
Observability
So, you did everything in your power and tested the software for every case you could think of… however, there are the unknown unknowns (the things we don’t know that we don’t know). That means: despite my best efforts, I test in production, and so do you… And that’s the moment when observability comes in handy. By that we mean systems which are easy to debug and whose logs provide enough information about what is happening right now. Observability is about being able to ask arbitrary questions about your software environment without having to know ahead of time what you wanted to ask.
The secret to this is to hope but verify, and to measure everything (remember, hope is not a winning strategy). Your software should provide timers, gauges, counters and arbitrary events which can later be queried in operation to gain insight into the running system, reducing incident-handling time from days to minutes. Don’t forget: the best companies out there are able to deliver a fix for an incident within 1–2 hours on average.
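In a real system you would reach for an established metrics library; the TypeScript sketch below merely illustrates the idea of counters and timers with a deliberately tiny, hypothetical in-memory recorder.

```typescript
// A deliberately tiny in-memory metrics recorder, just to illustrate the idea.
class Metrics {
  private counters = new Map<string, number>();
  private timings = new Map<string, number[]>();

  increment(name: string, by = 1): void {
    this.counters.set(name, (this.counters.get(name) ?? 0) + by);
  }

  // Returns a stop function; calling it records the elapsed milliseconds.
  startTimer(name: string): () => void {
    const start = Date.now();
    return () => {
      const samples = this.timings.get(name) ?? [];
      samples.push(Date.now() - start);
      this.timings.set(name, samples);
    };
  }

  snapshot(): { counters: Record<string, number>; timings: Record<string, number[]> } {
    return {
      counters: Object.fromEntries(this.counters),
      timings: Object.fromEntries(this.timings),
    };
  }
}

// Usage: instrument the interesting code paths and query the data later.
const metrics = new Metrics();
const stopTimer = metrics.startTimer("checkout.duration_ms");
metrics.increment("checkout.requests");
// ... do the actual work here ...
stopTimer();
console.log(metrics.snapshot());
```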
Happy crafting, may the force be with you.