
Quo vadis Monolith

Writing good software is hard when there is a business behind it. I guess that nearly everyone has experienced the consequences of taking a shortcut or not having the time to revert bad design or architectural decisions. The time for feature development decreases, firefighting begins, and people start digging for the cause, taking a step back and looking at their system. If this happened during the past years, when the concept of microservices appeared,
then chances are high that the root cause was pinned on the giant named Monolith. But is this justified? I remember a quote I read some years ago, which says:

"a root cause is just a point in the retrospection, where a consent is found, not to go deeper" (unknown) and to some extent, it can be laid out for Monoliths, which have a bad reputation at the moment.

Anyhow, we should not blame monolithic architectures for reasons like the ones mentioned above. If a business lacks the opportunity to constantly reduce technical debt as part of its process, it will end up in the very same position once again, regardless of the chosen architecture, just tackling different problems.
A distributed system favors smaller building blocks, as they promise more flexibility with regard to scaling and maintenance. They even have a long tradition in IT; just look at IPC/RPC, SOA and so forth.
So, how can it be that some projects succeed and others do not if this cycle is repeated over and over again? It shouldn't really matter whether a function call fails due to local or remote errors - that is what the Uniform Access Principle (UAP) tells us, if we don't look too strictly at its notation part - and frameworks and tooling make a great effort to hide the complexity behind it, although hiding complexity is more likely a shift of it to somewhere else. Sure, a remote call may fail more often or be temporarily unavailable, but that can be handled by exception paths. Let us look at some challenges for service-based architectures:

  • it should not assume that all services are available during the startup phase (see the sketch after this list).
  • it should use a discovery mechanism that supports versioning, as compatible changes are trickier to achieve.
  • it has different requirements regarding a monitoring solution - seeing the overall progress of a flow that is split across services can be important, to name one example.
  • it impacts the local development process with every new service, and an answer to this should be found early.
  • it should value the avoidance of duplication, as maintenance problems may arise otherwise.
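
To illustrate the first point, here is a minimal sketch, assuming a hypothetical LazyRemoteService wrapper: instead of failing during startup, the dependency to another service is resolved lazily, and callers handle its absence through an explicit exception/fallback path.

import java.util.Optional;
import java.util.function.Supplier;

// Hypothetical wrapper that defers the connection to another service instead of
// assuming its availability during the startup phase.
public class LazyRemoteService<T> {

    private final Supplier<T> factory; // creates the actual client, may throw if unreachable
    private volatile T delegate;       // cached instance once the remote side has answered

    public LazyRemoteService(Supplier<T> factory) {
        this.factory = factory;
    }

    // Returns the client if the remote service is currently reachable, empty otherwise,
    // which forces callers into an explicit exception/fallback path.
    public Optional<T> get() {
        if (delegate == null) {
            try {
                delegate = factory.get();
            } catch (RuntimeException unavailable) {
                return Optional.empty();
            }
        }
        return Optional.of(delegate);
    }
}

Circuit breakers and retries push this idea further, but the essence stays the same: availability becomes a runtime concern rather than a startup precondition.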

This covers just a few basic points, and you may have already added some more challenges in your head. Complexity can be seen as water: a dam solves a local problem but puts a burden on other regions - not that the same couldn't be observed in Monoliths. We further tend to develop each service in isolation, creating a repository for each service, which makes things even worse, and this is why:

  • cognitive complexity increases if a feature/fix requires adaptations of many services.
  • compatible deployments are way harder to achieve, due to the lack of atomic changes.

Well, one could say that microservices should be developed in a certain way to avoid some of those issues, but still, this has to be understood and implemented by every team member in their daily work.

However, why not take smaller steps towards the goal of delivering better software and start writing code that fits the actual business requirements? Do we really need to scale that large? We don't have to move from one end of the spectrum to the complete opposite; there is something in between. I was lucky to begin my career in an organization which used OSGi to implement its software, and that made me think in modules from the very beginning. Before we start with modules, please have a look at the basic structure.

A module may contain a scope for the API, the internal implementation, or both and can be described by:

$group.$artifact.api..          // artifact should contain the module's name
$group.$artifact.internal..

And secondly, modules may depend on other modules, but API code is strictly not allowed to use internal code, neither that of a module it depends on nor its own. Simple and unit testable.
What does the second rule imply? Since internal code can't be accessed directly, it must be provided through the API or in the form of service lookups - oh, services - which is not so far away from the concept of service discovery used by distributed services. It is just named differently: a service locator in terms of Java (the new module system leverages it, but the mechanism is far older than that) or a service registry in the world of OSGi.
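
As a rough sketch of that idea - all names below (com.example.greeting, Greeter, DefaultGreeter) are hypothetical - the API scope of a module could expose an interface and resolve its internal implementation through Java's ServiceLoader:

// API scope - the only package other modules are allowed to use
package com.example.greeting.api;

import java.util.ServiceLoader;

public interface Greeter {

    String greet(String name);

    // Service lookup: returns an implementation registered in
    // META-INF/services/com.example.greeting.api.Greeter (or declared via
    // 'provides ... with ...' in a Java 9+ module-info).
    // Static interface methods need Java 8; a small factory class works for Java 7.
    static Greeter load() {
        for (Greeter greeter : ServiceLoader.load(Greeter.class)) {
            return greeter; // first registered implementation wins
        }
        throw new IllegalStateException("no Greeter implementation registered");
    }
}

// Internal scope, in its own file - never referenced by other modules
package com.example.greeting.internal;

import com.example.greeting.api.Greeter;

public class DefaultGreeter implements Greeter {

    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

A consumer calls Greeter.load().greet("world") and only ever depends on the api package, so DefaultGreeter can later be replaced - for instance by a remote implementation - without touching the consumer.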

For example, a module can provide

  • code for core functionalities or utilities | API modules
  • code or services for cross-cutting concerns, like security or logging | Mixed modules
  • services in form of UI parts or web endpoints | Internal modules
  • ...

So instead of having one project, we create a project that hosts many separate module projects. All popular build tools like Gradle or Maven support multi-module builds, and Eclipse and IntelliJ do as well.
My recommendation would be not to place the modules directly under the project root, but under some intermediate directory instead. The good news is that we neither have to move to Java 9 nor to use OSGi to write modules, even though their tooling, patterns and feedback cycles might be better. The approach can also be used in conjunction with Java 7 or 8, Android or even Spring. Modules are a design concept.
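
A possible layout under these assumptions - the directory and module names are made up for illustration - could look like this:

my-project/
    settings.gradle (or a parent pom.xml)   // lists/aggregates the module projects
    modules/                                // the intermediate directory
        core/                               // $group.core.api.. / $group.core.internal..
        security/
        web/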

Having said all this, I should rather have written that writing good software is hard if we don't get proper feedback. One problem that arises with a larger code base is that everything is reachable from everywhere, and proper feedback is nearly impossible because our intents can't be expressed easily. But as soon as we split code into different modules or write manifest files, we share this knowledge and receive feedback about our actions. Feedback enables retrospection about our design, and architectures won't erode as quickly.

A multi-module Monolith gives us the opportunity to start with a well-known, simple process at the beginning and to turn single modules into distributed services later - or back, if the outcome doesn't meet the expectations.
The same holds true for prototyping and team ownership.

Nonetheless, it is quite interesting what people write about the 80/20 rule in terms of software development, and I would like to hear your thoughts about this and whether you are interested in some additional topics.

The photograph for this article was taken by Austin Neill.


Become a backer or share this article with your colleagues and friends. Any kind of interaction is appreciated.
