Microservice revolution
What can go wrong?

The microservice approach was supposed to simplify development, allow parts of the system to be upgraded in isolation, and let teams choose the best programming language and framework for a given task.

The new approach was elegant. It promised a lot. Its execution, however, is rarely elegant or efficient. We build microservices because they are popular, not because they are required to build the best solution. It is simpler to build a REST API and expose it to front-end developers than to build an MVC application and cooperate with the developers responsible for the view layer.

The idea of extracting and isolating a given set of functionalities is often abused. We tend to encapsulate a set of functions in an application, hosted in a container, preferably in the cloud. To guarantee separation of concerns, we introduce physical boundaries that ban certain pieces of code from interacting directly, instead of properly designing a monolithic application.

Common errors

  • Unnecessary separation of front-end and back-end parts of the application,
  • A group of producers and consumers that could be reduced into a single module,
  • Macroservices,
  • Distributed monoliths,
  • Distributed transactions.

The most common error I have encountered is the premature decision to separate the front end and back end into independent applications. Its most common cause is front-end developers tempted to try out new and popular frameworks even though no business need justifies such an approach. It is quite easy to mitigate: every project architecture decision should be justified, and stakeholders should either reject risky and costly ideas or accept them with all their drawbacks.

Another common error can be described as a chain of requests and responses that could be reduced to a single operation. The root cause of this problem is premature generalization of operations: an attempt to reuse a piece of logic between projects brings fewer benefits than the communication overhead costs. It can be easily mitigated by extracting the common code into a library used by the different projects, or by not generalizing prematurely in the first place.
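As a minimal sketch of the library approach (all module and function names here are hypothetical, not taken from any real project), the shared logic lives in a plain module that each project imports and calls in-process, rather than behind a service boundary:

```python
# shared_validation.py - a hypothetical internal library, published as a
# package and imported by every project that needs the logic.
# No HTTP hop, no producer/consumer chain: just a function call.

def normalize_email(address: str) -> str:
    """Canonicalize an e-mail address the same way in every project."""
    return address.strip().lower()

def is_valid_email(address: str) -> bool:
    """A deliberately simple validity check shared across projects."""
    normalized = normalize_email(address)
    return "@" in normalized and "." in normalized.split("@")[-1]

# Any consuming project calls the code directly, in-process:
assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
assert is_valid_email("bob@example.org")
assert not is_valid_email("not-an-address")
```

Versioning the library and letting each project upgrade on its own schedule also avoids the lock-step deployments a shared service would force.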

Macroservices are the result of prematurely extracting a given area of the project into a separate application. When new requirements are introduced, they are added to the application covering similar behaviour. The application grows over time and starts to cover more and more use cases that are tightly coupled to one another. In the worst case, new requirements depend on entities or operations already implemented in other services. This leads to another anti-pattern: distributed monoliths.

Over time, services accumulate responsibilities and start to rely on one another. How can you check whether your project has become a distributed monolith? There are a few typical smells you can observe.

  • Changes in one service force changes in the services interacting with it.
  • It takes time to make sure that a change introduced in one application doesn’t cascade into errors in other applications. 
  • New versions of your services need to be deployed simultaneously or the application breaks.
  • Changes in one service are delayed until another service is finished.

Distributed monoliths quite often require synchronized sets of operations in separate applications to correctly create or update an entity. This forces you to add extra tools or middleware to ensure that if any operation fails, all the others are reverted. And it is difficult to guarantee that a completed operation in one service reverts successfully when an operation in a different service has failed.
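A minimal sketch of the compensation logic such middleware has to implement (all function names are illustrative; a production version also needs persistence and idempotent, retryable reverts, precisely because a revert can itself fail):

```python
def run_with_compensation(steps):
    """Run (action, compensate) pairs; on failure, revert completed steps."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        # Best effort only: a compensation can itself fail, which is
        # exactly why distributed transactions are hard to get right.
        for compensate in reversed(completed):
            compensate()
        raise

log = []

def reserve_stock():  log.append("reserve stock")
def release_stock():  log.append("release stock")
def charge_payment(): raise RuntimeError("payment service failed")
def refund_payment(): log.append("refund payment")

try:
    run_with_compensation([(reserve_stock, release_stock),
                           (charge_payment, refund_payment)])
except RuntimeError:
    pass

# The stock reservation completed and was reverted; the payment never did,
# so its compensation is never run.
assert log == ["reserve stock", "release stock"]
```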


These mistakes come with concrete costs:

  • Encode–decode overhead,
  • HTTP communication where in-process calls would do,
  • Multiple requests to different APIs to show one screen,
  • A complex deployment process,
  • Tools introduced only because different parts of the system depend on each other.

Most of the time, APIs produce and accept JSON documents. If a distributed monolith passes a JSON representation of an entity from one endpoint to another in order to complete an operation, we are wasting resources: every unnecessary encode and decode consumes CPU time.
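To make the waste concrete, here is a small benchmark sketch (the entity shape is made up) comparing an in-process call with the encode/decode round trip that a JSON hop forces on every request:

```python
import json
import timeit

# A made-up entity of the kind a distributed monolith shuttles around.
order = {"id": 42, "items": [{"sku": f"SKU-{i}", "qty": i} for i in range(50)]}

def direct_call(entity):
    # In-process: the object is passed by reference, nothing is copied.
    return len(entity["items"])

def via_json(entity):
    # Across a service boundary: serialize on one side, parse on the other
    # (real network latency would come on top of this).
    payload = json.dumps(entity)
    received = json.loads(payload)
    return len(received["items"])

assert direct_call(order) == via_json(order) == 50

t_direct = timeit.timeit(lambda: direct_call(order), number=10_000)
t_json = timeit.timeit(lambda: via_json(order), number=10_000)
print(f"direct: {t_direct:.4f}s  json round trip: {t_json:.4f}s")
```

The round trip buys nothing here: both sides speak the same language and could have shared the object in memory.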

If a business process is scattered across multiple microservices, HTTP communication overhead is added to every request. Time spent sending data over the network is often orders of magnitude higher than the processing time itself, so if one request has to contact several microservices, its total latency grows fast.

In a microservice architecture, different responsibilities are quite often implemented in separate applications. If the data required to complete a single front-end task is scattered across multiple applications, you can end up with multiple backend requests just to gather everything needed to render one screen. This symptom may indicate a flawed initial design, or that new requirements led to complex relationships between API endpoints across applications and the costly architecture changes they called for were never made.
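A sketch of what the client (or an aggregating layer) ends up doing — the service names and latencies are invented, and `asyncio.sleep` stands in for the HTTP round trips. Even when the calls are issued concurrently, the screen pays for the slowest service and depends on all of them being up:

```python
import asyncio

async def fetch(service: str, latency: float) -> dict:
    """Stand-in for an HTTP call to one backend service."""
    await asyncio.sleep(latency)
    return {"service": service, "data": f"payload from {service}"}

async def render_screen() -> set:
    # One screen, three services: the data is scattered, so someone
    # has to fan out the requests and merge the results.
    results = await asyncio.gather(
        fetch("users", 0.05),
        fetch("orders", 0.08),
        fetch("recommendations", 0.03),
    )
    return {r["service"] for r in results}

print(asyncio.run(render_screen()))
```

A common mitigation is an aggregating backend-for-frontend endpoint, which moves the fan-out server-side but does not remove the underlying coupling.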

You might also end up with an entity model shared across multiple microservices. The APIs using this model need to be updated at the same time, otherwise the application breaks. Deployment of one service is delayed until another is ready, all instances of the services have to be replaced together, and the application suffers long maintenance downtime. You also need to spend more effort on integration tests.

Problems caused by wrong decisions are often patched with tools that would be unnecessary if the application had been built correctly in the first place. Distributed transactions are patched with a messaging system and complex logic for undoing an operation when another operation in another service fails. Additional data sources are added to group data from multiple systems into a denormalized document database, and maintaining that database requires yet another service implementing logic that would not otherwise be needed. Microservices are packed into containers, e.g. Docker images, and developers are forced to run the containerized microservices that interact with the service they are developing.

Reducing the risk of failure

Before deciding whether microservice architecture is the right approach for your next assignment, answer the following questions:

  • Is my backlog stable? Can a stakeholder introduce a requirement that complicates the structure of my microservice world, adding relationships between entities stored in different databases and safeguarded by different applications?
  • Are there sequences of operations that need to be performed in a transactional manner? Do they reside in one application? Is there a risk of introducing a new one outside this application?
  • Will my system be still usable if one of the microservices fails?
  • Am I prepared to mitigate the risk of cascading failures in the microservices that interact with mine when I change one of its endpoints?
  • What about backwards compatibility? How will I know nobody uses my deprecated API anymore?
  • Am I prepared for tracing logs from multiple sources?
  • Am I prepared for tracing a single request across multiple application logs?
  • How will I measure overall performance?
  • Will I be able to trace a bottleneck?

These are the typical problems I have encountered in microservice projects. If the risks exposed by your answers to these questions are spotted late, they are a symptom of a project in danger.
