
Microservices vs Monolith: When Each One Actually Makes Sense

Article • 11 min read
#microservices #architecture #software-design #distributed-systems #pragmatism


Let's start with the thing nobody says out loud: most teams that adopted microservices did not do it because their engineering problems required it. They did it because microservices were fashionable, and "we're a microservices shop" sounded more serious than "we have a Rails monolith."

The result, in many cases, is a distributed system with all the complexity of microservices and none of the benefits — because the benefits of microservices only appear at a particular intersection of scale, team structure, and operational maturity that most organizations never reach.

This article is not an argument for monoliths over microservices, or the reverse. It is an attempt to give you the actual trade-offs — operational cost, latency, observability, organizational complexity — so you can make the decision correctly for your context, not someone else's.

What a monolith actually is

The word "monolith" has been used as a pejorative for so long that it is worth pausing to define it correctly.

A monolith is a single deployable unit containing all the application's functionality. It can be a single process (a Rails app, a Django app, a Spring Boot service) or multiple processes that are deployed together (the "modular monolith" or "majestic monolith"). The defining characteristic is that deployment is atomic — you release everything together.

Monoliths are not inherently messy or poorly structured. A well-designed monolith has clear internal module boundaries, enforced by code conventions, package visibility rules, or architectural fitness functions. The modularity is internal rather than service-level.

The failure mode of a monolith is the Big Ball of Mud — a codebase where everything knows about everything, boundaries are implicit or nonexistent, and every change requires understanding the entire system. But this is a failure of engineering discipline, not an inherent property of monolithic architecture.

What microservices actually are

A microservices architecture decomposes the application into independently deployable services, each owning a bounded context of the business domain, communicating with other services over the network (HTTP, gRPC, messaging).

The defining properties are: independent deployability (you can release Service A without releasing Service B) and bounded ownership (each service is owned by one team and has its own data store).

These properties are valuable — but they come at a cost that is often dramatically underestimated.

The real operational costs of microservices

When you split a process into services, you trade in-process function calls for network calls. This is the central trade-off, and every other cost follows from it.

Latency compounds across service boundaries

In a monolith, a function call that crosses module boundaries costs nanoseconds. In a microservices system, the equivalent network call costs milliseconds — a difference of five to six orders of magnitude. For a request that crosses ten service boundaries, that latency compounds. Add network jitter, retry logic, and timeout handling, and you have a distributed system where the tail latency of the whole is worse than the tail latency of any individual part.
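The compounding effect is easy to see in a minimal simulation (the numbers here are illustrative, not a benchmark): give each service call a 1% chance of being slow, and measure how often a request that crosses ten boundaries hits at least one slow call.

```python
import random

def call_latency_ms() -> float:
    """One service call: 99% fast (~1 ms), 1% slow (50-100 ms)."""
    if random.random() < 0.99:
        return random.uniform(0.5, 1.5)
    return random.uniform(50.0, 100.0)

def request_latency_ms(hops: int) -> float:
    """A request that crosses `hops` service boundaries sequentially."""
    return sum(call_latency_ms() for _ in range(hops))

def slow_fraction(samples, threshold_ms: float = 20.0) -> float:
    """Fraction of requests slower than the threshold."""
    return sum(1 for s in samples if s > threshold_ms) / len(samples)

random.seed(42)
one_hop  = [request_latency_ms(1)  for _ in range(10_000)]
ten_hops = [request_latency_ms(10) for _ in range(10_000)]

# With ten hops, the chance of hitting at least one slow call is
# 1 - 0.99**10, roughly 9.6% -- nearly ten times the single-hop rate.
print(f"slow requests, 1 hop:   {slow_fraction(one_hop):.1%}")
print(f"slow requests, 10 hops: {slow_fraction(ten_hops):.1%}")
```

The tail that affected 1% of single-hop calls now affects nearly 10% of user-facing requests, which is exactly why tail latency dominates the engineering conversation in fan-out architectures.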

This is not hypothetical. Netflix has published extensively about the challenges of managing P99 latency in a system where a single user request may fan out to dozens of microservices. Its investment in Hystrix (circuit breakers), Ribbon (client-side load balancing), and Eureka (service discovery) was a direct response to this problem — and it represented an enormous ongoing engineering investment.
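The circuit breaker idea at the heart of Hystrix can be sketched in a few lines. This is a toy state machine, not Netflix's implementation (which adds thread-pool isolation, rolling metrics windows, and fallbacks), but it shows the core behavior: after repeated failures, stop calling the dependency and fail fast until a cooldown elapses.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after N consecutive failures,
    then allows a trial call after a cooldown (half-open)."""

    def __init__(self, failure_threshold: int = 3, reset_after_s: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None  # timestamp when the circuit opened

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                # Fail fast instead of piling load onto a sick dependency.
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit
        return result
```

A monolith needs none of this, because an in-process function call cannot time out on the network.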

The distributed systems tax is real and substantial

Every microservices system must solve a set of problems that simply do not exist in a monolith:

- Service discovery: how does Service A find a healthy instance of Service B?
- Partial failure: every network call can time out, so callers need retries, timeouts, and circuit breakers.
- Data consistency: transactions cannot span services, so multi-service workflows need sagas, compensation logic, or eventual consistency.
- Contract management: API and event schemas must be versioned so producers and consumers can deploy independently.
- Security: service-to-service traffic must be authenticated and encrypted.

None of these problems are unsolvable. But each one requires investment: tooling, operational expertise, and ongoing maintenance. A team of five needs to decide whether solving these problems is a better use of their time than shipping features.

Observability becomes a first-order concern

Debugging a monolith is relatively straightforward: you have a call stack, a debugger, and local reproduction. The failure mode is visible.

Debugging in a microservices system is fundamentally different. A slow user request might be caused by:

- any one of the dozens of downstream services in the request's fan-out
- a retry storm amplifying load on an already-degraded dependency
- a timeout misconfiguration cascading failures up the call chain
- a saturated shared dependency (database, cache, message broker) whose symptoms only surface several hops away

Without distributed tracing (correlating logs and spans across service boundaries using a shared trace ID), finding the root cause is an exercise in correlated log archaeology. This is solvable — but it requires investment in observability infrastructure that a monolith simply does not need.
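The trace-ID mechanics are simple in principle; the cost is enforcing them in every service and every log line. A minimal sketch of the idea, using in-memory stand-ins for real HTTP calls and a log pipeline (the header name and service names are illustrative):

```python
import uuid

# Stand-in for a centralized log store; every entry carries the trace ID
# so logs from different services can be joined on it after the fact.
LOG = []

def log(trace_id: str, service: str, message: str) -> None:
    LOG.append((trace_id, service, message))

def handle_edge_request(headers: dict) -> dict:
    """Edge service: generate a trace ID if the caller did not send one,
    then propagate it on every downstream call."""
    trace_id = headers.get("X-Trace-Id") or uuid.uuid4().hex
    log(trace_id, "edge", "received request")
    return call_downstream("billing", {"X-Trace-Id": trace_id})

def call_downstream(service: str, headers: dict) -> dict:
    # Stand-in for an HTTP/gRPC call; the key point is that the
    # trace ID rides along in the headers unchanged.
    trace_id = headers["X-Trace-Id"]
    log(trace_id, service, "handled request")
    return {"X-Trace-Id": trace_id, "status": "ok"}
```

In practice this is what instrumentation libraries and standards like W3C Trace Context automate; the sketch just shows why a single shared ID is the thing that makes cross-service debugging tractable.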

Deployment complexity multiplies

A monolith has one deployment pipeline. Microservices have N deployment pipelines, each with its own CI/CD configuration, container image, infrastructure definition, and release process. The coordination overhead grows with N.

Worse: deployments become interdependent even in an "independently deployable" system. If Service A publishes a new schema for an event that Service B consumes, both must be deployed in a coordinated window or one of them will fail. Managing these coordinated deployments at scale (100+ services) requires significant tooling investment.
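One common way to loosen this coupling is the tolerant-reader pattern: consumers validate only the fields they need, supply defaults for fields added in newer schema versions, and ignore unknown fields rather than rejecting them. A sketch, using a hypothetical OrderCreated event:

```python
def parse_order_created(event: dict) -> dict:
    """Tolerant reader for a hypothetical OrderCreated event.

    Required fields are validated, fields added in newer schema
    versions get defaults, and unknown fields are ignored -- so the
    producer and this consumer can deploy independently."""
    for field in ("order_id", "amount"):
        if field not in event:
            raise ValueError(f"missing required field: {field}")
    return {
        "order_id": event["order_id"],
        "amount": event["amount"],
        # Added in schema v2; the default keeps v1 producers
        # deployable without coordinating with this consumer.
        "currency": event.get("currency", "USD"),
    }
```

This does not eliminate the coordination problem, but it shrinks the set of changes that require a lock-step deploy to genuinely breaking ones.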

Where microservices create genuine value

After all of that, microservices do have genuine advantages — but they appear only under specific conditions.

Independent scaling

A monolith scales as a unit — if one dimension of your application is computationally expensive (say, image processing), you must scale the entire monolith to handle it. In a microservices system, the image processing service can be scaled independently, with dedicated resources matched to its specific needs.

This matters when different parts of your system have dramatically different resource profiles. It does not matter when your entire application has uniform load — which is the vast majority of products at most stages of growth.

Independent deployment velocity

When multiple teams are shipping features to the same codebase, deployments become coordination problems. Team A wants to release their feature on Tuesday. Team B has a bug fix they need in by end of day Monday. Team C is in the middle of a migration that leaves the codebase in an intermediate state. The result is a deployment queue, long release windows, and teams blocking each other.

In a microservices system, each team owns a service and deploys independently. Team A can release Tuesday without any knowledge of what Teams B and C are doing. This is the killer feature of microservices at organizational scale — and it is almost entirely irrelevant for a single team.

Technology heterogeneity

There are workloads where the right tool is not your monolith's primary language. A machine learning inference service might run Python. A high-throughput data ingestion service might need Go or Rust. An event processing pipeline might be best expressed in Flink.

Microservices allow these workloads to live as services in their natural language, integrating via network APIs. A monolith forces everything into one language and runtime. This advantage is real but applies to a narrow slice of workloads.

The organizational complexity dimension

Conway's Law is the most underrated insight in software architecture: your system's structure tends to mirror your organization's communication structure.

This has a corollary that is rarely stated: you cannot adopt microservices faster than you can distribute ownership. If you decompose your codebase into fifteen services but you still have one team responsible for all of them, you have multiplied operational complexity without gaining any of the organizational benefit.

Microservices make sense when:

- multiple teams ship to the same domain and block each other's deployments
- parts of the system have genuinely different scaling or resource profiles
- ownership can actually be distributed: one team per service, end to end
- the organization already has the operational maturity (distributed tracing, mature CI/CD, a real on-call rotation) to run them

Microservices are a liability when:

- a single small team owns everything, so "independent deployability" buys nothing
- load is roughly uniform across the application
- domain boundaries are still shifting, so every service split ends up being renegotiated
- the observability and deployment infrastructure they require does not exist yet

The modular monolith: the middle ground nobody talks about enough

The false binary in this debate is monolith vs. microservices. There is a third option that is better for most organizations at most stages: the modular monolith.

A modular monolith is a single deployable unit with strong internal module boundaries, enforced through code structure, package visibility, and architectural tests. Each module has a defined public API that other modules must use — direct internal access is prohibited by convention or tooling.
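Boundary enforcement can be automated with an architectural test that fails the build on forbidden imports. Tools such as ArchUnit (Java) or import-linter (Python) do this properly; the sketch below shows the idea on hard-coded source strings, with a hypothetical rule that cross-module imports must go through a module's api package:

```python
import ast

# Hypothetical layout: each top-level package is a module, and other
# modules may only import its `api` submodule -- never its internals.
SOURCES = {
    "billing/service.py": "from billing.db import Invoice\n",
    "orders/service.py":  "from billing.api import create_invoice\n",
    "orders/report.py":   "from billing.db import Invoice\n",  # violation
}

def boundary_violations(sources: dict) -> list:
    """Return a list of imports that reach into another module's internals."""
    violations = []
    for path, code in sources.items():
        own_module = path.split("/")[0]
        for node in ast.walk(ast.parse(code)):
            if isinstance(node, ast.ImportFrom) and node.module:
                parts = node.module.split(".")
                # Imports within your own module are always allowed;
                # cross-module imports must target `<module>.api`.
                if parts[0] != own_module and parts[1:2] != ["api"]:
                    violations.append(f"{path} imports {node.module}")
    return violations
```

Run as part of CI, a test like this turns module boundaries from a convention into a build-breaking invariant.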

This gives you:

- a single deployment pipeline and atomic releases
- in-process call latency instead of network hops
- one debugging context, with a real call stack
- module boundaries that double as seams, making later extraction into services far cheaper if the need ever becomes real

Stack Overflow, Shopify, and Basecamp famously run on well-maintained monoliths. The common thread: disciplined modular design combined with exceptional engineering quality.

A practical decision framework

Here is the honest rubric:

Stay with a monolith if:

- one team (or a handful of people) owns the whole system
- load is roughly uniform and a single scaling unit is affordable
- you do not yet have distributed tracing, mature CI/CD, and an on-call rotation

Move toward microservices when:

- multiple teams are measurably blocking each other's releases
- specific workloads need fundamentally different resources or deployment cadence
- you can give each extracted service a clear owner and its own data store

Use a modular monolith when:

- you want clear domain boundaries without paying the distributed systems tax
- you expect to extract services eventually but the pressure is not yet real
- the team values deployment simplicity and fast local debugging

The migration path that actually works

If you are moving from a monolith toward services because the need is real, the strategy that consistently works:

  1. Identify the extraction candidate by the right signal — not "this module is big," but "this module needs to deploy at a different frequency than the rest" or "this workload needs fundamentally different resources."
  2. Enforce module boundaries first, in the monolith — if the module you want to extract is tangled with the rest of the codebase, extract the boundaries before extracting the service.
  3. Extract one service at a time, validate, then continue — the strangler fig pattern. Redirect traffic to the new service gradually. Maintain both the old and new implementation until confident. Do not attempt mass migration.
  4. Invest in shared infrastructure before extraction — distributed tracing, service mesh, CI/CD templates. If these do not exist before you extract the first service, you will build them reactively under pressure.
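The gradual redirection in step 3 is often implemented as deterministic percentage-based routing, keyed on a stable identifier so each user consistently hits the same implementation while the percentage is ramped up. A minimal sketch (the service names and routing key are illustrative):

```python
import hashlib

def route(user_id: str, new_service_pct: int) -> str:
    """Strangler-fig routing: send a deterministic slice of traffic to
    the extracted service, keyed on user ID so a given user always
    lands on the same implementation at a given ramp percentage."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99
    return "new-service" if bucket < new_service_pct else "monolith"
```

Ramping 1% → 10% → 50% → 100% while watching error rates and latency gives you a rollback path at every step: setting the percentage back to 0 restores the monolith for everyone.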

Closing thought

The best system architecture is the simplest one that solves the organizational and technical problems you actually have today, with a clear path to evolve when those problems change.

For most products at most stages, that is a well-structured modular monolith. For products with multiple teams shipping to the same domain, distributed workloads, and operational maturity for distributed systems, services become worth their cost.

The mistake is choosing your architecture to match an aspiration about the scale you hope to reach, rather than the reality of the problems you have now. Premature microservices are as costly as premature optimization — and far more damaging to team velocity.

Start simpler than you think you need. Evolve deliberately when the pressure is real.