The microservices conversation has a particular texture. It begins with a problem — latency, deployment coupling, team autonomy — and ends with a topology: twenty-three services, a message queue, and a service mesh that requires a dedicated engineer to operate.
The original problem is sometimes solved. The new problems are rarely mentioned.
What microservices actually solve
The genuine case for distribution rests on three foundations:
Independent scaling. If your checkout service requires ten times the compute of your catalog service at peak load, separation allows you to scale each appropriately.
Independent deployment. If your product has multiple teams shipping at different cadences, shared deployment cycles create coordination overhead that distribution eliminates.
Fault isolation. If one component can fail without cascading to others, the system's reliability profile improves.
These are real benefits. They apply to a specific class of product at a specific scale of team and traffic. The error is not in the distributed architecture — the error is in applying it before the conditions that justify it exist.
The cost that isn't in the blog post
A distributed system is a distributed system at all times, not just when it's convenient. This means:
- Every operation that touches multiple services is a distributed transaction, with all the failure modes that implies
- Observability requires correlation IDs, centralized logging, distributed tracing — infrastructure that a monolith does not need
- Local development requires service orchestration, which is either complex (Docker Compose with fifteen services) or incomplete (stubs that diverge from production)
- Network latency is now a first-order concern for operations that were previously in-process function calls
None of this is insurmountable. But the overhead is real, and it is paid continuously, not just during incidents.
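The observability overhead is concrete: once a request crosses process boundaries, every log line must carry an identifier that lets you stitch the request's path back together. A minimal sketch of what that threading looks like, with hypothetical names (`handleCheckout`, `callInventory`, the `Ctx` type are illustrative, not from the text):

```typescript
// Sketch: propagating a correlation ID across service calls so that
// logs emitted by separate services can be correlated later.

type Ctx = { correlationId: string };

const logs: string[] = [];

function log(ctx: Ctx, service: string, msg: string): void {
  // Every line carries the correlation ID, so a tracing backend
  // (or plain grep) can reconstruct one request across services.
  logs.push(`[${ctx.correlationId}] ${service}: ${msg}`);
}

// Stand-in for the "inventory service". In a monolith this would be
// an ordinary function call with no context to thread through.
function callInventory(ctx: Ctx, sku: string): void {
  log(ctx, "inventory", `reserving ${sku}`);
}

function handleCheckout(ctx: Ctx, sku: string): void {
  log(ctx, "checkout", "starting checkout");
  callInventory(ctx, sku);
  log(ctx, "checkout", "done");
}

const ctx: Ctx = { correlationId: "req-42" };
handleCheckout(ctx, "SKU-1");
console.log(logs.join("\n"));
```

In a real system the ID travels in a header (e.g. the W3C `traceparent` format) rather than an explicit parameter, but the discipline is the same: it must be generated, propagated, and logged everywhere, or the trace breaks.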
The well-considered monolith
A monolith that has been designed with internal modularity — clear boundaries between domains, explicit interfaces between modules, disciplined avoidance of cross-cutting state — provides most of the organizational benefits of microservices at a fraction of the operational cost.
The key property is internal separation, not deployment separation. A monolith where the billing module cannot directly read from the inventory database has preserved the important boundary. Whether that boundary is enforced by a network call or by a module interface is an operational question, not an architectural one.
```ts
// The boundary is in the interface, not the topology.
// billing/index.ts exports only what the rest of the system needs.
export { chargeCustomer, refundOrder, getInvoice };
// The database layer is not exported. The implementation is private.
```
When to distribute
The honest answer is: later than you think, and for more specific reasons than "that's how modern systems are built."
Distribute when you have measured the scaling problem and found that it cannot be solved by vertical scaling or caching. Distribute when you have team autonomy problems that cannot be solved by better deployment tooling. Distribute when you have fault isolation requirements that cannot be solved by graceful degradation within a single process.
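To make the last criterion concrete, graceful degradation inside a single process often looks like nothing more than a fallback at the call site. A hedged sketch, with hypothetical names (`getRecommendations`, `renderHomepage`, `DEFAULT_RECS` are illustrative):

```typescript
// Sketch: fault isolation without a process boundary. A failing
// module degrades to a safe default instead of cascading.

const DEFAULT_RECS = ["bestsellers"];

function getRecommendations(userId: string): string[] {
  // Stand-in for a component that can fail (a model lookup,
  // a cache miss storm, a dependency timeout).
  throw new Error("recommendation model unavailable");
}

function renderHomepage(userId: string): string[] {
  try {
    return getRecommendations(userId);
  } catch {
    // Degrade, don't cascade: the rest of the page still renders.
    return DEFAULT_RECS;
  }
}

console.log(renderHomepage("u1"));
```

If this pattern covers your reliability requirements, a network boundary adds failure modes without adding isolation you didn't already have.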
Until then, a well-structured monolith is not a compromise. It is the correct tool.