The Case for Modular Monoliths in Distributed Teams

Felipe Hlibco

Every architecture conversation I’ve had in the last two years eventually arrives at the same question: “When do we move to microservices?”

Not if. When.

I think that’s the wrong framing. At TaskRabbit, I manage a team of nine engineers spread across four time zones. We’ve been through the architecture discussion more than once, and what I keep coming back to is this: the coordination overhead of microservices might actually be worse than the monolith problems they’re supposed to solve.

The Microservices Tax

Microservices sound great on conference slides. Service boundaries, independent deployability, team autonomy. But here’s what the slides skip — the operational reality for a team that isn’t collocated.

Every service boundary is also a network boundary. Network boundaries mean latency, retry logic, circuit breakers, distributed tracing. You need service discovery. You need a deployment pipeline per service (or a monorepo pipeline that’s arguably harder to maintain). You need to think carefully about data access patterns across service boundaries, which means either duplicating data or building synchronization mechanisms that are themselves distributed systems.
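To make "retry logic, circuit breakers" concrete, here is a toy sketch in Python of the plumbing every network boundary tends to accumulate. This is illustrative only, not production code and not any particular library's API:

```python
import random
import time


class CircuitOpenError(Exception):
    """Raised when the circuit breaker refuses to even attempt a call."""


class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    failures, then fails fast until `reset_after` seconds have passed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise CircuitOpenError("circuit is open; failing fast")
            # Half-open: allow one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result


def call_with_retries(fn, attempts=3, base_delay=0.1):
    """Retry with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except CircuitOpenError:
            raise  # don't hammer a circuit that is already open
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

Multiply this by every service-to-service call path, and add distributed tracing and service discovery on top, and you have a sense of the tax.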

For nine engineers in four time zones, that’s not simplification. That’s a full-time job nobody signed up for.

And I’m not talking hypothetically. I’ve watched teams — good teams, teams with smart people — spend months getting their service mesh sorted before they could ship a single feature. You can’t iterate on product when you’re still fighting your infrastructure.

What a Modular Monolith Actually Gives You

The term “modular monolith” gets dismissed as a compromise. I’d argue it’s a deliberately chosen architecture that gives you most of what microservices promise without the operational cost.

A modular monolith is a single deployable unit with well-defined internal boundaries. Modules own their data. They expose explicit interfaces to each other. Teams can work on different modules in parallel without stepping on each other — same as microservices. But deployment is one pipeline, one artifact, one rollback strategy.

Shopify published a detailed account of their modular monolith migration back in February 2019. They had a massive Rails monolith (arguably one of the largest in production) and instead of breaking it into microservices, they decomposed it into components with enforced boundaries. Their engineering blog walks through the whole process: how they defined component boundaries, enforced them with tooling, and gradually untangled the dependency graph.

The key insight from their approach: strong module boundaries don’t require distributed systems complexity. InfoQ covered the migration in July 2019 and highlighted that Shopify’s team maintained velocity during the transition precisely because they weren’t dealing with network boundaries. Worth reading if you haven’t.

Why This Matters More for Distributed Teams

Here’s what I don’t see discussed enough: microservices introduce coordination costs that scale with team distribution.

When your engineers are in four time zones, deploying a change that touches three services means coordinating across those time zones. Synchronous handoffs are expensive. Async handoffs are slow. Either way, you’re paying a tax that a single deployment pipeline just eliminates.

With a modular monolith, an engineer in Pacific time can open a change to the payments module, a teammate in Eastern time reviews it, and it ships with the next deploy. No cross-service integration testing; no contract versioning between services; no "which version of the user service are we running against?" (That last one has cost me two hours in a single conversation. Never again.)

One deployment pipeline. One CI/CD configuration. One set of environment variables to manage.

The Escape Hatch

The strongest argument for a modular monolith is that it preserves optionality. If a module genuinely needs to scale independently — and I mean genuinely, not hypothetically — you can extract it into a service. The module already has a defined boundary and an explicit interface; the extraction path is clear.
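A sketch of that extraction path, using a hypothetical user-directory module: callers depend on an interface, so the in-process implementation can later be swapped for one that calls the extracted service, without touching any call sites. All names here are illustrative:

```python
from typing import Callable, Dict, Protocol


class UserDirectory(Protocol):
    """The interface callers code against (hypothetical example)."""

    def get_email(self, user_id: str) -> str: ...


class InProcessUserDirectory:
    """Today: the users module, living inside the monolith."""

    def __init__(self, users: Dict[str, str]):
        self._users = users

    def get_email(self, user_id: str) -> str:
        return self._users[user_id]


class RemoteUserDirectory:
    """Tomorrow, if extraction is ever genuinely needed: same interface,
    backed by HTTP calls to the extracted service. The transport is
    injected here so the sketch stays self-contained."""

    def __init__(self, base_url: str, http_get: Callable[[str], str]):
        self._base_url = base_url
        self._http_get = http_get  # e.g. a thin wrapper over requests.get

    def get_email(self, user_id: str) -> str:
        return self._http_get(f"{self._base_url}/users/{user_id}/email")


def notify(directory: UserDirectory, user_id: str) -> str:
    """Caller code: depends only on the interface, so swapping the
    implementation never touches this function."""
    return f"sent to {directory.get_email(user_id)}"
```

The point is that `notify` does not change when the users module becomes the users service; only the wiring at startup does.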

But you don’t pay the distributed systems tax until you actually need it. And in my experience, most teams never actually need it. They think they will, so they start with microservices and end up spending 40% of engineering time on infrastructure instead of product.

I’ve seen this play out at more than one company. The hypothesis is always “we’ll need to scale service X independently.” The reality is usually that the bottleneck turns out to be somewhere completely different, and now you have eight services to update instead of one codebase.

When Microservices Make Sense

I’m not saying microservices are always wrong. If you have 200 engineers and genuinely independent scaling requirements — one part of your system handles 100x the traffic of another — microservices earn their keep. If you’re a platform company where third parties consume your APIs with different SLAs, the isolation makes sense.

But for a team of 9? Or 20? Or even 50?

Start with a modular monolith. Enforce boundaries with tooling and code reviews. Use a monorepo with clear module ownership. Deploy from a single pipeline.
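As a sketch of what "enforce boundaries with tooling" can mean, here is a toy import checker built on Python's `ast` module. Real projects usually reach for a dedicated tool (import-linter for Python, or Shopify's packwerk for Ruby); the module names and allowed-dependency map below are purely illustrative:

```python
import ast

# Hypothetical dependency map: which internal modules each module may import.
ALLOWED = {
    "payments": {"users"},  # payments may call the users interface
    "users": set(),         # users depends on no other internal module
}


def boundary_violations(module: str, source: str) -> list:
    """Return imports in `source` that cross a boundary `module` may not cross."""
    internal = set(ALLOWED)
    violations = []
    for node in ast.walk(ast.parse(source)):
        names = []
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        for name in names:
            top = name.split(".")[0]
            # Flag imports of internal modules that aren't self-imports
            # and aren't in the module's allow-list.
            if top in internal and top != module and top not in ALLOWED[module]:
                violations.append(name)
    return violations
```

Run something like this over each module's files in CI and fail the build on violations, and the boundaries stay real without a network in between.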

Save the microservices migration for when you have a real scaling problem, not a hypothetical one. The modular monolith gives you room to grow without the coordination tax that distributed architectures impose on distributed teams.

You can always extract a service later. Merging services back into a monolith is a different kind of pain entirely.