ADRs as Feedback Loops for Distributed Performance

Felipe Hlibco

I wrote about Architecture Decision Records a while back. Back then, I saw them as documentation—a way to capture the why behind decisions so future teams wouldn’t repeat past mistakes. Useful, sure. But kind of… administrative?

I’ve changed my mind. Or rather, I’ve realized I was only seeing half the picture.

At Google scale, working with distributed teams scattered across time zones, ADRs turned out to be something else entirely. They’re feedback loops. And that distinction matters more than I expected.

From Static Records to Living Signals

Michael Nygard proposed the traditional ADR model back in 2011: title, status, context, decision, consequences. Write it, file it, reference it when someone asks “why did we do it this way?” Valuable stuff. Also—let’s be honest—pretty static.

In a distributed architecture, where semi-autonomous teams own their services and ship on their own schedules, static records don’t cut it. Decisions interact whether you plan for it or not. Team A’s caching strategy collides with Team B’s latency budget. Team C’s serialization choice creates friction with Team D’s pipeline. Each decision makes sense in isolation; the emergent behavior doesn’t.

This is where ADRs become feedback loops. When you treat them as living documents—revisited, annotated with outcomes, linked to real performance data—they stop being records of past decisions. They become sensors for architectural drift.

How This Actually Works

Here’s what I’ve started doing differently. Every ADR now gets three extra fields:

Outcome tracking. Six months later, did the expected consequences materialize? I add a dated entry with actual metrics. “ADR-047: Chose gRPC for inter-service communication. Expected: lower latency, better type safety. Actual: latency improved 40%, but the learning curve delayed Team C’s integration by 3 sprints.” The second half of that sentence—the part about Team C—is the valuable part. You don’t get that from the original decision record.

Supersession links. When a new ADR replaces an old one, both documents link to each other. Brief explanation of what changed and why. This creates a navigable history showing not just what was decided, but how the system’s needs shifted over time. Architectural thinking evolves; your documentation should show that evolution.

Cross-team impact tags. If a decision affects teams beyond the one making it, those teams get tagged. Not for approval—that creates bottlenecks—but for awareness. Tagged teams can review asynchronously and flag concerns before implementation, not after.
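Concretely, the three fields can be folded into a lightweight ADR template. Here's a hypothetical sketch reusing the gRPC example; the field names, date, and team tags are illustrative, not a standard:

```markdown
# ADR-047: Use gRPC for inter-service communication

Status: Accepted
Impacted-Teams: team-c, team-d   <!-- tagged for awareness, not approval -->
Supersedes: none
Superseded-By: none

## Context
...

## Decision
...

## Consequences (expected)
Lower latency, better type safety.

## Outcome (dated entries)
2022-06-14: Latency improved ~40%; learning curve delayed Team C's
integration by 3 sprints.
```

The point isn't the exact field names; it's that outcome, supersession, and impact all live in the same document as the decision, so a reader six months later finds the whole story in one place.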

The Remote Work Surprise

Here’s something I didn’t see coming: async-first remote work actually improved our ADRs.

Pre-pandemic, architecture decisions often happened in whiteboard sessions and hallway conversations. The decision was made verbally; the ADR (if written at all) was an after-the-fact summary that missed half the context.

When everyone went remote, decisions had to be written down from the start. The ADR wasn’t a retroactive artifact—it was the primary medium for communication, debate, and refinement. Async review in shared documents produced more thoughtful objections and better-considered alternatives than most in-person meetings I’d sat through.

I’ve seen this consistently at Google. Teams operating async-first produce better ADRs because writing is the decision-making process, not just the record of it.

Connecting ADRs to Architectural Fitness

Architectural fitness functions, a concept from Ford, Parsons, and Kua's Building Evolutionary Architectures, are automated checks that verify your architecture still meets its goals. ADRs with outcome tracking create a manual version of the same loop: revisit the decision, compare expected to actual outcomes, update your architectural model.

The feedback loop closes when a revisited ADR triggers a new one. “We chose X because of Y. Y turned out to be wrong. New ADR: switching to Z.” That chain—decision, observation, correction—is how architectures evolve intentionally rather than accidentally.
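If the supersession links are machine-readable, that chain can even be walked mechanically. A minimal Python sketch, assuming each ADR is stored as a record with an optional `superseded_by` field (a storage format I'm inventing here for illustration):

```python
# Sketch: follow supersession links to reconstruct a decision chain.
# Assumes each ADR is a dict with "id", "title", and an optional
# "superseded_by" field; the format is hypothetical, not a standard.

def decision_chain(adrs, start_id):
    """Return the chain of ADR ids from start_id through its successors."""
    by_id = {adr["id"]: adr for adr in adrs}
    chain, current, seen = [], start_id, set()
    while current is not None and current not in seen:
        seen.add(current)  # guard against accidental supersession cycles
        chain.append(current)
        current = by_id[current].get("superseded_by")
    return chain

adrs = [
    {"id": "ADR-012", "title": "Choose X because of Y",
     "superseded_by": "ADR-047"},
    {"id": "ADR-047", "title": "Y proved wrong; switch to Z",
     "superseded_by": None},
]
print(decision_chain(adrs, "ADR-012"))  # ['ADR-012', 'ADR-047']
```

Trivial code, but the output is exactly the decision-observation-correction history that otherwise lives only in people's heads.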

In distributed systems, where no single person holds the full architecture in their head, these chains become connective tissue preventing drift. Without them, each team optimizes locally and the system degrades globally. With them, local decisions get informed by system-wide outcomes.

What I’d Actually Recommend #

If your team already writes ADRs, add outcome tracking. Just that. Five minutes per ADR, six months after the fact. It transforms your decision archive from a graveyard of past choices into something that informs future ones.
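The six-month check is also easy to nudge automatically. A hypothetical sketch, assuming you track each ADR's decision date and whether an outcome entry exists (again, these field names are my own):

```python
# Sketch: flag ADRs that are due an outcome review.
# Assumes each ADR records its decision date and whether an "Outcome"
# entry has been written; field names are hypothetical.
from datetime import date, timedelta

REVIEW_AFTER = timedelta(days=182)  # roughly six months

def due_for_review(adrs, today):
    """Return ids of ADRs older than six months with no outcome entry."""
    return [
        adr["id"]
        for adr in adrs
        if not adr.get("has_outcome")
        and today - adr["decided"] >= REVIEW_AFTER
    ]

adrs = [
    {"id": "ADR-101", "decided": date(2023, 1, 10), "has_outcome": False},
    {"id": "ADR-102", "decided": date(2023, 6, 1), "has_outcome": True},
    {"id": "ADR-103", "decided": date(2023, 5, 20), "has_outcome": False},
]
print(due_for_review(adrs, date(2023, 7, 15)))  # ['ADR-101']
```

Run it in CI or a cron job and file the result as a reminder; the review itself stays human.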

If your team doesn’t write ADRs yet, start with decisions that cross team boundaries. Those cause the most pain when undocumented, and they’re where the feedback loop matters most. Don’t try to document everything—document the decisions that affect people who weren’t in the room.

The best architecture documentation doesn’t just tell you what was decided. It tells you whether it worked.