AI Software Delivery: Collaborative Intent Articulation

Felipe Hlibco

Something changed in the past six months. I don’t think we’ve fully processed it yet.

I’ve been writing code professionally for eighteen years. For all of those years, one thing stayed constant: the code was the source of truth. Requirements documents got stale. Design specs drifted. Jira tickets turned into archaeological artifacts. But the code? The code was always right — because the code was what actually ran.

That assumption is breaking down.

The Shift Nobody Named

In early 2025, a pattern started emerging across the teams I work with at Google and the developer communities I engage with through DevRel. Teams weren’t writing code first anymore. They were writing specifications — detailed, structured descriptions of what the software should do — and then handing those specs to AI agents for implementation.

This isn’t new in the way a new framework is new. It’s new in the way version control was new. It changes the fundamental artifact around which development orbits.

I’ve been calling it “collaborative intent articulation” because the existing names don’t quite capture it. Spec-driven development (SDD) is one piece. Intent-driven development is another. The AI-Driven Development Life Cycle (AI-DLC) pattern from AWS describes a third variation. But they’re all circling the same idea: the human’s job is to articulate intent clearly enough that an AI can implement it correctly.

Why “Collaborative”

The word matters. This isn’t dictation. You don’t describe what you want once and get perfect code back.

What actually happens — and I’ve watched dozens of developers go through this now — looks more like a conversation. You write a spec. The AI asks clarifying questions. You revise based on those questions (because the questions reveal ambiguities you hadn’t noticed). The AI generates a first pass. You review it, find misalignments between intent and implementation, and refine your articulation. The AI generates again.

Each cycle tightens the spec. After three or four iterations, you have two things: working code and a specification document that’s far more precise than anything you would’ve written without the AI’s probing. The spec becomes a byproduct of the collaboration, not just an input to it.

This is fundamentally different from the old “write a requirements doc, throw it over the wall to developers” model. The AI is a collaborator in the articulation process — not just a consumer of finished specs.

Spec-Driven Development Is Having a Moment

SDD has emerged as one of the most talked-about practices in the developer tooling space this year. The core idea: separate the design phase from the implementation phase using structured Markdown specs that humans validate before any AI-generated code gets committed.

GitHub released Spec-Kit, an open-source toolkit for SDD workflows. Amazon built Kiro, an IDE that treats specs as first-class citizens. Thoughtworks highlighted SDD on their Technology Radar. The pattern is gaining adoption fast.

The appeal is obvious. When you hand an AI agent a vague prompt (“build me an auth system”), you get vague output. When you hand it a structured spec with clear acceptance criteria, defined edge cases, and explicit constraints, the output quality jumps dramatically. Garbage in, garbage out applies to AI just as much as it does to databases.
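To make that concrete, here is a rough sketch of what such a structured spec might look like. The feature, acceptance criteria, and constraints below are invented for illustration; they don't come from any particular team, tool, or template:

```markdown
## Feature: Password reset via email

### Acceptance criteria
- A user who submits a registered email receives a reset link within 60 seconds.
- Reset links expire after 30 minutes and are single-use.
- Submitting an unregistered email returns the same success message
  (no account enumeration).

### Edge cases
- Three reset requests for the same account within 10 minutes:
  rate-limit, and do not send a fourth email.
- Reset attempted with an expired link: show a "request a new link" page,
  not an error page.

### Constraints
- Must use the existing transactional email service; no new vendors.
- No schema changes to the users table.
```

Notice how little of this is about implementation. Every line is an assertion an AI agent can be checked against, and a reviewer can validate the spec without reading a single line of generated code.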

But here’s what I think the SDD advocates sometimes undersell: the spec itself is the valuable artifact. Even if you threw away the AI-generated code and had a human implement from the spec, you’d be ahead. The discipline of writing a spec that an AI can consume forces a level of precision that most teams skip when they’re “just going to figure it out in the code.”

The AI-DLC Pattern

AWS published a pattern they’re calling the AI-Driven Development Life Cycle, and it formalizes something I’ve been seeing organically. The flow goes like this:

The AI creates a plan based on your initial description. It surfaces clarifying questions — not generic ones, but specific ambiguities it identified in your spec. You answer those questions, which refines the plan. The AI implements only after you’ve validated the refined plan. Then you review the implementation against the plan, flag any divergences, and the cycle tightens.

The key insight isn’t the cycle itself; iterative development has been around since the Agile Manifesto (and before). The insight is that the AI’s questions are doing real analytical work. They’re finding the holes in your thinking faster than a code review would, because they’re operating at the intent level rather than the implementation level.

I watched a junior developer on a partner team go through this process last month. She described a feature in about two hundred words. The AI came back with seven questions — three revealed edge cases she hadn’t considered, and one identified a conflict with an existing API contract. She spent twenty minutes refining her spec and got implementation-ready code on the first AI pass. Without the questioning step, she estimated she would’ve spent two days finding those issues through debugging.

Team Composition Is Shifting

This is the part that makes some engineers uncomfortable. I think it’s worth being direct about.

If intent articulation becomes the primary skill, the ratio of what teams need changes. I’m seeing early signals of a shift toward something like 60% product judgment, 30% engineering architecture, 10% design precision. Those numbers aren’t scientific; they’re my rough observation from watching teams that have fully embraced AI-assisted workflows.

Product judgment — the ability to understand what users need, what the business requires, and how to translate that into clear specifications — becomes the scarce skill. Engineering architecture still matters enormously; someone needs to define system boundaries, choose infrastructure, and understand performance characteristics. But the line-by-line implementation work — the part that consumed 60-70% of an engineer’s time in the old model — is increasingly handled by AI.

This doesn’t mean engineers become less important. It means the skills that make an engineer valuable shift. The developer who can write a precise spec, evaluate generated code critically, and architect systems that AI agents can implement reliably is going to be far more effective than the developer who can write elegant code from scratch but can’t articulate what that code should do.

I realize that last sentence sounds provocative. I’ve been wrestling with it. My entire career identity is built on being someone who writes good code. Accepting that the most valuable thing I do might shift to writing good specs feels like a loss — even if intellectually I know it’s an evolution.

The Intent-Is-Truth Paradigm

Here’s the conceptual leap that I think matters most.

In the old world, if the spec said one thing and the code did another, the code was right. You’d update the spec (if you bothered) to match the code. The implementation was the source of truth because the implementation was what shipped.

In the new world — and I’m seeing this in real production teams, not just thought experiments — the spec is becoming the source of truth. If the AI-generated code doesn’t match the spec, you regenerate the code. You don’t update the spec to match a faulty implementation. The spec represents validated human intent; the code is a (reproducible, regenerable) expression of that intent.

This has enormous implications for maintenance. When you need to change behavior, you change the spec and regenerate. When you onboard a new team member, they read the spec — not the code. When you review a PR, you’re comparing the generated code against the spec, not evaluating the code on its own merits.

Cursor, Claude Code, and similar tools are already enabling this workflow. Developers describe what they want, AI generates candidates, humans iteratively revise until the output matches intent. The iteration isn’t on the code; it’s on the articulation of what the code should be.

What This Means for Your Team

I’ll resist the urge to turn this into a prescriptive framework (you’re welcome). But a few observations from watching this transition happen in real time.

Teams that try to adopt AI-assisted development without changing their process get marginal improvements. Teams that restructure around intent articulation — investing in spec quality, building review processes around specifications rather than implementations, hiring for product judgment alongside coding ability — get transformational improvements.

The methodology shift is bigger than the tooling shift. Cursor and Claude Code and Kiro are excellent tools, but they’re force multipliers for a practice that has to be adopted deliberately. You can’t just hand developers an AI IDE and expect specs to materialize.

One thing I keep telling teams: start with a single feature. Write the spec. Go through the collaborative intent articulation cycle. See what happens. Most teams I’ve watched do this come back with two reactions: “that was harder than I expected” (writing precise specs is a skill) and “that was worth it” (the output quality surprised them).

The shift from code-as-truth to intent-as-truth isn’t complete. Honestly, I’m not sure it ever will be. There will always be cases where you need to drop into the implementation layer. But the center of gravity is moving — and the developers and teams that recognize this early will have a significant advantage over those who don’t.

We’re not post-code. But we might be post-code-first. And that’s a bigger change than it sounds.