Crossing the Chasm: AI's Impact on Team Performance

Felipe Hlibco

Geoffrey Moore’s Crossing the Chasm came out in 1991, yet it describes what’s happening with AI adoption in engineering teams right now with uncomfortable accuracy.

Here’s the framework: new technology gets adopted by innovators and early adopters (the enthusiasts), then hits a chasm. That’s the gap between early adoption and mainstream use that kills most products. Crossing it requires a fundamentally different approach — stop selling to enthusiasts and start solving a specific, painful problem for the pragmatic majority.

Generative AI in 2024? Staring directly into that chasm.

The pilot trap #

Every engineering org I talk to has done AI pilots. “We gave everyone Copilot licenses.” “We ran a hackathon with GPT-4.” “Our docs team is using ChatGPT for first drafts.”

The Stanford AI Index Report from April showed enterprise AI adoption surging — but most of it is experimental. Gartner put generative AI at the Peak of Inflated Expectations in their hype cycle. Translation: maximum enthusiasm, minimal proven results at scale.

Here’s what I keep seeing at DreamFlare and across my network: individual developers use AI tools to produce more drafts, more boilerplate, more first-pass code. Team output volume increases. But outcomes — shipped features, resolved bugs, customer satisfaction — barely move.

Why? Because the team’s using AI to do the same work faster, not to change what work they do.

More drafts versus more outcomes #

A developer using Copilot to autocomplete code 40% faster still has the same PR review process, the same CI pipeline, the same deployment cadence. The bottleneck was never typing speed. It’s decision-making, coordination, and feedback loops.

High-performing teams (and industry research keeps confirming this) are three times more likely to redesign their workflows end-to-end around AI than to bolt it onto existing processes. They aren’t asking “how can AI help me write code faster?” — they’re asking “given that code generation is nearly free, what should our process actually look like?”

That’s a fundamentally different question. The first optimizes a step. The second redesigns the system.

What workflow redesign looks like in practice #

At DreamFlare, we tried something unusual: instead of developers writing code and then writing tests, we inverted the sequence. GPT-4 generates a comprehensive test suite from the spec. The developer reviews and refines those tests. Then the AI generates implementation code that passes them, and the developer reviews the implementation.

The output isn’t “developer writes code faster.” It’s “the definition of quality shifts from code review to spec review.” The bottleneck moves upstream — where it belongs.
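The inverted loop can be sketched as a small orchestration function. This is a minimal illustration, not our production tooling: `llm` stands in for any text-completion call (GPT-4 in our case) and `review` for the human step; both are injected as callables, and the prompt strings and function names are hypothetical.

```python
# Sketch of the spec -> tests -> code loop. `llm` and `review` are
# injected placeholders so the flow itself can run without a real model.

def generate_tests(spec: str, llm) -> str:
    """Step 1: the model drafts a test suite directly from the spec."""
    return llm(f"Write pytest tests for this spec:\n{spec}")

def generate_implementation(tests: str, llm) -> str:
    """Step 3: the model writes code whose only job is passing the tests."""
    return llm(f"Write code that passes these tests:\n{tests}")

def spec_to_code(spec: str, llm, review) -> str:
    """Orchestrate: spec -> tests -> human review -> code -> human review."""
    tests = review(generate_tests(spec, llm))           # Step 2: human refines tests
    return review(generate_implementation(tests, llm))  # Step 4: human reviews code
```

The structural point lives in `review(generate_tests(...))`: if the reviewed tests faithfully encode the spec, the later implementation review is much cheaper, because passing the tests is the bar.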

Another example: instead of one developer writing documentation after a feature ships (documentation nobody reads because it’s always out of date), we generate documentation continuously from commit messages, PR descriptions, and code comments. A developer reviews a weekly summary rather than writing from scratch. Documentation becomes a byproduct, not a task.
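The aggregation step is the simple part, which is why it works. A toy sketch: a real pipeline would pull messages from `git log` and PR descriptions via your VCS API, and the Conventional-Commits prefixes (`feat:`, `fix:`) are an assumption about message style, not a requirement.

```python
from collections import defaultdict

# Hypothetical sketch: group a week of commit messages into a summary
# a human reviews, instead of writing docs from scratch.

def weekly_summary(commit_messages: list[str]) -> str:
    groups = defaultdict(list)
    for msg in commit_messages:
        kind, sep, rest = msg.partition(":")
        if sep and rest.strip():
            groups[kind.strip()].append(rest.strip())  # e.g. "feat: ..." -> feat
        else:
            groups["other"].append(msg.strip())        # unprefixed messages
    lines = []
    for kind in sorted(groups):
        lines.append(f"## {kind}")
        lines.extend(f"- {item}" for item in groups[kind])
    return "\n".join(lines)
```

The human’s job shifts from authoring to curating: scan the grouped summary once a week, fix what’s wrong, delete what’s noise.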

Neither change is about individual productivity. They’re about reorganizing who does what and when.

Measuring the wrong thing #

Most teams measure AI impact with activity metrics: lines of code generated, time saved per task, number of PRs per sprint. These reward volume.

What Moore’s framework tells us — and what data increasingly confirms — is that the chasm is crossed by focusing on business outcomes: features shipped that customers actually use, bugs prevented rather than bugs fixed faster, time-to-market for revenue-generating capabilities.

A team producing 50% more PRs per sprint hasn’t crossed the chasm. A team shipping the same number of features in half the time with fewer bugs? That team has redesigned its workflow. The AI isn’t a speed boost; it’s a structural change.
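The two lenses can be made concrete with a toy scoring sketch. Every metric name, weight, and number below is illustrative, invented for this example rather than drawn from any real framework or dataset:

```python
# Toy sketch: the same team snapshot scored two ways. All values invented.

def activity_score(s: dict) -> float:
    # Rewards volume: more PRs and more generated code look like progress.
    return s["prs_per_sprint"] + s["loc_generated"] / 1000

def outcome_score(s: dict) -> float:
    # Rewards results: adopted features, bugs that never shipped,
    # shorter time-to-market.
    return (s["features_adopted"] * 10
            - s["escaped_bugs"] * 5
            - s["days_to_market"])

before = {"prs_per_sprint": 20, "loc_generated": 50_000,
          "features_adopted": 4, "escaped_bugs": 6, "days_to_market": 30}
after = {"prs_per_sprint": 30, "loc_generated": 120_000,  # 50% more PRs
         "features_adopted": 4, "escaped_bugs": 6, "days_to_market": 30}

# Activity jumps while outcomes stay flat: the team got faster, not better.
```

On the activity lens the team looks transformed; on the outcome lens nothing happened. Which lens you report to leadership determines whether the pilot trap stays invisible.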

The uncomfortable part #

Redesigning workflows means changing people’s roles. That’s politically difficult and emotionally charged.

When I suggest AI should write the first draft of tests, some QA engineers hear “you’re being replaced.” When I suggest documentation should be auto-generated, technical writers worry about their positions. These concerns aren’t irrational — they’re responses to real uncertainty.

The honest answer? Roles will change. A QA engineer reviewing and refining AI-generated tests does different (and arguably more valuable) work than one writing tests from scratch. A technical writer curating and ensuring accuracy of auto-generated docs is an editor, not a writer. Some people thrive in these new roles; others don’t.

As a CTO, I think the worst thing you can do is pretend this isn’t happening. The second worst is rushing it without bringing your team along. The middle path — acknowledging the change, providing time to adapt, measuring outcomes rather than activity — is where I’m trying to operate.

Crossing or falling #

Moore’s original insight: most technologies fail to cross the chasm. They get stuck in early-adopter land, generating excitement and pilots but never reaching mainstream productive use.

AI in engineering teams is at that decision point right now. Organizations that cross will be the ones that treated AI not as a tool to bolt onto existing processes, but as a reason to rethink the processes themselves.

The ones staying in pilot mode? They’ll have impressive demos, enthusiastic hackathon results, and the same shipping velocity they had before.