Panel: Global DevRel in the Age of AI Documentation
I spent last week at DevRelCon New York and the shift in conversation was impossible to miss. Two years ago, every panel was about community building and developer experience. This year? Half the sessions were about one question: how do you make your platform visible to AI agents?
The developer discovery pipeline has fundamentally changed. It used to be simple: developer has a problem, searches Google, finds your docs, reads them, evaluates your API. That model still exists, but it’s no longer the primary path for a growing segment of developers. The new path is messier: developer has a problem, asks Claude or ChatGPT, gets a recommendation that either includes your platform or—more likely—doesn’t.
If your documentation isn’t structured for LLM consumption, you’re increasingly invisible. And most platforms’ docs aren’t.
The llms.txt Standard #
Jeremy Howard (Answer.AI) proposed the llms.txt standard in September 2024, and adoption has been faster than I expected. The idea is simple: place a file at /llms.txt on your documentation site—similar to robots.txt but for language models—that provides a structured overview of your platform, its capabilities, and where to find detailed documentation.
By late 2024, thousands of documentation sites had adopted it, including Anthropic and Cursor. The spec defines a Markdown-based format with clear sections: project name, summary, key URLs, and optional detailed information in a companion /llms-full.txt file.
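To make the format concrete, here is a minimal illustrative /llms.txt following the spec’s conventions—an H1 project name, a blockquote summary, and H2 sections of annotated links. The platform name and URLs are invented placeholders, not a real file:

```markdown
# ExampleSearch

> ExampleSearch is a hosted vector search API for building
> semantic search and RAG features. REST API, official SDKs
> for Python and TypeScript.

## Docs
- [Quickstart](https://docs.examplesearch.dev/quickstart.md): create an index and run a first query
- [API Reference](https://docs.examplesearch.dev/api.md): all endpoints, parameters, and error codes

## Optional
- [Changelog](https://docs.examplesearch.dev/changelog.md): release history and deprecations
```

The link descriptions matter: they are what lets a model decide which page to fetch without crawling all of them.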
What makes this work isn’t the format itself (it’s just Markdown); it’s the convention. When LLMs encounter a /llms.txt file, they get a curated entry point to your documentation rather than trying to parse your entire site. The difference in output quality is noticeable—I’ve tested this with Claude across platforms that have llms.txt and ones that don’t, and the recommendations for documented platforms are significantly more accurate and specific.
For DevRel teams, the implication is stark. Creating an llms.txt file takes an afternoon. Not creating one means your platform might not surface in AI-assisted recommendations, and you’ll never know it because there’s no analytics dashboard showing “developers who didn’t find you through an LLM.”
Documentation as Machine-Readable API #
The deeper shift goes beyond llms.txt. Documentation has always been written for human readers—conversational tone, visual hierarchy, progressive disclosure, tutorials that build on each other. LLMs need something different: semantic structure, consistent formatting, self-contained explanations, and machine-readable API descriptions that don’t require navigating a multi-page tutorial to understand.
This doesn’t mean making your docs inhuman. It means making them structured enough that both humans and machines can extract meaning reliably. Meaningful headings (not cute ones), consistent parameter documentation, explicit relationships between concepts, and comprehensive code examples with context.
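As an illustration of what “consistent parameter documentation” looks like in practice, here is a hypothetical endpoint documented so that the section stands alone—a human can scan the table, and a machine can extract every field without visiting another page:

```markdown
## create_index

Creates a new search index and returns its ID. Index names must be
unique within your account.

| Parameter    | Type    | Required | Description                                         |
|--------------|---------|----------|-----------------------------------------------------|
| `name`       | string  | yes      | Unique index name, 3–64 characters.                 |
| `dimensions` | integer | yes      | Vector dimensionality, e.g. 1536.                   |
| `metric`     | string  | no       | `cosine` (default), `dot`, or `euclidean`.          |
```

The same table shape repeated across every endpoint is what makes the docs reliably parseable.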
Mintlify and GitBook have been moving in this direction—building documentation platforms with LLM-readiness as a first-class concern. Their argument (which I find persuasive) is that well-structured docs serve both audiences simultaneously. Clear semantic markup helps LLMs parse your content and helps human readers scan it. Consistent formatting helps retrieval-augmented generation (RAG) systems chunk your docs accurately and helps humans find information quickly.
The teams that are struggling are the ones with documentation spread across Notion, Confluence, GitHub wikis, and inline code comments. When your docs live in five places with five different formats, no LLM can make sense of them—and frankly, neither can most developers.
MCP and the Agent-Driven Ecosystem #
Model Context Protocol (MCP) is changing what “platform visibility” means in an AI-mediated world. When AI agents can directly invoke your API through MCP integration rather than just reading about it, the line between documentation and functionality blurs.
At the AI The Docs conference in March, several speakers made a point I keep thinking about: in an agent-driven ecosystem, your API spec isn’t just documentation. It’s an interface for autonomous systems. The OpenAPI description that used to help developers understand your API now helps AI agents decide whether and how to use it.
This has practical implications for DevRel teams. API descriptions need to be complete enough for an agent to use without human intervention. Error messages need to be informative enough for an agent to self-correct. Authentication flows need to be simple enough for programmatic execution. The bar for API design quality just went up substantially; an agent won’t “figure out” what your confusing error code means the way a frustrated developer eventually will.
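What “complete enough for an agent” means is easiest to see in an OpenAPI fragment. This is a hypothetical spec excerpt, not any real platform’s API; the point is that each error response tells an agent what went wrong and what to do next, rather than assuming a human will read a troubleshooting page:

```yaml
paths:
  /v1/indexes:
    post:
      operationId: createIndex
      summary: Create a search index
      responses:
        "201":
          description: Index created. Body contains the new index ID.
        "409":
          description: >
            An index with this name already exists. The response body's
            `existing_id` field identifies it; reuse that index or retry
            with a different name. Do not retry the same request.
        "429":
          description: >
            Rate limit exceeded. Retry after the number of seconds given
            in the `Retry-After` header.
```

A description like the 409 one above is the difference between an agent that self-corrects and one that loops on the same failing call.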
Self-Healing Documentation Pipelines #
One pattern I saw at DevRelCon that excited me: teams building documentation pipelines that use AI agents to detect and fix inconsistencies automatically. The agent monitors your codebase, compares implementations against published docs, and flags (or auto-fixes) discrepancies.
This addresses the oldest problem in documentation: it decays the moment someone writes it. Code changes, APIs evolve, features get deprecated, but the docs sit there reflecting a reality that no longer exists. AI-powered pipelines don’t eliminate this problem—they reduce the maintenance burden enough that small DevRel teams can keep documentation accurate across large API surfaces.
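The core of such a pipeline—comparing the live implementation against what the docs claim—can be sketched in a few lines. The teams I saw use LLM agents over published doc sites; this is a deliberately minimal stand-in that diffs a function’s signature against its own `:param:` docstring entries, with `fetch` as an invented example:

```python
import inspect
import re

def documented_params(docstring: str) -> set[str]:
    """Parameter names claimed by ':param name:' lines in the docstring."""
    return set(re.findall(r":param (\w+):", docstring or ""))

def signature_params(func) -> set[str]:
    """Parameter names from the live implementation, ignoring self."""
    return {p for p in inspect.signature(func).parameters if p != "self"}

def doc_drift(func) -> dict[str, set[str]]:
    """Flag params present in code but not docs ('undocumented'),
    and params the docs still describe but code no longer has ('stale')."""
    docs = documented_params(func.__doc__)
    code = signature_params(func)
    return {"undocumented": code - docs, "stale": docs - code}

# Hypothetical drift: docs still describe a removed `timeout` parameter
# and miss the new `retries` parameter.
def fetch(url, retries=3):
    """Fetch a URL.

    :param url: target URL
    :param timeout: seconds before giving up
    """

print(doc_drift(fetch))
# → {'undocumented': {'retries'}, 'stale': {'timeout'}}
```

Run in CI against every public function, even this crude check catches the most common decay mode: a signature changed and the docs didn’t.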
The caveat is trust. Automated documentation fixes need human review, at least until the tooling matures enough to guarantee correctness. I’ve seen auto-generated doc patches that introduced subtle errors by misinterpreting implementation changes. A method that changed from synchronous to asynchronous got documented as “faster” rather than “requires await.” The patched sentence was technically accurate (async is often faster) but missed the developer-facing implication: you need to change your calling code.
What This Means for DevRel Teams #
I came out of DevRelCon with three concrete priorities for my own team.
First, llms.txt. If you don’t have one, make it this week. It’s the lowest-effort, highest-impact thing you can do for AI discoverability. Write it like you’re explaining your platform to a smart colleague in three minutes.
Second, audit your documentation structure. Not the content—the structure. Are your headings semantic or decorative? Are your API descriptions self-contained, or do they reference other pages without explanation? Can a RAG system chunk your docs into useful segments, or is critical context spread across pages in ways that chunking would sever?
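One way to run that audit is to chunk your docs the way a RAG system would and read the chunks in isolation. A minimal sketch, assuming heading-delimited Markdown pages: split on H2 sections and prefix each chunk with the page’s H1 title so it still identifies what it describes when retrieved alone.

```python
import re

def chunk_markdown(page: str) -> list[str]:
    """Split a Markdown page into H2-delimited chunks, each prefixed
    with the page's H1 title so it remains meaningful on its own."""
    lines = page.splitlines()
    title = next((l for l in lines if l.startswith("# ")), "")
    chunks, current = [], []
    for line in lines:
        if line.startswith("## ") and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    chunks.append("\n".join(current).strip())
    # Prefix section chunks with the page title for standalone context.
    return [c if c.startswith("# ") else f"{title}\n{c}" for c in chunks if c]

doc = """# Search API

## Authentication
Pass your key in the `X-API-Key` header.

## Rate limits
100 requests per minute per key.
"""
for chunk in chunk_markdown(doc):
    print(chunk)
    print("---")
```

If a chunk produced this way doesn’t make sense without the pages around it, an LLM retrieving it won’t make sense of it either—that’s the signal the audit is looking for.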
Third, think about your API through an agent lens. MCP integration won’t stay optional much longer. If an AI agent can’t use your API programmatically based on your published spec, developers building with AI assistants will choose a competitor whose API the agent can use.
The DevRel role isn’t disappearing. But it’s expanding to include a new audience: the AI systems that mediate between your platform and the developers who might use it. We’ve always optimized for developer experience. Now we need to optimize for agent experience too—and the teams that figure out how to serve both audiences well will have a significant edge.
Documentation used to be the thing DevRel teams maintained reluctantly. It might be the thing that determines platform survival.