Mastra: Empowering JS Devs to Build Agents in TS

Felipe Hlibco

If you’re a TypeScript developer who’s tried to build AI agents, you’ve probably felt the gravitational pull toward Python. LangChain, CrewAI, AutoGen—the mature agent frameworks all live in Python land. The JavaScript side has LangChain.js (a port, not a native design) and the Vercel AI SDK (great for streaming, thin on agent orchestration). The gap is real, and honestly? It’s been frustrating to watch.

Mastra is a new framework that aims to close it. Built by Abhi Aiyer, Sam Bhagwat, and Shane Thomas—the team behind Gatsby.js—it’s a TypeScript-first agent framework that just joined Y Combinator this month. I’ve been poking at it for the past week, and I want to share what I’ve found.

The Gatsby Pedigree

The founding team matters here. The Gatsby crew knows developer tooling intimately. They built a framework that millions of developers used daily (for better or worse—Gatsby’s build times were a meme for a reason). They understand DX, documentation, and the difference between a cool demo and a tool people actually ship with.

They started building Mastra in October 2024 and got into YC for the January 2025 batch. That’s a fast timeline, but the codebase shows more maturity than you’d expect for three months of work. My guess? They’d been noodling on the architecture well before the first commit.

What Mastra Actually Is

At its core, Mastra gives you primitives for building AI agents in TypeScript: agents, workflows, memory, streaming, evals, and tracing. It’s open source under Apache 2.0.

Here’s a basic agent definition:

import { openai } from "@ai-sdk/openai";
import { Agent } from "@mastra/core/agent";

const researcher = new Agent({
  name: "researcher",
  instructions: "You are a research assistant. Find and summarize information.",
  model: openai("gpt-4-turbo"),
  // Tools are passed as a keyed object, defined elsewhere with createTool
  tools: { webSearch, summarize },
});

const result = await researcher.generate("What happened with AI agents in 2024?");

If you’ve used LangChain in Python, this will feel familiar. The difference is that everything is typed end-to-end. Your tool definitions have typed inputs and outputs. Your workflow steps have typed connections. The compiler catches wiring mistakes at build time rather than at runtime when an agent tries to pass a string where an object was expected.

This matters more than it sounds. Agent workflows get complex fast, and debugging runtime type mismatches in a multi-step agent pipeline is genuinely painful. I’ve done it in Python. I don’t want to do it again.
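The end-to-end typing can be sketched in plain TypeScript. This is an illustration of the pattern with a hypothetical `defineTool` helper, not Mastra's actual `createTool` signature:

```typescript
// A generic typed-tool shape: the input and output types travel with the tool.
interface Tool<In, Out> {
  name: string;
  execute: (input: In) => Promise<Out>;
}

// Identity helper that preserves the inferred In/Out types at the call site.
function defineTool<In, Out>(tool: Tool<In, Out>): Tool<In, Out> {
  return tool;
}

const webSearch = defineTool({
  name: "webSearch",
  execute: async (input: { query: string }) => ({
    results: [`stub result for: ${input.query}`],
  }),
});

// webSearch.execute("AI agents")            // rejected at compile time
// webSearch.execute({ query: "AI agents" }) // OK: Promise<{ results: string[] }>
```

Because `In` and `Out` are inferred, wiring `webSearch` into a workflow step that expects a different shape fails at build time, which is exactly the class of bug that surfaces at runtime in untyped pipelines.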

Mastra Studio

The standout feature for me is Mastra Studio—a local playground that ships with the framework. You start it up and get a visual interface for your agents, their tools, their workflows, and their execution traces.

This fills a gap I’ve felt acutely. When you’re developing agents, the feedback loop is slow. You write a prompt, run the agent, read through logs, adjust, repeat. Studio gives you a visual trace of each step: what the agent decided, what tool it called, what the tool returned, what the agent did next. It’s the kind of observability that Python agent frameworks are only now starting to add.
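To make that concrete, one step of such a trace carries roughly this shape. This is my own sketch of the data involved, not Studio's actual schema:

```typescript
// Hypothetical shape of a single execution-trace step, for illustration only.
interface TraceStep {
  step: number;
  decision: string;     // what the agent decided to do
  toolCalled?: string;  // which tool it invoked, if any
  toolInput?: unknown;  // what the agent passed to the tool
  toolOutput?: unknown; // what the tool returned
  nextAction: "call_tool" | "respond" | "hand_off";
}

const example: TraceStep = {
  step: 1,
  decision: "Need current information, searching the web",
  toolCalled: "webSearch",
  toolInput: { query: "AI agents 2024" },
  toolOutput: { results: ["..."] },
  nextAction: "respond",
};
```

Reading a handful of these per run is far faster than grepping logs for the same information.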

It’s not production monitoring—it’s a development tool. But it dramatically tightens the development loop.

Framework Integration

Mastra integrates with the frameworks TypeScript developers actually use: Next.js, Express, Hono. Deployment targets include Vercel, Cloudflare Workers, and Netlify. This is smart because the deployment story has been one of the weakest parts of the agent ecosystem. Python agents tend to run on dedicated servers or containers; TypeScript agents can ride the existing serverless infrastructure that most web teams already have.

// In a Next.js API route
import { mastra } from "@/lib/mastra";

export async function POST(req: Request) {
  const { query } = await req.json();
  const agent = mastra.getAgent("researcher");
  const result = await agent.generate(query);
  // Return just the generated text rather than the full result object
  return Response.json({ text: result.text });
}

There’s something appealing about deploying an AI agent the same way you deploy an API endpoint. The operational overhead drops significantly.

What’s Missing (For Now)

Mastra is early. Very early. The documentation has gaps. Some workflow patterns that are straightforward in LangChain require workarounds. The memory system is functional but less mature than what you’d find in the Python ecosystem.

The eval system—for testing agent behavior against expected outcomes—exists but needs more battle-testing. Agent evals are one of the hardest problems in this space (how do you measure whether an agent’s open-ended response is “good”?), and I’d be surprised if any framework had this truly solved at this stage.
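For a flavor of what a minimal eval looks like, here is a keyword-coverage scorer. This is my own sketch, not Mastra's eval API:

```typescript
// Minimal keyword-coverage eval: the fraction of expected terms present in
// the response. Crude, but it catches regressions in factual coverage.
function keywordCoverage(response: string, expected: string[]): number {
  const text = response.toLowerCase();
  const hits = expected.filter((k) => text.includes(k.toLowerCase()));
  return expected.length === 0 ? 1 : hits.length / expected.length;
}

const score = keywordCoverage(
  "In 2024, agent frameworks added tool calling and memory.",
  ["tool calling", "memory", "evals"],
);
// score is 2/3 here: "evals" is missing from the response
```

Heuristics like this only go so far; the hard part is scoring open-ended quality, and that usually means LLM-as-judge evals, which bring their own reliability problems.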

Multi-agent coordination is supported but rough around the edges. If you need multiple agents collaborating on a complex task with shared state, you’ll be writing more glue code than you’d like. This will improve; it’s just early.
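The glue code in question tends to look like this today: a hand-rolled pipeline threading shared state through each agent. This is a pure-TypeScript sketch with stub agents standing in for real ones, not Mastra's coordination API:

```typescript
// Shared state passed between agents; each agent reads it and extends it.
type SharedState = { task: string; notes: string[] };

type AgentFn = (state: SharedState) => Promise<SharedState>;

// Stubs standing in for real agents -- the glue is the point here.
const researcherStub: AgentFn = async (state) => ({
  ...state,
  notes: [...state.notes, `findings for: ${state.task}`],
});

const writerStub: AgentFn = async (state) => ({
  ...state,
  notes: [...state.notes, `draft based on ${state.notes.length} note(s)`],
});

// Run agents in sequence, threading the state through each one.
async function runPipeline(agents: AgentFn[], initial: SharedState): Promise<SharedState> {
  let state = initial;
  for (const agent of agents) state = await agent(state);
  return state;
}

const finalState = runPipeline([researcherStub, writerStub], {
  task: "AI agents in 2024",
  notes: [],
});
```

Every team building multi-agent systems ends up writing some version of this loop; the question is how much of it the framework eventually absorbs.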

Why I Think It Matters

The TypeScript AI agent gap isn’t just a tooling inconvenience. It’s a talent bottleneck.

Most web engineers—the largest developer population on earth—write TypeScript. When the only mature agent frameworks are in Python, you’re telling those engineers to either learn a new ecosystem or wait. Some switch. Most wait. And the organizations they work for lose months of potential experimentation.

Mastra doesn’t need to be better than LangChain to succeed. It needs to be good enough for TypeScript developers to build real agent applications without leaving their ecosystem. From what I’ve seen so far, it’s getting there.

The Gatsby team built a framework once before that changed how an entire community worked. They have the scars (and the instincts) from that experience. I’m watching this one closely.