Infrastructure from Code: The Death of Traditional IaC?

Felipe Hlibco

Every couple of years, someone declares that Terraform is dead. The replacement changes — Pulumi, CDK, now “Infrastructure from Code” — but the thesis stays the same: writing infrastructure definitions separately from application code is busywork, and a sufficiently smart tool should just infer the infrastructure from the code itself.

It’s a compelling idea. It’s also mostly wrong, at least in 2024.

I’ve spent the past few months evaluating IfC tools for DreamFlare, and the landscape tells an interesting story about what happens when elegant abstractions meet production reality.

What IfC actually means #

Traditional Infrastructure as Code means you write explicit definitions: Terraform files, CloudFormation templates, Pulumi programs. You describe what you want — a Lambda function, an S3 bucket, a VPC — and the tool provisions it.

Infrastructure from Code flips this. You write application code using a specific SDK or framework, and the tool analyzes your code to infer what infrastructure you need. Use a queue in your code? The tool detects it and provisions SQS (or equivalent) at deploy time. Reference a database? Provisioned automatically.
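The inference step can be sketched in a few lines. This is a toy illustration, not any real tool's logic: the SDK call names (`queue(`, `bucket(`, `api(`) and the mapping to cloud resources are invented for this example.

```python
import re

# Hypothetical SDK calls mapped to the cloud resources an IfC tool
# would provision for them (a queue call implies SQS, and so on).
SDK_RESOURCES = {
    r"\bqueue\(": "sqs_queue",
    r"\bbucket\(": "s3_bucket",
    r"\bapi\(": "api_gateway",
}

def infer_infrastructure(source: str) -> list[str]:
    """Scan application source and return the inferred resource list."""
    found = []
    for pattern, resource in SDK_RESOURCES.items():
        if re.search(pattern, source):
            found.append(resource)
    return found

app_code = """
orders = queue("orders")
uploads = bucket("uploads")
"""

print(infer_infrastructure(app_code))  # ['sqs_queue', 's3_bucket']
```

Real tools do this with proper static analysis or runtime introspection rather than regexes, but the shape is the same: the application source is the only input, and the resource plan is the output.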

The appeal is obvious. No more context-switching between app code and infra definitions. No more drift between what your code expects and what your infrastructure provides. The infrastructure becomes a consequence of the application, not a separate concern.

Sounds great on paper.

The four approaches #

IfC tools have clustered into four categories:

SDK-based tools like Ampt and Nitric give you a custom SDK. You import their libraries, use their abstractions for queues, storage, APIs, and the tool figures out what to provision. Nitric is probably the most mature in this category; it maps to multiple clouds and keeps the abstractions fairly clean.
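One common mechanism behind SDK-based tools: resources register themselves when the application module is imported, so the deploy tool can enumerate them without static analysis. The sketch below is illustrative only — the `Queue`/`Bucket` classes and the registry are invented for this example, not the Nitric or Ampt API.

```python
# A registry the deploy tool inspects after importing the app module.
DECLARED_RESOURCES: list["Resource"] = []

class Resource:
    def __init__(self, kind: str, name: str):
        self.kind, self.name = kind, name
        DECLARED_RESOURCES.append(self)  # registration happens at import time

class Queue(Resource):
    def __init__(self, name: str):
        super().__init__("queue", name)

    def send(self, message: dict) -> None:
        ...  # at runtime, backed by SQS / Pub/Sub / Service Bus

class Bucket(Resource):
    def __init__(self, name: str):
        super().__init__("bucket", name)

# --- application code: just uses the SDK ---
orders = Queue("orders")
images = Bucket("images")

# --- deploy tool: enumerates what the app declared ---
plan = [(r.kind, r.name) for r in DECLARED_RESOURCES]
print(plan)  # [('queue', 'orders'), ('bucket', 'images')]
```

The same `Queue` object serves double duty: at deploy time it's a declaration, at runtime it's a client. That dual role is what lets these tools map one codebase to multiple clouds.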

Annotation-based tools like Klotho took a different approach: you annotate existing code with comments or decorators, and the tool reads those annotations to generate infrastructure. Write @klotho::expose above an Express route, and Klotho generates the API Gateway config. Clever idea.
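A minimal sketch of the annotation-reading step, modeled loosely on Klotho's `@klotho::expose` style — the `@infra::` grammar here is made up for illustration:

```python
import re

# Matches comment annotations like "# @infra::expose path=/orders".
ANNOTATION = re.compile(r"#\s*@infra::(\w+)(?:\s+(.*))?")

def parse_annotations(source: str) -> list[dict]:
    """Collect @infra::<verb> annotations plus the code line they decorate."""
    results = []
    lines = source.splitlines()
    for i, line in enumerate(lines):
        m = ANNOTATION.search(line)
        if m:
            results.append({
                "verb": m.group(1),
                "args": m.group(2) or "",
                "target": lines[i + 1].strip() if i + 1 < len(lines) else "",
            })
    return results

app = """
# @infra::expose path=/orders
def list_orders():
    ...
"""

print(parse_annotations(app))
```

Notice that the annotation syntax is itself a small configuration language — the verb, its arguments, their legal combinations all have to be documented and learned, which foreshadows why this approach struggled.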

Hybrid tools like Encore and Shuttle blend framework conventions with infrastructure inference. Encore gives you a Go framework where the structure of your code — packages, function signatures, struct types — determines the infrastructure. Shuttle did something similar for Rust.

New-language tools like Wing and DarkLang went furthest: they created entirely new programming languages designed from the ground up to express both application logic and infrastructure simultaneously.

The market shakeout #

Here’s where the story gets grim. As of early 2024, the IfC landscape has seen significant attrition.

Wing — the new-language approach backed by Elad Ben-Israel, creator of the AWS CDK — ceased active development. Building a new programming language is extraordinarily hard; building one that developers will actually adopt for production workloads is nearly impossible. Wing’s syntax was elegant, but the ecosystem was too thin: no mature libraries, no Stack Overflow answers, no battle-tested patterns.

Klotho’s annotation-based approach also stalled. The repository went quiet. The problem with annotations is that they’re essentially a second configuration language hidden inside comments — you end up maintaining infrastructure definitions anyway, just in a less visible format.

DarkLang pivoted away from its original vision. Shuttle narrowed its focus to Rust deployments on its own platform.

The survivors? Nitric is still active, maintaining a clean abstraction layer across AWS, Azure, and GCP. Encore continues to evolve, though it’s tightly coupled to its own hosting platform. Ampt is around but seems focused on a narrow use case.

Why traditional IaC survives #

The IfC market’s contraction isn’t about bad engineering. These were talented teams solving real problems. The issue is structural.

Complex architectures resist inference. When your application spans fifteen microservices, three databases, a message bus, two CDNs, and a custom networking layer with VPC peering across accounts — no tool can reliably infer all of that from application code. The infrastructure decisions at that scale are architectural decisions that don’t live in any single codebase.

Cloud-specific features matter. IfC tools abstract away cloud primitives, which works until you need a specific feature. DynamoDB global tables with particular consistency settings. CloudFront functions versus Lambda@Edge. Aurora Serverless v2 scaling configurations. The moment you need fine-grained control, the abstraction becomes a cage.

Debugging. When something goes wrong in production — and something always goes wrong in production — you need to understand your infrastructure explicitly. If a tool generated your CloudFormation stack from code inference, debugging requires understanding both the tool’s inference logic and the underlying cloud resources. That’s more complexity, not less.

Organizational boundaries. In most companies larger than a startup, the team writing application code and the team managing infrastructure are different people with different concerns. IfC assumes these are the same person. That’s true at a five-person startup; it’s rarely true at a hundred-person company.

Where IfC actually works #

I’m not dismissing the entire category. IfC tools solve a real problem for a specific context: small teams building new applications on standard architectures.

At DreamFlare, we considered Nitric for a new microservice. The architecture was straightforward — an API, a queue, a database, a scheduled job. Nitric could handle all of that with minimal config. For that scope, writing Terraform would’ve been overkill.

The pattern I see working is IfC for prototyping and early-stage products, with a planned migration to traditional IaC as complexity grows. Think of it like scaffolding: useful to get the structure up, removed when the building needs to stand on its own.

The provocative answer #

Is traditional IaC dead? No. Not even close.

Terraform has problems — the state file alone drives people to therapy — but it survives because it solves the right problem at the right abstraction level for production systems. You describe what you want explicitly. The tool makes it happen. When it breaks, you can see what went wrong.

IfC tools solve a different problem — reducing boilerplate for simple deployments — and they’re good at it. But calling that “the death of IaC” is like calling a dishwasher the death of cooking. They’re related but they serve different needs.

The future is probably boring: traditional IaC for production, IfC-style ergonomics built into IaC tools themselves (Pulumi is already heading this direction), and new-language approaches remaining a niche. The most likely outcome is convergence, not replacement.