Dapr's Acceptance into the CNCF: Decoupling Logic from Infrastructure

Felipe Hlibco

Last week the CNCF TOC voted to accept Dapr as an incubating project. I’ve been watching Dapr since Microsoft launched it in 2019 — mostly with skepticism, if I’m honest — and this feels like the right moment to talk about what makes it different from the dozen other “cloud-native” projects begging for attention.

The short version? Dapr doesn’t try to be a platform. It’s a set of building blocks that sit between your application and whatever infrastructure you happen to be running on. That distinction matters more than it sounds.

The sidecar bet

Dapr runs as a sidecar alongside your application. On Kubernetes, it’s injected as a container in your pod. On bare metal or a VM, it’s just a local process. Either way, your app talks to it over HTTP or gRPC on localhost.

This was the first thing that caught my attention. Most distributed systems frameworks want you to import a library, inherit from a base class, adopt their SDK. Dapr doesn’t care what language you’re using. Your Go service, your Python service, that legacy Java monolith nobody wants to touch — they all talk to the same sidecar API.

```yaml
# A Dapr component definition for pub/sub
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
    - name: redisHost
      value: "localhost:6379"
```

Swap pubsub.redis for pubsub.kafka or pubsub.azure.servicebus and your application code doesn’t change. Not one line. The component definition is infrastructure configuration; your app just calls POST /v1.0/publish/pubsub/orders and moves on.
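Here’s a minimal sketch of what that call looks like from application code, using only the Python standard library. It assumes a sidecar listening on the default HTTP port 3500 (in practice Dapr injects the actual port via the `DAPR_HTTP_PORT` environment variable) and the `pubsub` component defined above:

```python
import json
import os
import urllib.request

# The sidecar listens on localhost; 3500 is Dapr's default HTTP port,
# overridable via the DAPR_HTTP_PORT environment variable.
DAPR_PORT = os.environ.get("DAPR_HTTP_PORT", "3500")


def publish_url(pubsub_name: str, topic: str) -> str:
    """Build the Dapr v1.0 publish endpoint for a pub/sub component and topic."""
    return f"http://localhost:{DAPR_PORT}/v1.0/publish/{pubsub_name}/{topic}"


def publish(pubsub_name: str, topic: str, payload: dict) -> None:
    """POST an event to the sidecar; Dapr routes it to whatever broker
    the component configuration names. No broker SDK in sight."""
    req = urllib.request.Request(
        publish_url(pubsub_name, topic),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on a non-2xx response


# Requires a running sidecar:
#   publish("pubsub", "orders", {"orderId": 123})
```

Note what’s absent: no Redis client, no Kafka client, no connection strings. The only coupling is the component name and the topic.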

Building blocks, not frameworks

Dapr organizes capabilities into what it calls “building blocks.” Each addresses a common distributed systems headache:

  • Service invocation — call other services by name with retries, mTLS, and observability built in
  • State management — key/value storage with pluggable backends (Redis, Cosmos DB, PostgreSQL, DynamoDB)
  • Pub/sub messaging — publish and subscribe with at-least-once delivery
  • Bindings — trigger your app from external systems or invoke them from your app
  • Secrets — retrieve secrets from vaults without vendor-specific SDKs
  • Actors — virtual actor model (I have mixed feelings about actors in most business apps, but they’re there if you need them)

The pattern is consistent. Each building block exposes a stable HTTP/gRPC API. The implementation behind that API is swappable via configuration. Your business logic talks to abstractions; the sidecar handles the messy details.

I’ve watched teams try to build this portability layer themselves. It rarely ends well. The abstraction either leaks immediately or becomes so generic it’s useless. Dapr’s trick is that the abstraction boundary is enforced by a process boundary. Your code literally cannot reach into Redis internals because Redis isn’t in your process.

Why CNCF incubation matters

Dapr hit 1.0 in February 2021. The project has a steering committee with members from Alibaba, Intel, and Microsoft. Production adoption has been growing — Alibaba runs Dapr at scale for their Double 11 shopping festival, which is about as brutal a load test as you’ll find.

But CNCF incubation signals something beyond maturity. It says the cloud-native ecosystem considers application-level abstractions part of its remit — not just infrastructure primitives like container runtimes and service meshes.

Think about where Kubernetes sits. It solves scheduling, networking, storage orchestration. Service meshes like Istio and Linkerd handle network-level concerns (mTLS, traffic management, observability). Dapr sits one layer up: it handles application-level concerns that have nothing to do with packets or pods.

This is the layer most teams still build by hand. Every time you write a wrapper around your message broker, or build retry-with-backoff logic, or create an abstraction over your state store — you’re reimplementing what Dapr provides out of the box.
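For comparison, the service-invocation building block replaces that hand-rolled retry wrapper with a single URL convention: the sidecar resolves the target by name and applies mTLS, retries, and tracing on the way. A sketch, assuming a hypothetical service registered with Dapr app-id `inventory`:

```python
import os
import urllib.request

DAPR_PORT = os.environ.get("DAPR_HTTP_PORT", "3500")


def invoke_url(app_id: str, method: str) -> str:
    """Dapr service-invocation endpoint: call another service by app-id,
    not by host/port. Resolution and resilience live in the sidecar."""
    return f"http://localhost:{DAPR_PORT}/v1.0/invoke/{app_id}/method/{method}"


def invoke(app_id: str, method: str) -> bytes:
    """GET a method on another Dapr-enabled service. No service discovery,
    no retry loop, no TLS handling in the caller."""
    with urllib.request.urlopen(invoke_url(app_id, method)) as resp:
        return resp.read()


# Requires running sidecars on both ends:
#   invoke("inventory", "stock/sku-123")
```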

The portability argument

I work at Google. Let me be honest about something: cloud vendor lock-in is real, and it benefits companies like my employer. When your pub/sub is SNS, your state store is DynamoDB, your secrets live in AWS Secrets Manager, and your functions are Lambda — migrating to GCP or Azure isn’t a weekend project. It’s a rewrite.

Dapr doesn’t eliminate lock-in entirely. You still deploy somewhere. But it makes the application code genuinely portable. That component swap I showed earlier isn’t theoretical; teams actually do this. Dev environment uses Redis, staging uses GCP Pub/Sub, production uses Kafka. Same binary.
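That swap is just a second component file with the same name. A sketch for the Kafka variant — the broker address and consumer group are placeholders, and the exact metadata fields should be checked against the Dapr component docs for your version:

```yaml
# Same component name, different backend: the app still publishes to "pubsub"
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
    - name: brokers
      value: "kafka-broker:9092"   # placeholder address
    - name: consumerGroup
      value: "orders-app"          # placeholder group id
```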

I’ve seen this pattern with database ORMs, and it usually falls apart because the abstraction can’t handle vendor-specific optimizations. Dapr’s building blocks are deliberately narrow — state management is key/value with optional concurrency and transactions, pub/sub is topics with subscriptions. They don’t try to expose every feature of every backend.

What I’m watching

Two things interest me going forward.

First, the multi-runtime microservices pattern Bilgin Ibryam wrote about. Dapr is the most complete implementation: your business logic in one runtime, infrastructure capabilities in another. It’s a clean separation that reminds me of the Unix philosophy — do one thing well, compose from there.

Second, how the community handles the tension between simplicity and capability. Dapr’s building blocks are intentionally limited. That’s a feature when you’re starting out, a frustration when you need something the abstraction doesn’t expose. The escape hatch is custom components, but I haven’t seen enough production examples to know if that actually scales.

Should you use it?

If you’re building distributed applications on Kubernetes and you’re tired of writing the same pub/sub wrapper for the third time — yes. Take a serious look.

If you’re running a monolith that works fine, don’t let the hype cycle push you into distributed systems you don’t need. Dapr makes distributed systems easier. It doesn’t make them easy.

The CNCF incubation gives Dapr institutional credibility it didn’t have before. Combined with 1.0 stability and real production usage at scale, I think it’s past the “interesting experiment” phase. It’s a tool worth investing time in — just make sure you actually have the problems it solves.