Sustainable Software: Measuring our Carbon Footprint

Felipe Hlibco

I spent most of my career thinking about software performance in terms of latency, throughput, and cost. CPU cycles per request. P99 response times. Monthly AWS bills. Those are the metrics that show up in dashboards; those are the numbers managers care about.

Carbon emissions never made the list.

That’s starting to change, and I think it’s worth paying attention — not because sustainability is trendy (though it is) but because the numbers are genuinely staggering once you look at them.

The Numbers #

The ICT sector accounts for roughly 2-3% of global greenhouse gas emissions. That’s comparable to aviation. And unlike aviation, software’s footprint is growing; the appetite for compute doubles every few years while airlines at least have a theoretical ceiling on fleet size.

Staying on a trajectory aligned with the Paris Climate Agreement would require the sector to cut its greenhouse gas emissions by roughly 45% by 2030. We’re not on track. Most engineering teams I’ve worked with don’t even know the target exists.

What makes software emissions tricky to reason about is that they’re invisible. You can see a factory’s smokestack. You can’t see the carbon cost of a poorly optimized database query running 10 million times a day across three availability zones. But the emissions are real — they come from the electricity powering the servers, the cooling systems keeping data centers at temperature, and (less obviously) the manufacturing of the hardware itself.

Operational vs. Embodied Emissions #

This distinction matters, and I didn’t fully appreciate it until recently.

Operational emissions are what most people think about: the energy consumed while your software runs. Servers draw power; that power comes from a grid; that grid has a carbon intensity based on its energy mix. Running a service in a region powered by hydroelectric is fundamentally different from running it in a coal-heavy region, even if the code is identical.

Embodied emissions are the carbon cost of manufacturing, transporting, and eventually disposing of the physical hardware your software runs on. Building a server produces carbon before it processes a single request. This component is harder to measure but it’s not negligible — for some workloads (low-utilization servers, infrequently used hardware) embodied emissions can actually exceed operational emissions over the device’s lifetime.

I find that counterintuitive. We tend to think of hardware as a sunk cost, both financially and environmentally. But every server sitting idle in a data center represents embodied carbon that isn’t being amortized across useful work.

The Green Software Foundation #

In May 2021, Microsoft, Accenture, GitHub, and ThoughtWorks launched the Green Software Foundation under the Linux Foundation umbrella. I’m cautiously optimistic about it.

Cautiously, because I’ve seen enough industry foundations that produce whitepapers and not much else. Optimistic, because the founding members include companies that actually run significant infrastructure and have the leverage to change practices at scale.

The foundation’s stated mission is to create a trusted ecosystem of people, standards, tooling, and best practices for building green software. Three months in, what they’ve shipped is a set of principles and an early specification for measuring software carbon intensity. It’s not a finished product, but it’s a start.

The principles themselves are worth internalizing. They boil down to three strategies:

Energy efficiency. Use less electricity to deliver the same work. This is the one engineers already understand intuitively; it’s just performance optimization with a different framing.

Carbon awareness. Not all electricity is equal. Running batch jobs when the grid is dominated by renewables (typically midday in solar-heavy regions) rather than during peak fossil fuel hours can materially reduce emissions from the same workload.

Hardware efficiency. Maximize the useful work extracted from each piece of physical hardware. Higher utilization rates amortize embodied emissions across more compute. This one conflicts with the “over-provision for safety” instinct that most infrastructure teams have, which makes it interesting.

The SCI Specification #

The Software Carbon Intensity specification is the foundation’s attempt to give us a unit of measurement. You can’t improve what you can’t measure, and right now most teams have no way to quantify their software’s carbon impact.

The formula (still being refined) looks roughly like this:

SCI = ((E * I) + M) per R

Where:

  • E = Energy consumed by the software
  • I = Location-based marginal carbon intensity of the electricity
  • M = Embodied emissions allocated to the software
  • R = Functional unit (per user, per transaction, per API call)

The “per R” part is what makes it actionable. Absolute numbers are hard to compare across services of different sizes; intensity per functional unit lets you benchmark and track trends. Your SCI score should go down as you optimize, regardless of whether your traffic goes up.
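To make the arithmetic concrete, here’s a minimal sketch of the formula in Python. Every number in it is an illustrative placeholder, not a real measurement, and the function name is mine, not the spec’s:

```python
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: int) -> float:
    """Software Carbon Intensity: ((E * I) + M) per R, in gCO2e per unit."""
    operational_g = energy_kwh * intensity_g_per_kwh  # E * I
    return (operational_g + embodied_g) / functional_units

# Hypothetical service: 120 kWh consumed over the measurement window,
# grid intensity of 450 gCO2e/kWh, 2 kg of embodied carbon allocated,
# 1,000,000 API calls served in that window.
score = sci(energy_kwh=120, intensity_g_per_kwh=450,
            embodied_g=2000, functional_units=1_000_000)
print(f"{score:.3f} gCO2e per API call")  # 0.056
```

The hard part, of course, isn’t the division; it’s getting trustworthy values for E, I, and M in the first place.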

It’s early days for this specification. The methodology for calculating E, I, and M individually is still being debated. But having any framework is better than the status quo, which is guessing (or more commonly, not thinking about it at all).

What You Can Actually Do Today #

I’ve been poking at this for a few weeks now, and here’s where I’ve landed.

Measure first. Cloud Carbon Footprint is an open-source tool that estimates emissions from your cloud provider usage. It works with AWS, GCP, and Azure. The estimates are rough — they’re based on billing data and published power usage effectiveness (PUE) ratios — but rough numbers beat no numbers.

Pick your regions intentionally. Most teams choose cloud regions based on latency to users or data residency requirements. Adding carbon intensity to that decision matrix isn’t hard. Google publishes hourly carbon intensity data for its regions; AWS and Azure have similar (if less granular) information. If two regions serve your users equally well, pick the cleaner one.
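As a sketch of what “adding carbon to the decision matrix” looks like in code: filter to the regions that already satisfy your latency and residency constraints, then break the tie on intensity. The intensity figures below are made up for illustration; real values would come from your provider’s published data.

```python
# Hypothetical average grid carbon intensity in gCO2e/kWh per region.
# These numbers are placeholders, not real provider data.
REGION_INTENSITY = {
    "us-east-1": 415,
    "eu-west-1": 316,
    "eu-north-1": 29,  # hydro-heavy grid
}

def pick_region(candidates: list[str]) -> str:
    """Among regions that already meet latency/residency requirements,
    choose the one with the lowest carbon intensity."""
    return min(candidates, key=REGION_INTENSITY.__getitem__)

print(pick_region(["us-east-1", "eu-west-1"]))  # eu-west-1
```

The point isn’t the three-line function; it’s that the tiebreaker costs nothing once the data is in front of you.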

Right-size your infrastructure. This one’s just good engineering regardless of sustainability goals. Over-provisioned instances waste money and emit carbon for compute that never gets used. Auto-scaling policies, spot instances for batch work, and regular capacity reviews all reduce waste.

Shift workloads temporally. Batch processing, CI/CD pipelines, data analytics jobs — anything that doesn’t need to run at a specific time can be scheduled for low-carbon-intensity windows. This is the “carbon awareness” principle in practice. It sounds exotic but it’s really just cron with a different objective function.
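“Cron with a different objective function” can be sketched in a few lines: given an intensity forecast and a deadline, pick the cleanest hour that still meets the deadline. The forecast here is fabricated (a flat grid with a midday solar dip); a real version would pull from a grid-data API.

```python
# Hypothetical 24-hour forecast, gCO2e/kWh by hour of day:
# 500 all day, dipping to 200 during a midday solar window (10:00-15:00).
FORECAST = {h: 200 if 10 <= h <= 15 else 500 for h in range(24)}

def best_window(deadline_hours: int, now_hour: int = 0) -> int:
    """Pick the lowest-forecast-intensity hour to start a deferrable
    job, among the hours between now and the deadline."""
    candidate_hours = [(now_hour + h) % 24 for h in range(deadline_hours)]
    return min(candidate_hours, key=FORECAST.__getitem__)

# A batch job submitted at midnight that must start within 18 hours
# gets deferred to the start of the solar window:
print(best_window(deadline_hours=18))  # 10
```

Everything else about the job stays the same; only the start time moves.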

The Skeptic’s Corner #

I want to be honest about where my skepticism lives.

Individual software teams optimizing their carbon footprint is good, but it’s incremental. The largest levers are at the infrastructure layer — how data centers are powered, how hardware gets manufactured and recycled, how cloud providers allocate renewable energy credits. An application developer choosing between us-east-1 and eu-west-1 is making a real but small decision compared to Amazon deciding to build a solar farm.

There’s also a measurement problem. The SCI specification is promising, but the inputs are estimates built on estimates. Marginal carbon intensity varies by hour; embodied emissions calculations rely on manufacturer-reported data (which is inconsistent at best). We’re building dashboards on shaky foundations.

None of that means we shouldn’t try. The alternative — pretending software has no environmental impact — is obviously worse. And measurement tools will get better; they always do.

But I’d resist the temptation to treat this as a solved problem. It’s not. It’s a nascent field with immature tooling and a lot of greenwashing potential. The companies that take it seriously will need to be honest about the limitations of their measurements, not just the headline numbers.

Where This Goes #

I think sustainability in software engineering will follow the same arc as security did a decade ago. First ignored, then bolted on, then (eventually) built in. We’re somewhere between ignored and bolted on right now.

The Green Software Foundation gives the movement institutional gravity, which matters for adoption. Standards bodies are boring but they’re how industries actually change. If the SCI specification matures into something teams can realistically implement, it could do for carbon accounting what DORA metrics did for deployment frequency: give us a shared vocabulary and something to track.

For now, I’m measuring what I can, learning the vocabulary, and watching the spec evolve. It’s not heroic, but it’s a start.