Productivity: How to ship faster and on time
Managing both kinds of teams, the on-time and the perpetually late, taught me one thing above all: the talent gap between them was roughly zero. The process gap was enormous.
An uncomfortable truth in this industry: shipping speed is far more a leadership problem than a technical one. Engineers love to blame late deliveries on “complexity” or “technical debt” or “scope creep,” and those factors are real enough. More often, though, nobody established clear boundaries around what “done” means, nobody surfaced blockers early enough, and nobody had the nerve to cut scope when reality diverged from the plan.
A few weeks back, the “6th day” post covered individual productivity — reclaiming eight hours per week through better personal habits. Consider this post the team-level companion: how to get a group of smart engineers shipping faster without burning them out.
Honestly, the full answer eludes me. But enough observations have accumulated that articulating a rough playbook feels worthwhile.
The Estimation Problem #
Most engineering teams fail at estimation, which nobody disputes; the research documents it extensively. The “Accelerate” study (Forsgren, Humble, and Kim) found that elite organizations don’t estimate better — they deliver smaller increments more frequently, making individual estimate accuracy far less critical.
That insight reshaped how planning works entirely.
Instead of asking “how long will this take?” the better question is: “what’s the smallest useful thing we can ship this week?” Small-batch delivery sidesteps the estimation problem entirely. No team needs to predict whether a feature will take six weeks when working increments ship every four days.
The resistance never varies: “The feature has no value until all of it ships.” True in rare cases; usually not. The minimum viable increment is almost always smaller than people expect. A feature flag wrapping a half-implemented capability (deployed to production, but not user-facing) still validates the integration, the deployment pipeline, and the code path. That’s measurable progress.
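The flag pattern is simple enough to sketch. This is a minimal illustration, not any particular flag library; the flag name and the checkout functions are hypothetical:

```python
# Minimal feature-flag gate: the half-built code path ships to production
# but only executes when the flag is flipped on.
# "new_checkout_flow" and the checkout functions are hypothetical examples.

FLAGS = {"new_checkout_flow": False}  # off in production until ready

def is_enabled(flag: str) -> bool:
    """Look up a flag; unknown flags default to off."""
    return FLAGS.get(flag, False)

def checkout(cart):
    if is_enabled("new_checkout_flow"):
        # Half-implemented path: still exercised in staging and tests,
        # so the integration and deploy pipeline get validated early.
        return new_checkout(cart)
    return legacy_checkout(cart)  # what users actually see

def new_checkout(cart):
    return f"new:{len(cart)} items"

def legacy_checkout(cart):
    return f"legacy:{len(cart)} items"
```

Flip the flag in staging to exercise the new path while production stays on the legacy one.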
Scope Management Is Leadership Work #
Here’s where engineering managers reliably fail: treating scope as a technical decision. “The tech lead scoped it” or “product defined the requirements” or “engineers estimated it” — familiar deflections, all of them.
Scope management demands leadership. Someone must say “we’re cutting this” when the timeline tightens, and defend that cut against product, design, and executive pressure. Engineering managers who can’t cut scope can’t ship on time; the two skills are inseparable.
A simple framework handles this. Every feature breaks into three tiers:
Must-have: The feature fails without this. Users file bugs if production ships without it.
Should-have: Makes the feature meaningfully better. Worth building when the timeline permits.
Nice-to-have: Polish, edge cases, optimizations. Build these if slack appears at the end.
The trick, and the part most teams get wrong, is to lock the must-have tier before engineering starts and treat it as immutable. Not “immutable unless someone argues hard enough.” Immutable. Everything else stays negotiable, and the whole team understands this from day one.
When a project falls behind (most do), trimming scope from the should-have and nice-to-have tiers becomes mechanical, not political. Nobody argues, because the framework existed from the start. The must-have ships on schedule. The rest ships in a follow-up iteration.
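“Mechanical, not political” can be made literal: once tiers are data, cutting scope is a loop, not a meeting. A hypothetical sketch, with made-up task names and day estimates:

```python
# Three-tier scope framework as data: must-haves are immutable,
# everything else is trimmed mechanically to fit the remaining time.
# Task names and estimates below are hypothetical.
from dataclasses import dataclass

MUST, SHOULD, NICE = "must-have", "should-have", "nice-to-have"

@dataclass
class Task:
    name: str
    tier: str
    days: float  # rough effort estimate

def cut_to_fit(tasks, days_left):
    """Keep every must-have unconditionally, then add should-haves and
    nice-to-haves only while the remaining budget allows."""
    keep = [t for t in tasks if t.tier == MUST]
    budget = days_left - sum(t.days for t in keep)
    for tier in (SHOULD, NICE):
        for t in (t for t in tasks if t.tier == tier):
            if t.days <= budget:
                keep.append(t)
                budget -= t.days
    return keep
```

With five days left, a plan like `[Task("auth", MUST, 3), Task("audit log", SHOULD, 2), Task("dark mode", NICE, 2)]` keeps the must-have plus whatever fits; nobody has to argue about it.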
Does the framework always hold? No. Must-haves balloon sometimes. But the approach outperforms improvising.
Visibility Kills Surprises #
The primary reason teams miss deadlines: problems stay hidden until it’s too late for anyone to react.
An engineer hits a blocker Wednesday. Thursday goes to working around it. Friday the blocker surfaces in standup. Monday the tech lead investigates. Tuesday reveals the blocker needs a design change. A full week gone — to a problem that a 15-minute conversation Wednesday morning could have resolved by Thursday afternoon.
Delayed escalation of blockers destroys delivery timelines more than any other single factor.
My fix: daily async check-ins (not standups; a shared doc updated by 10am) where the only mandatory field reads “blocked on.” Not “what finished yesterday.” Not “what starts today.” Just blockers. Blockers drive timeline management; everything else falls into nice-to-know territory.
When a blocker surfaces, the engineering manager — me, in practice — gets two hours to respond. Two hours. Not end of day. Not the next standup. Two hours. Every hour a blocker lingers means an hour the team loses to waiting.
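The two-hour rule is easy to enforce if check-ins are structured data instead of free text. A hypothetical sketch, assuming one entry per engineer with `blocked_on` as the only required field:

```python
# Flag check-in blockers that have gone unanswered past the response
# window. The check-in schema (who / blocked_on / reported_at / responded)
# is a hypothetical example, not a real tool's format.
from datetime import datetime, timedelta

RESPONSE_SLA = timedelta(hours=2)

def overdue_blockers(checkins, now=None):
    """Return blockers reported more than RESPONSE_SLA ago with no response yet."""
    now = now or datetime.now()
    return [
        c for c in checkins
        if c["blocked_on"]                 # empty string means "not blocked"
        and not c.get("responded")
        and now - c["reported_at"] > RESPONSE_SLA
    ]
```

Run it on a schedule and ping the engineering manager for anything it returns; the point is that the SLA is checked by a script, not by memory.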
Overkill? Possibly. But let a few blockers fester and the outcome stops looking pretty fast.
The Deployment Cadence Advantage #
Teams that deploy frequently ship faster. It sounds circular, except the DORA research quantifies it: elite teams deploy multiple times a day, and those same teams show faster lead times, lower change failure rates, and faster recovery. Frequent deployment generates a virtuous cycle.
Why? Small deployments carry low risk. With ten lines of code, the blast radius of a bug stays small, rollback is simple, and the debugging surface shrinks to almost nothing. With ten thousand lines after three weeks of batching, everything breaks together and the culprit change disappears into the noise.
CI/CD through Cloud Build with automated tests handles deployment here. Every PR triggers the full test suite. Merges to main deploy to staging automatically. Production deploys need one manual approval. Nothing more. Near-zero overhead means engineers deploy the moment a meaningful unit of work finishes.
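The post doesn’t show the actual pipeline, but a minimal `cloudbuild.yaml` along these lines captures the shape described. Image names, the region, and the service name are placeholders, not the real configuration:

```yaml
steps:
  # Run the full test suite on every build (triggered per PR and per merge).
  - name: 'python:3.12'
    entrypoint: 'bash'
    args: ['-c', 'pip install -r requirements.txt && pytest']

  # Build and push the container image.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:$SHORT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/app:$SHORT_SHA']

  # On merges to main, deploy the image to the staging service.
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: 'gcloud'
    args: ['run', 'deploy', 'app-staging',
           '--image', 'gcr.io/$PROJECT_ID/app:$SHORT_SHA',
           '--region', 'us-central1']
```

The production deploy would presumably hang off a separate trigger with Cloud Build’s manual-approval setting enabled; that detail is an assumption, since the post only says production needs one manual approval.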
Deployment pipelines running longer than 15 minutes from merge to production need fixing before anything else. Heavy pipelines incentivize batching. Batching produces large, risky releases. Large releases breed anxiety. Anxiety compounds slowness.
Dependency Tracking (The Unglamorous Essential) #
Nobody wants to discuss dependency tracking. Boring. Overhead. Bureaucracy — every developer says so.
Dependency tracking also ranks as the single most common source of missed deadlines in cross-team projects.
Team A needs Team B’s API ready by October 1st. Team B knows, sort of, but October 1st competes with three other priorities on Team B’s plate. Team A skips the check-in until September 25th. Team B says “we’ll try” — which translates to “probably not.” Team A misses the deadline because Team B’s API ships October 15th.
This exact scenario played out at every company in my career. TaskRabbit, Google, five-engineer startups — the pattern holds regardless of scale.
The fix requires only discipline: for every cross-team dependency, one person checks status weekly. Not monthly. Weekly. And the check-in skips “are you on track?” (people answer yes reflexively). Demand artifacts instead — “show me the PR” or “show me the staging environment.” Artifacts over assertions.
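“Artifacts over assertions” can also be enforced by a script rather than by willpower. A hypothetical sketch, assuming each dependency records the artifact last seen and when it was last verified:

```python
# Weekly dependency check: a dependency with no artifact, or one not
# verified within the last week, gets flagged for a human follow-up.
# The dependency record format below is a hypothetical example.
from datetime import date, timedelta

def stale_dependencies(deps, today=None, max_age_days=7):
    """Flag dependencies with no artifact, or none verified recently."""
    today = today or date.today()
    flagged = []
    for d in deps:
        if not d.get("artifact"):                                   # "we'll try" is not an artifact
            flagged.append(d)
        elif today - d["last_verified"] > timedelta(days=max_age_days):
            flagged.append(d)                                       # artifact exists but is stale
    return flagged
```

The output is the agenda for the weekly check-in: every flagged entry means someone asks “show me the PR” before the due date is close enough to hurt.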
The Uncomfortable Truth About Speed #
None of this feels glamorous. Small increments. Cut scope ruthlessly. Surface blockers early. Deploy frequently. Track dependencies manually.
No “10x engineer” trick appears anywhere. No productivity tool transforms a team. No framework automates good judgment. Teams shipping consistently and reliably share these same traits, and none of them describe the process as exciting.
Which, honestly, nails the whole point. Exciting processes — big reveals, heroic pushes, dramatic pivots — signal a delivery culture optimized for drama rather than output. Boring, predictable shipping defines the goal. Boring deploys, estimates landing close, scope decisions front-loaded rather than scrambled at the end: those signal a team shipping well.
After years of managing teams through these transitions, I find the playbook barely shifts. The resistance shifts constantly. Engineers want to build the cool thing, not manage scope tiers. Product managers want to say yes to stakeholders, not negotiate trade-offs. Leaders want timeline confidence, not weekly dependency check-ins.
Shipping on time means building the muscles to do the boring work consistently. Free to implement and genuinely hard to sustain — which explains why most teams skip it.
Or maybe the whole picture misses something. Maybe there’s a magic bullet out there, undiscovered. After twenty years building software, though, my conviction keeps growing: the boring stuff is the only stuff that actually works.