Judge: Scaling Tools for Volunteer Mobilization
Last weekend I sat on a judging panel for a disaster-relief hackathon here in SF. Twelve teams, forty-eight hours, one shared obsession: how do you get the right volunteer to the right place at the right time when everything’s falling apart?
I’ve been thinking about this problem since 2011. Back then I co-founded Doare.org in Rio, which became the largest donation platform for non-profits in Latin America. Donations, though — not coordination. The coordination piece is orders of magnitude harder, and watching these teams wrestle with it brought back a familiar frustration.
The state of the art has moved. Genuinely moved. But the distance between “impressive demo” and “works when cell towers are down” remains vast.
What Actually Works at Scale #
The submissions that impressed me most weren’t the flashiest. They were the ones that acknowledged operational reality.
Team Rubicon is the benchmark everyone referenced, and for good reason. They scaled to 190,000+ volunteers in 2024 and served 838 communities; that’s not a pitch deck number, that’s throughput. What makes their system work isn’t the AI layer (though they use Microsoft Dynamics 365 Field Service for volunteer management) — it’s the data model underneath. Skills, availability, location, language proficiency, physical capability. All structured. All queryable in real time.
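To make "all structured, all queryable" concrete, here is a minimal sketch of what such a volunteer record can look like. The field names and the `can_serve` filter are my own illustration, not Team Rubicon's actual Dynamics 365 schema:

```python
from dataclasses import dataclass

@dataclass
class Volunteer:
    # Every axis the dispatcher filters on is a structured field,
    # not free text buried in a signup form.
    volunteer_id: str
    skills: set          # e.g. {"emt", "chainsaw", "swift-water"}
    languages: set       # e.g. {"en", "es"}
    lat: float
    lon: float
    available: bool      # current availability flag
    max_lift_kg: int     # crude proxy for physical capability

def can_serve(v, required_skills, required_langs):
    """Real-time filter: the record is only useful if every axis is queryable."""
    return (v.available
            and required_skills <= v.skills
            and bool(required_langs & v.languages))
```

The point isn't the specific fields; it's that each dimension is a first-class, filterable attribute rather than prose in a profile.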
Three of the hackathon teams built volunteer-matching engines. The best one analyzed six dimensions simultaneously: skill match, proximity, availability window, language overlap, physical requirements, and team composition balance. Watching it assign a bilingual EMT to a flood response thirty miles away — factoring in that she’d already completed her shift at another site — felt like seeing the future of dispatch.
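Those six dimensions translate naturally into a weighted scoring function. What follows is a sketch of the shape of that engine, not the team's code; the weights, dictionary field names, and distance-based proximity decay are all assumptions of mine:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_score(vol, task, team_skills):
    """Score one volunteer against one task across six dimensions.
    Weights are illustrative, not anyone's actual tuning."""
    skill = len(vol["skills"] & task["skills"]) / max(len(task["skills"]), 1)
    dist_km = haversine_km(vol["lat"], vol["lon"], task["lat"], task["lon"])
    proximity = 1.0 / (1.0 + dist_km / 10.0)   # decays smoothly with distance
    availability = 1.0 if vol["free_until"] >= task["ends_at"] else 0.0
    language = 1.0 if vol["languages"] & task["languages"] else 0.0
    physical = 1.0 if vol["max_lift_kg"] >= task["min_lift_kg"] else 0.0
    # Team balance: reward skills the assembled team still lacks.
    novelty = len((vol["skills"] & task["skills"]) - team_skills) / max(len(task["skills"]), 1)
    return (0.3 * skill + 0.2 * proximity + 0.2 * availability
            + 0.1 * language + 0.1 * physical + 0.1 * novelty)
```

The last term is the one worth noticing: rewarding a volunteer for bringing a skill the team doesn't yet have is what keeps a matcher from sending five EMTs and no drivers to the same site.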
The worst one used GPT-4 to write motivational text messages to volunteers. I gave it a 2 out of 10.
The Craigslist Lesson #
One team built something that reminded me of Crisis Cleanup, which takes a deliberately low-tech approach to disaster work orders. Their model is basically craigslist for disaster relief: organizations post what needs doing, volunteers claim tasks, and the platform handles deduplication across agencies so three different groups don’t show up to tarp the same roof.
It’s unglamorous. It works.
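Deduplication across agencies is the technically interesting part of that model. Here is a crude sketch of the idea, assuming a coordinate grid-snap plus task type as the collision key; Crisis Cleanup's real matching logic is more sophisticated and not something I have access to:

```python
def dedupe_key(order):
    """Collapse work orders from different agencies that target the same site.
    Rounding to three decimal places puts nearby reports (~100 m) in the
    same grid cell; pairing that with task type is a crude collision key."""
    return (round(order["lat"], 3), round(order["lon"], 3), order["task_type"])

def merge_orders(orders):
    """Merge duplicate orders, recording the overlap instead of the work twice."""
    seen = {}
    for o in orders:
        key = dedupe_key(o)
        if key in seen:
            seen[key]["agencies"].add(o["agency"])
        else:
            seen[key] = {**o, "agencies": {o["agency"]}}
    return list(seen.values())
```

Three agencies reporting the same roof become one work order with three interested agencies, which is exactly the "don't tarp the same roof three times" guarantee.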
The UNDP’s 2024 report on innovation in disaster management gets enthusiastic about frontier tech (AI, IoT, blockchain for supply chain verification), but buried in the appendix is a warning I wish more hackathon teams had read: the digital divide means your fancy AI dispatcher is useless if the affected community can’t get online. One team’s submission required 5G connectivity. In a disaster zone. I asked them what happens when the towers go down and got a blank stare.
Where AI Actually Helps #
I’m not anti-AI here. I’m anti-AI-as-decoration.
Trek Medics dispatched responders to 40,000+ emergencies in 2024 using SMS-based coordination that works on feature phones. Now they’re exploring AI route optimization — not to replace the dispatcher, but to suggest faster paths when multiple calls come in simultaneously. That’s the right application. AI as force multiplier for human judgment, not replacement.
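As a sketch of what "suggest faster paths" can mean when calls stack up, here is a greedy nearest-responder pass. Everything in it is my own illustration, not Trek Medics' dispatch logic, and the output is a suggestion for the human dispatcher, not an assignment:

```python
import math

def dist_km(a, b):
    # Equirectangular approximation: fine at dispatch scale, cheap to compute.
    x = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
    y = math.radians(b[0] - a[0])
    return 6371.0 * math.hypot(x, y)

def suggest_assignments(responders, calls):
    """responders: {id: (lat, lon)}; calls: dicts with id, lat, lon, priority.
    Lower priority number means more urgent. Greedy: most urgent call gets
    the nearest free responder; the dispatcher confirms or overrides."""
    free = dict(responders)
    plan = []
    for call in sorted(calls, key=lambda c: c["priority"]):
        if not free:
            break
        best = min(free, key=lambda r: dist_km(free[r], (call["lat"], call["lon"])))
        plan.append((call["id"], best))
        del free[best]
    return plan
```

Greedy isn't optimal, but it's explainable in one sentence over an SMS channel, and explainability matters when a human is making the final call.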
The volunteer-to-task matching problem is genuinely well-suited to ML. You’ve got structured inputs (skills, location, availability), a clear objective function (minimize response time while maximizing skill match), and historical data to train on. IBM Watson and Google’s own AI tools are already powering some of these systems. The matching algorithms aren’t the bottleneck anymore.
The bottleneck is data quality. And interoperability. And the fact that every relief organization runs its own database with its own schema and its own definition of what “available” means.
Digital Volunteering Isn’t One Thing #
A framework I found useful while evaluating submissions comes from a 2024 paper on digital volunteer services that breaks the concept into four categories: bridging (connecting existing resources), supportive (augmenting professional responders), complementary (filling gaps professionals can’t reach), and collaborative (co-creating solutions with affected communities).
Most hackathon teams built bridging tools. The strongest submission built a collaborative one — a platform where disaster-affected community members could report needs directly, tag them by urgency, and have local volunteers (not outside responders) handle triage. It respected the community’s agency instead of treating them as passive recipients.
The Marketplace Engineering Problem #
Here’s where my TaskRabbit background kicks in. Volunteer mobilization at scale is a marketplace problem. You’ve got supply (volunteers), demand (disaster needs), matching complexity, and trust/verification requirements. The parallels to two-sided marketplace engineering are striking.
At TaskRabbit we spent years optimizing for response time, skill matching, and geographic coverage across four time zones with nine engineers. The disaster relief space faces those same challenges with worse data, higher stakes, and no billing at all, which means the typical marketplace feedback loops don't exist.
What I told the teams: the technology for volunteer matching exists. The AI for route optimization exists. The mobile platforms for registration and dispatch exist. What doesn’t exist is a shared data layer. Every organization is an island. Crisis Cleanup gets closest to solving this, but even they’re one platform among many.
What I’m Watching #
The hackathon produced two submissions I’ll follow. One built a lightweight agent that sits between existing volunteer management systems and translates between their APIs — a universal adapter pattern rather than a replacement platform. The other built an offline-first dispatch tool that syncs when connectivity returns; it cached AI-generated task assignments locally and reconciled conflicts on reconnect.
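For the offline-first tool, the reconcile step is the hard part. Here is a deliberately simple last-writer-wins sketch, assuming each cached assignment carries an update timestamp; the team's actual conflict rules weren't part of the submission writeup:

```python
def reconcile(local, remote):
    """Merge locally cached task assignments with the server copy on reconnect.
    Policy: per-task last-writer-wins on an update timestamp. Simple, lossy
    on true concurrent edits, but predictable when connectivity is flapping."""
    merged = dict(remote)
    for task_id, entry in local.items():
        if task_id not in merged or entry["updated_at"] > merged[task_id]["updated_at"]:
            merged[task_id] = entry
    return merged
```

Last-writer-wins can silently drop a concurrent edit, which is why a real system would log the losing entry for dispatcher review rather than discard it.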
Neither won first place. The judges (myself included) gave that to the team with the slickest demo. I’ve been second-guessing that choice all week.
The gap between what’s technically possible and what’s operationally deployed in disaster relief remains wide. Hackathons produce ideas; the hard part is building something that works when the internet’s down, the power’s out, and 10,000 people need help simultaneously. That’s not a weekend project. That’s a decade of unglamorous infrastructure work.
But watching twelve teams try — really try, with genuine ingenuity and no small amount of caffeine — reminded me why I got into this space fourteen years ago with Doare. The tools are better now. The problems haven’t changed.