Stable Worker Threads in Node.js for CPU-Heavy Tasks

Felipe Hlibco

Node.js is single-threaded. Everyone knows this.

It’s also the source of the most common misconception about Node: that it can’t do CPU-intensive work. It can. Worker Threads have been stable since Node.js 12 (2019), and they give you actual parallelism — not the cooperative multitasking of the event loop, but real OS-level threads running JavaScript in parallel.

So why, a year later, have most teams I talk to still not touched them?

I think the mental model is confusing. Let me try to fix that.

What Worker Threads Actually Are #

The worker_threads module lets you spawn JavaScript threads that run in the same process. Each worker gets its own V8 isolate — its own heap, its own garbage collector — but shares the same process memory space. That last part matters. It’s what separates worker threads from child_process or cluster.

With child_process.fork(), you spawn a whole new Node.js process. New memory space, new V8 instance, inter-process communication through serialization. Fine for many cases, but the overhead adds up if you’re spawning processes frequently.

Worker threads are lighter. They share memory through SharedArrayBuffer and can transfer ownership of ArrayBuffer instances without copying. Processing a 50MB image? Transfer the buffer to a worker, process it there, transfer the result back. No copying. Pretty neat.

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  // Main thread: spawn a worker and send it data
  const worker = new Worker(__filename, {
    workerData: { iterations: 1e8 }
  });

  worker.on('message', (result) => {
    console.log(`Result: ${result}`);
  });
} else {
  // Worker thread: do CPU-intensive work
  let sum = 0;
  for (let i = 0; i < workerData.iterations; i++) {
    sum += Math.sqrt(i);
  }
  parentPort.postMessage(sum);
}

When They Help (and When They Don’t) #

Worker threads solve one specific problem: CPU-bound work blocking the event loop.

If your Node.js server spends most of its time waiting for database responses and HTTP calls, worker threads won’t help. The event loop already handles I/O concurrency well; adding threads to an I/O-bound application just adds complexity for no gain.

But if you have operations that chew CPU for more than a few milliseconds — image resizing, PDF generation, data compression, encryption, JSON parsing of large payloads — worker threads keep that work off the main thread. Your server keeps handling requests while the heavy computation runs in parallel.

At TaskRabbit, we have a service that generates invoice PDFs. Before worker threads, a batch of 50 invoices would block the event loop for several seconds. Health check endpoints would time out. Moving PDF generation to a worker pool fixed that — no separate service needed.

Worker Pools: Don’t Spawn Per Request #

Here’s the first mistake I see teams make: spawning a new worker for each task.

Worker creation has overhead. V8 needs to create a new isolate, parse and compile the worker script, allocate memory. For short-lived tasks, the spawn overhead can actually exceed the computation time. Not great.

Use a worker pool instead. Pre-spawn a fixed number of workers and route tasks to available ones, reusing them across requests.

// Simplified pool concept (use a library like piscina in production)
const { Worker } = require('worker_threads');

class WorkerPool {
  constructor(workerPath, poolSize) {
    this.workers = [];
    this.queue = [];
    for (let i = 0; i < poolSize; i++) {
      this.workers.push({
        worker: new Worker(workerPath),
        busy: false
      });
    }
  }

  runTask(data) {
    return new Promise((resolve) => {
      const available = this.workers.find((w) => !w.busy);
      if (available) {
        this._dispatch(available, data, resolve);
      } else {
        this.queue.push({ data, resolve }); // wait for a free worker
      }
    });
  }

  _dispatch(entry, data, resolve) {
    entry.busy = true;
    entry.worker.once('message', (result) => {
      entry.busy = false;
      resolve(result);
      const next = this.queue.shift(); // drain the backlog before idling
      if (next) this._dispatch(entry, next.data, next.resolve);
    });
    entry.worker.postMessage(data);
  }
}

In practice, you probably want something like piscina. It handles pool management, task queuing, cancellation — built specifically for this use case and released earlier this year.

SharedArrayBuffer: Shared Memory Done Right #

The most powerful feature of worker threads is shared memory via SharedArrayBuffer. Unlike regular message passing (which serializes data between threads), SharedArrayBuffer lets multiple threads read from and write to the same memory.

Useful for shared counters, ring buffers, any scenario where copying data between threads would be too expensive. You’ll need Atomics for synchronization — without it, you get the same race conditions you’d see in any shared-memory system.

// Main thread
const shared = new SharedArrayBuffer(4);
const view = new Int32Array(shared);
Atomics.store(view, 0, 0);

const worker = new Worker('./counter.js', {
  workerData: { shared }
});

// Worker thread (counter.js)
const { workerData } = require('worker_threads');
const view = new Int32Array(workerData.shared);
Atomics.add(view, 0, 1); // Thread-safe increment

Fair warning: shared memory is powerful but error-prone. If you don’t need it, stick with message passing. The serialization overhead of postMessage is usually fine, and the code is much easier to reason about. I’ve debugged enough race conditions to know — sometimes “good enough” performance beats “optimal” performance when the optimal version is a minefield.

When to Scale Out Instead #

Worker threads give you parallelism within a single process. But there’s a ceiling: the number of CPU cores on your machine. Need more parallelism than that? You’re looking at horizontal scaling — multiple processes via cluster or multiple containers behind a load balancer.

My rule of thumb is pretty simple. If the CPU-bound work is occasional and bounded — processing an upload, generating a report — worker threads are the right tool. If CPU-bound work is your primary workload and you need to maximize throughput, you probably want separate processes or maybe a different runtime altogether.

Worker threads complement Node’s async model. They don’t replace it. The event loop handles I/O; worker threads handle computation. Keep that mental model straight and you’ll make the right architectural calls.