WGA Strike: The Battle Over AI-Generated Content
On May 2nd, 11,500 members of the Writers Guild of America went on strike. The usual issues are on the table: compensation, residuals, staffing minimums. But one demand stands apart from everything else, and it’s the one I can’t stop thinking about.
The WGA wants explicit contractual language preventing studios from using AI to generate or rewrite scripts. They want guarantees that AI-generated text cannot be considered “literary material” under guild agreements, and that no AI system can receive writing credit.
Three weeks into the strike, the picket signs are everywhere in LA. And the fight they’re having is one that every knowledge industry is going to have eventually.
The Writer’s Fear (And Why It’s Rational)
Here’s the scenario that keeps WGA members up at night: a studio feeds a ChatGPT-class model a prompt — “write a 22-minute sitcom episode about a family road trip, in the style of Modern Family” — and gets back a passable first draft in thirty seconds. The studio then hires a writer, not to write from scratch, but to polish the AI’s output. The writer gets paid less (because it’s “just a rewrite”), gets diminished credit (because the “source material” came from the AI), and becomes, functionally, an editor of machine-generated content.
This isn’t hypothetical. Writers have reported studios already exploring this exact workflow. The economics are brutal: if you can generate a first draft for the cost of an API call, the writer’s value proposition shifts from “creates the work” to “makes the machine’s work publishable.” That’s a different job, at a different pay grade, with different leverage.
The WGA’s position is clear. AI can be a tool that writers use, but it cannot be the source of “literary material.” Writers should be able to use AI in their process (research, brainstorming, iterating on their own drafts) without the studio claiming the output was AI-generated and therefore outside guild protections.
It’s a smart framing. They’re not anti-technology. They’re drawing a line around authorship and compensation.
The Studio’s Calculus
Studios haven’t said much publicly about their AI plans, which tells you more than any statement would. The Alliance of Motion Picture and Television Producers (AMPTP) responded to the WGA’s AI proposals by offering annual meetings to discuss the technology — a non-answer that the WGA rightly rejected.
The studios’ silence makes strategic sense. Committing to any specific AI policy now locks them in before the technology’s trajectory is clear. A year from now, AI-generated scripts might be significantly better (or significantly worse, or legally questionable). Why concede territory you might not need to defend?
But the silence also reveals the underlying tension. Studios see AI as a potential cost reduction tool. Writers see it as an existential threat to their profession. Both perspectives are economically rational, which is precisely why negotiation is so difficult. This isn’t a misunderstanding that better communication resolves; it’s a genuine conflict of interest.
Why This Matters Beyond Hollywood
I work in tech, not entertainment. But the WGA strike hits close to home for reasons that go beyond sympathy for writers.
Every knowledge worker who produces creative or analytical output is watching this fight. Software engineers have GitHub Copilot writing code suggestions. Journalists have AI summarization tools. Designers have Midjourney and DALL-E generating visual concepts. Marketing teams have Jasper and Copy.ai producing copy at scale.
The pattern is the same everywhere: AI generates a first draft, a human refines it. The questions the WGA is forcing into the open — who is the author? who gets compensated? what is the human’s role? — apply to all of these domains.
If studios establish the precedent that AI-generated first drafts are the studio’s intellectual property (because the studio paid for the API access), that precedent extends. If a company can claim ownership of AI-generated code because they licensed Copilot, what happens to the developer’s claim on their work product? If an AI generates the initial design concept and a designer refines it, is the designer an artist or a post-processor?
These aren’t abstract philosophical questions. They’re compensation questions. Credit questions. Career identity questions.
The Legal Vacuum
What makes this fight particularly messy is the absence of a legal framework. Copyright law in the US currently doesn’t recognize AI-generated works as copyrightable (the Copyright Office has been clear about requiring human authorship). But that guidance doesn’t address the work-for-hire dynamics at play here.
If a studio prompts an AI to generate a script outline and then hires a writer to develop it — who authored what? The outline isn’t copyrightable (AI-generated), but the final script is (human-written). Where does one end and the other begin? If the AI’s structure, character arcs, and plot points survive into the final draft, the writer functionally worked within an AI-defined creative framework. Is that meaningfully different from working with a human showrunner’s outline?
Current copyright law doesn’t have answers. Current labor law doesn’t have answers. The WGA is trying to establish contractual protections precisely because statutory protections don’t exist yet.
I keep thinking about how this parallels the early days of open source licensing. The legal frameworks didn’t exist, so communities created their own through contracts (licenses). The GPL, MIT, Apache — these were contractual solutions to legal gaps. The WGA is doing the same thing: writing AI protections into their guild agreement because no law requires it.
The Pace Problem
Here’s what genuinely worries me about this situation. The WGA is negotiating over GPT-4-era capabilities. By the time this contract is ratified (these negotiations historically take months), the AI landscape will have shifted again. Negotiating AI policy in 2023 based on what the technology can do today is like negotiating streaming residuals in 2005 based on what Netflix looked like then.
The WGA knows this, which is why they’re pushing for principles rather than technology-specific language. “AI cannot be the source of literary material” is a principle that holds regardless of whether the AI is GPT-4 or whatever comes next. “AI cannot receive writing credit” doesn’t depend on the specific model’s capability. That kind of forward-looking, principle-based framing is exactly right.
But principles need enforcement mechanisms, and enforcement mechanisms need to be specific enough to be actionable. Somewhere between “AI is a tool writers can use” and “AI cannot replace writers,” there’s a boundary that will be tested constantly as the technology improves. Holding that boundary requires ongoing vigilance, not a one-time contractual clause.
What I’m Watching
I don’t know how this strike ends. Nobody does — that’s the point of a negotiation. But several things are clear to me.
The WGA’s fight will set precedent that extends far beyond screenwriting. Whatever contractual language they achieve (or fail to achieve) will become the template for other creative guilds, professional associations, and eventually individual employment contracts. The stakes are higher than one industry’s labor dispute.
The studios’ reluctance to commit to AI policy is a tell. Their long-term plan involves AI-generated content at some level; the only question is how much human involvement they’ll contractually guarantee. With every month they delay, AI capabilities advance and their negotiating position strengthens.
And the broader challenge — building labor protections that keep pace with exponential technology advancement — remains unsolved. The WGA is fighting today’s version of a fight that every knowledge profession will have over the next decade. They’re the canary. The coal mine is enormous.