There are stories in cinema that read like ancient myths: lost cities, cursed paintings, and—if you spend enough time in film-nerd corners—lost reels. Few of those stories carry the emotional gravity of The Magnificent Ambersons, Orson Welles’s follow-up to Citizen Kane, which was famously gutted by studio hands and had nearly a full hour of footage destroyed. Now, a Silicon Valley startup called Showrunner says it wants to try to bring some of that missing material back to life—using generative AI.
The long, unhappy tale of Ambersons
A quick primer: Welles shot what he thought was a 131-minute epic in 1941–42. After only a couple of previews, RKO took the picture away from him, chopped it down to about 88 minutes, tacked on a new ending and—cruelest of all—destroyed the negatives of the removed material. That excision has been the stuff of legends ever since; scholars and cinephiles have pored over scripts, production stills and memos trying to imagine what Welles actually intended. The missing footage isn’t just a curiosity: it’s a gap in the historical record of one of cinema’s great directors.
Which is why Showrunner’s announcement landed like both a promise and a provocation. The company (backed in part by deep-pocketed investors, including Amazon) has debuted a new internal model, branded FILM-1, and said it will use generative tools to reconstruct “keyframes” for the lost scenes, stitch those to 3D and photographic references of existing sets, and combine AI-mediated face and pose transfers with newly shot live-action to approximate how those sequences might have looked. It’s a hybrid, not a purely synthetic movie: AI-produced frames and spatial reconstructions, plus practical shoots, plus VFX.
How you “recreate” something that was destroyed
Showrunner’s pitch is technically ambitious and narratively fraught. The basic workflow they’ve described is part forensics, part imagination: historians and archivists provide whatever scraps exist—scripts, shot lists, stills, Welles’s own notes about camera placements and intentions—then the FILM-1 pipeline generates visual “keyframes” that fill in camera blocking and composition. Those frames act as a scaffold for live-action shoots, compositing, and VFX work; veteran artists will map actors’ performances onto those frames using face- and pose-transfer tools to evoke the faces and movements of the original cast. That is, where real footage doesn’t exist, the team plans to synthesize plausible moving images that are anchored in archival material.
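Purely as illustration, the workflow described above can be sketched as a data-flow stub: archival evidence in, generated keyframes as a scaffold, then live-action plates and face/pose transfer layered on top. Every function, type, and field name below is hypothetical; FILM-1’s actual tooling and interfaces are not public.

```python
# Hypothetical sketch of the hybrid pipeline described in reporting on
# Showrunner's project. Nothing here reflects FILM-1's real API.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    scripts: list   # cutting continuities, shot lists
    stills: list    # surviving production photographs
    notes: list     # e.g. Welles's camera-placement memos

@dataclass
class Scene:
    name: str
    keyframes: list = field(default_factory=list)  # AI-generated compositions
    plates: list = field(default_factory=list)     # newly shot live-action
    final: list = field(default_factory=list)      # composited output

def generate_keyframes(evidence: Evidence, scene_name: str) -> Scene:
    """Stand-in for the generative step: derive blocking/composition
    frames from whatever archival scraps survive."""
    frames = [f"keyframe({scene_name}, from={src})" for src in evidence.stills]
    return Scene(name=scene_name, keyframes=frames)

def shoot_against_scaffold(scene: Scene) -> Scene:
    """Live-action plates framed to match each generated keyframe."""
    scene.plates = [f"plate_matching({kf})" for kf in scene.keyframes]
    return scene

def transfer_faces_and_poses(scene: Scene) -> Scene:
    """VFX stage: map the original cast's likeness onto the new plates."""
    scene.final = [f"face_pose_transfer({p})" for p in scene.plates]
    return scene

evidence = Evidence(scripts=["cutting continuity"],
                    stills=["still_01", "still_02"],
                    notes=["crane over the ballroom"])
scene = transfer_faces_and_poses(
    shoot_against_scaffold(generate_keyframes(evidence, "ballroom")))
print(len(scene.final))  # one composited shot per surviving still
```

The point of the sketch is the dependency order: the synthetic keyframes are a scaffold that constrains the practical shoot, not a finished product, which is why the result is a hybrid rather than a fully generated film.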
Showrunner has recruited people who know the particular minefield they’re stepping into: Tom Clive, a VFX artist known for face work and high-end body-replacement effects, is on board; so is Brian Rose, a filmmaker who has spent years on a painstaking, animation-based reconstruction of Ambersons. Rose’s prior work (hand-drawn sequences, 3D set reconstructions, archival reconstruction) demonstrates the scholarship and appetite that already exist for this project, and it’s likely why Showrunner tapped him.
The legal and ethical thickets
Here’s the blunt part: Showrunner does not own the rights to The Magnificent Ambersons (those sit with the studio’s corporate successors, currently Warner Bros. Discovery, among others), and the company clearly recognizes the legal tightrope it’s walking. Its public framing is explicitly noncommercial, at least for now. Showrunner’s leadership says the aim is cultural and academic: to “see [the lost minutes] exist in the world” after decades of speculation, and to hand the work over to rights holders if a legitimate pathway for it emerges. That sounds magnanimous; it also sounds like the kind of pitch a growth-minded startup makes when it’s trying to move from provocation to partnership.
The company isn’t entering this debate as a blank slate. Showrunner’s platform famously (or infamously) produced unauthorized AI-generated episodes in the style of South Park that went viral—work that helped the company raise capital and attention, and also stoked a lot of industry suspicion. Amazon-linked investment and a splashy “Netflix of AI” positioning have put Showrunner both in the spotlight and on the defensive. That history makes the Welles move feel less like a purely scholarly rescue mission and more like an argument: “See? AI can do cinema history good.” Critics will see hubris. Rights holders will see risk. And preservationists will squint at the notion of replacing irreplaceable material with synthetic facsimiles.
What this would be—and what it wouldn’t
Two important clarifications before we run off imagining Welles in algorithmic form. First: reconstruction isn’t resurrection. The original performances, the grain and flaws of 1940s nitrate stock, the particular magic of on-set improvisation: those are gone. What Showrunner proposes is a kind of informed simulation, a creative artifact that tries to honor Welles’s plans and staging using modern tools. For historians, that could be illuminating; for purists, it will feel ersatz. Second: the process foregrounds questions of authorship. If an AI-assisted team composes a ballroom sequence “as Welles might have shot it,” who is the author? The director on paper? The VFX supervisor? The model? That’s not just a philosophical riff; it’s a practical one, because attribution and royalties hinge on the answers.
Why people care
There’s a hunger around Ambersons that’s partly aesthetic and partly archaeological. Welles’s betrayed ambitions (and the myth of his merciless studio beatdown) are a big reason film students still sit up late arguing about what might have been. A credible, transparent reconstruction, released alongside clear documentation of what’s original and what’s generated, could be an invaluable tool for teaching and debate. But if the work arrives in cloak-and-dagger fashion or is treated like a closed-source product, it will likely inflame the same fights that dogged Showrunner in its formative days: authorship, compensation, and whether AI is a service for culture or just a set of tricks for attention.
So what now?
For now, the announcement is a promise and a provocation: a promise that new tech can answer long-standing cultural questions, and a provocation to guardians of the cinematic canon. Showrunner has rolled the dice in public, invited scrutiny, and—whether you love or hate the idea—you have to admit it places a very old problem (lost art) into a very new light (generative media). Over the next year or two, we’ll see whether archivists, rights holders and audiences can find a way to make this experiment more than a headline. If nothing else, it’s opened the conversation about what “restoring” a film really means when the original is gone for good.
Discover more from GadgetBond