How Gossip Goblin Is Becoming the Auteur of the AI Age


This story comes from The Hollywood Reporter’s upcoming AI Issue, which publishes March 31. Check out further stories throughout the week, and the complete issue next week.

On Instagram, the filmmaker known as Gossip Goblin posts bleak sci-fi epics set in strange worlds populated by mutated creatures and bunker societies. The images are accompanied by philosophical narrators contemplating reality. The short films look uncannily like fragments of big-budget genre cinema. But they weren’t shot on soundstages or rendered by a VFX studio: They were generated and assembled using artificial intelligence.

At a moment when Hollywood and Silicon Valley are still arguing over what AI actually is — a cost-cutting tool, a visual gimmick or the foundation of an entirely new cinematic language — Gossip Goblin offers a more provocative possibility: that AI already has an emerging aesthetic, and that it belongs not to studios but to individuals willing to wrestle it into something personal. His films don’t reject the medium’s telltale strangeness — the dream logic, the synthetic textures, the sense of images half-remembered rather than fully observed — but lean into it, suggesting a form of storytelling that feels less like traditional filmmaking than like visualized thought.

The man behind the account is Zack London, 35, a Los Angeles native who has so far kept a relatively low public profile even as his work has spread widely online. The name “Gossip Goblin,” he says, began as a deliberately unserious alias — a kind of internet pseudonym — but has since become a banner for a growing body of work that is anything but disposable. London studied sculpture and anthropology at Pitzer College before drifting into product design and virtual-reality work at tech companies like Oculus. Four years ago, he relocated to Stockholm after meeting his Swedish partner. While experimenting with early image-generation software after work, he stumbled onto a new way of visualizing the stories he’d long been writing.

Since then, Gossip Goblin has quietly amassed more than 1 million followers on Instagram and millions more views across platforms. London recently quit his tech job, raised a small round of funding and launched a studio to produce longer AI-driven films with a small international team. His first major effort, a 20-minute short titled The Patchwright — set in a grungy, Blade Runner-esque world populated by flesh-and-metal hybrid characters and featuring a full cast of voice actors, a foley artist and an original score — is set to be released in the coming weeks after roughly five months of production.

The approach puts him in a curious position within the fast-moving AI landscape. While social media is flooded with one-click AI videos (often dismissed as “slop”), London insists his projects still involve many of the same steps as traditional filmmaking: scripts, shot lists, voice actors, foley artists and extensive editing.

Whether that process represents the future of independent filmmaking or simply a transitional curiosity remains an open question. But Hollywood is already paying attention. London says he has fielded calls from studios, actors and directors curious about what AI storytelling might become.

The Hollywood Reporter spoke with London about how his films are made, why most AI content fails to stand out and whether a legitimate blockbuster could someday emerge from this new medium.

You’re from Los Angeles originally. How did you end up doing this from Stockholm?

I grew up in the Valley and studied sculpture and anthropology at Pitzer College — two very lucrative disciplines. I thought maybe I’d go to law school after, but I ended up doing a Fulbright in Malaysia and spent almost two years traveling around Southeast Asia. After that I moved to the Bay Area and started working in tech as a product designer at startups and eventually at Facebook on Oculus doing virtual reality work. I moved to Sweden about four years ago after meeting a Swedish girl — it was either she moved to the States or I moved here, so here I am. Filmmaking was never really part of the plan. I’ve always illustrated and written stories, even self-publishing some small books of travel writing and short fiction, but it never occurred to me that making films was something available to me. AI kind of changed that.

How did you first start experimenting with AI tools?

About three and a half years ago I was messing around with early image-generation tools with a coworker after work. We were trying to use them for a design project and the results were terrible — totally unusable for corporate work — but the technology itself was fascinating. Before video generation existed I started doing a tongue-in-cheek travel writing series about a fictional country called Urumquan, written in the style of 1980s National Geographic. I created an entire fake ethnography of this imaginary Soviet satellite state and used Midjourney to generate images that looked semi-documentary but surreal. It unexpectedly took off online and got me excited about storytelling again. When the video tools started appearing, I realized moving pictures meant you could actually build narrative worlds — even though early on the technology was so limited that the storytelling had to adapt to what the AI could realistically produce.

Your work looks far more polished than most AI videos online. How are these films actually made?

The biggest misconception is that someone types “sci-fi film” into a prompt and a movie pops out the other side. Maybe we’ll get there eventually, but that’s not where the technology is today. Our process starts with a script, and then we break that script into something like a traditional shot list — every scene, every angle, every environment. After that we start exploring the visual world: what the characters look like, what the creatures look like, what kind of lighting and architecture this world has. Once we define that aesthetic, we generate and refine hundreds or thousands of images and video clips that fit the story, and then everything gets assembled and edited in DaVinci Resolve like a normal film. We also work with voice actors and even a foley artist to create sound effects, so there’s still a lot of traditional filmmaking craft involved.

What AI tools are you using to generate the imagery?

Quite a lot of them — somewhere between 15 and 25 tools across the entire pipeline. There isn’t one magic generator that does everything. Some tools are better for creating initial images, others are better at replicating characters consistently across different scenes, and others are better for motion or animation. Midjourney is still a favorite for generating images, but we also use other models that are better at reproducing a specific character from multiple angles or lighting conditions. Consistency is one of the hardest problems in AI filmmaking — if a character changes appearance from shot to shot, the illusion falls apart — so a lot of the work is figuring out how to control the outputs across different tools.

One thing I noticed watching your short film is that it relies heavily on narration rather than dialogue. Was that intentional?

Mostly that was a technical limitation. When we made that film, the tools simply weren’t good enough to produce convincing dialogue scenes with synchronized speech and performances. If we tried to do it, it would have felt awkward or artificial, so we leaned into narration and atmosphere instead. The next project we’re working on is around 25 minutes long and much more dialogue-driven because the technology has improved significantly since then. The tools are evolving so quickly that what felt impossible a year ago now feels achievable.

Are the voices in your films AI-generated?

No, they’re all human voice actors. We work with a couple of performers — one used to be an opera singer who’s now a DJ in San Francisco, and another is a jazz singer in the UK. Synthetic voices have become incredibly convincing, but real performers still bring something that’s hard to replicate. Eventually motion-capture performance will probably become a bigger part of this workflow too, where you record an actor’s performance and translate it onto an AI-generated character, but that part of the technology is still pretty early.

You’ve built a following of more than a million people online. Why do you think your work stands out from other AI content?

Honestly, because most AI content is what people call “slop.” The technology has a kind of default visual style, and if you just press the button and accept whatever it generates you end up with generic sci-fi imagery that looks like everything else. It actually takes a lot of work to push the AI away from that baseline and impose a specific creative vision. The other difference is storytelling. A lot of creators focus purely on visuals — impressive images with no narrative behind them. I’m much more interested in building a mythology, with recurring characters and stories that exist within a larger world.

You recently quit your job and started a studio around this work. What’s the goal?

The goal is to build out a larger universe of stories — not mass-manufactured content, but thoughtful science fiction created with a small team. What’s exciting about AI is that it might allow people to create ambitious genre storytelling without needing hundreds of millions of dollars. Historically, if you wanted to make large-scale science fiction you needed a massive studio production. Now a handful of people might be able to create something visually comparable with far fewer resources.

Have Hollywood studios started reaching out to you?

Yes, I’ve spoken with most of the studios and streamers at this point, as well as some actors and directors whose work I really admire. A lot of those conversations are simply curiosity — people trying to understand what the future of filmmaking might look like. Some actors ask questions about whether they should license their voices or likenesses for AI use. I don’t think anyone really knows the answers yet, but there’s definitely a lot of interest.

Do you ultimately want to partner with Hollywood or build this independently?

Our goal is to retain as much ownership of the intellectual property as possible. In a future where AI allows anyone to generate huge amounts of content, there will be an overwhelming amount of noise online. The things that will actually hold value are recognizable characters and worlds that audiences connect with. If we can build a small set of stories and IP that people genuinely care about, that’s where the long-term value lies.

Do you think a true AI-generated blockbuster is coming?

Probably. The technology is improving so quickly that it feels inevitable. But I’m less interested in being the first person to prove it can happen. There are already well-funded companies trying to win that race. What matters to me is doing it well and focusing on storytelling rather than simply demonstrating the technology. At the end of the day, audiences don’t care about the tool — they care about whether the story is compelling.

I do feel like someone is going to be the George Lucas of this, and wouldn’t that be interesting if it was you?

That’s what we’re telling investors, but I don’t want to jinx it. That is essentially the elevator pitch: “We can tell a totally vast and unfiltered sci-fi epic spanning all of these different worlds and ideas and storylines, and we can do it fairly reliably with a fairly small team.” Plus it’s not a huge risk to take this on. It’s not like we’re asking for the world to do this.


