No one confused early Sora demonstrations with art, but what startled filmmakers is that these semi-professional, quasi-lifelike images introduced a doomsday scenario: Studios will use this rapidly evolving tech to replace them.

It’s a pervasive fear — with the notable exception of those filmmakers who are early adopters of generative artificial intelligence tools.

Over the last three weeks, I spoke to dozens of film and television creatives who use AI-based software programs like Midjourney, Adobe Firefly, Runway, and Stable Diffusion. They’re excited by the tools and their applications — but they also see their massive and inherent limitations.

“It’s a fraught time because the messaging that’s out there is not being led by creators,” said producer Diana Williams. A former Lucasfilm executive who is now the CEO and co-founder of Kinetic Energy Entertainment, she spoke at the 2024 SXSW panel “Visual (R)evolution: How AI is Impacting Creative Industries.” “It’s really being led by the business people and by publicly owned companies.”

Some filmmakers who are most familiar with the power of AI image generation view the technology as yet another disruption that alters the workflow of how humans create motion pictures.

“Every five years to almost every decade, something new comes in,” said Julien Brami, a creative director and VFX supervisor at Zoic Studios, who spoke on the panel with Williams. He cited comparable disruptions to motion-picture storytelling: the introduction of digital technology, computers, CG character creation, the shift from 2-D to VR, and most recently virtual production.

Repeating a familiar sentiment, Brami said the common thread with each tech disruption is that filmmakers adopt these tools to tell stories. “I started understanding [with AI] that a computer can help me create way faster, iterate faster, and get there faster,” he said.

Speed. That’s what you hear, over and over again, as the real benefit of Gen AI imaging. By eliminating the friction of time-consuming tasks and failed translations between collaborators, it lets an idea move swiftly from a creative’s head to something that others can see.

However, few see a viable path for Gen AI video to make its way into the movies we watch. In my notes from the last few weeks, “I would never use AI in my final project” appears 15 times, drawn from conversations with animators, visual effects artists, post-production and virtual production supervisors, and filmmakers who use generative AI tools. These exchanges are usually on background, as using AI is currently the equivalent of showing up on set in a MAGA hat.

Logistical and workflow obstacles make the incorporation of Gen AI video into commercial movies seem insurmountable. First among these is copyright: if something is AI-generated, it can’t be copyrighted.

shy kids’ ‘Air Head,’ made with Sora

Gen AI also lacks the ability to create and consistently repeat an image from scene to scene, especially when those images include faces. It’s worth noting that the most recent crop of Sora shorts included shy kids’ “Air Head,” which creates a story and has a reasonably consistent character that moves through space — but dodges the face problem altogether by giving the main character the head of a yellow balloon.

There’s also a complete lack of faith in the business plans for all of this VC-funded software. No one wants to rely on products whose pricing models, if not the products themselves, could pivot. That’s not a question anyone wants to field during the 18-24 months it takes to make a major film or TV show.

Even if these obstacles could be overcome, a more fundamental divide remains: the uncanny valley. That’s where, these artists and artisans believe, AI images will always fall short.

“When [Gen AI] exploded last year, people were showing me art and I kept saying, ‘I don’t see a soul. The eyes are dead,’” said Williams. “People throw out that phrase so often, ‘uncanny valley,’ and I defy people to actually define it. We all use it because it’s indefinable: We know it when we see it. We can’t pinpoint, we can’t say, ‘Do X, Y, Z equals human soul.’ It just doesn’t work. That’s why I do believe the final output — what we want to put into the world for an audience — is always going to have the physical human touch.”

Movies are about emotion, and I couldn’t find anyone who sees a future in which a prompt creates images that connect a viewer to a character’s emotions. It’s another reason why Gen AI is seen as a powerful tool for the beginning of the creative process, not the final product.

For Davide Bianca, a co-founder of content studio Shifting Tides, frustration with the pitching process motivated him to first try AI.

“Over the course of the past 20 years, [I’ve seen] great projects and concepts and IP die a miserable death on the floor of an executive team meeting, just because the person across the room doesn’t really have the creative capacity to visualize what it is,” he said.

Bianca shared a teaser for a big-budget sci-fi world he envisions. He made this “Zero Shot” sizzle by himself under a self-imposed 40-hour deadline, utilizing pencil-and-pen storyboards, Gen AI tools like Runway’s Gen-2 and Midjourney 5.2, and more traditional software like Photoshop, DaVinci Resolve, and After Effects.

“I really wanted to see how far I could push the blockbuster-type look and feel, so by giving myself a self-imposed time frame – I had like 30, 40, 50 hours from ideation to final sizzle – I wanted to see if I could convey that sense of what I had in my head,” Bianca said.

However, just as a look book for an upcoming project is a long way from the finished film, he emphasized that the one-minute teaser is meant to be a look-and-feel exercise, not a movie. “I think [it helps] having something tangible like this to show in a meeting, to see if there’s enough interest to greenlight a project,” he said.

Without Gen AI, the money, hours, and collaborators Bianca would need to create this pitch would have made it impossible. Beyond pitching, Gen AI also gives Bianca and Brami the ability to test, iterate, and quickly get on the same page with funders as well as fellow filmmakers.

VFX artists I spoke with loved talking about how using AI tools will let them avoid spending two months building something, only to discover it’s not what the director wanted. And then, according to Williams, once everyone is on the same page: “That’s when we get the team together and we apply our filmmaking craft to doing it for real.”
