This weekend at the Camden International Film Festival, the Archival Producers Alliance (APA) released the first-ever industry-wide guidelines for generative artificial intelligence use in documentary film. GenAI is a thorny topic throughout the film and TV world, but it poses specific and unique challenges to nonfiction, where primary sources, historical documentation, and archival imagery are fundamental building blocks.
The guidelines address these challenges directly in their introduction: “The APA believes that the survival of the documentary industry — in all of its power and promise to help us understand and interpret our history and our present — is contingent on maintaining a truthful and transparent relationship with viewers.”
For this reason, the guidelines are focused explicitly on using GenAI to create new footage or alter existing primary source materials. The APA distinguishes between GenAI and other machine learning tools, or AI that is being incorporated into production workflows for ideation, transcription, and logging footage. The guidelines also purposefully steer clear of AI technology used for “minor alteration” of existing audio-visual assets such as retouching, restoration, or upres-ing.
The APA represents over 400 archival producers. The writing and rewriting of the guidelines over the last few months were based on conversations and feedback from members, as well as other key stakeholders in the field. The document boasts an impressive list of endorsers, including the IDA (International Documentary Association), POV, Firelight Media, Catapult Film Fund, Impact Partners, BIPOC Film & TV (Canada), A-DOC, Documentary Producers Alliance (DPA), Alliance of Documentary Editors (ADE), Kartemquin, DC/Dox, the D-Word, and WITNESS, as well as individual filmmakers including Peter Nicks, Ken Burns, Megan Chao, and Kristine Samuelson.
The guidelines stop short of making hard-and-fast rules on several controversial topics, often instead supplying an approach or ethical thought process for filmmakers to consider when deciding whether and how to use GenAI.
Ahead of the APA’s unveiling of the new guidelines this weekend at Camden, IndieWire talked to APA co-directors Jennifer Petrucelli and Rachel Antell, who made clear the document was designed to serve as an essential starting point for how documentarians wrestle with this rapidly evolving technology. They believe GenAI has the potential to serve as a powerful creative tool, but is also very much an existential threat to the core principles of the documentary community.
Here are seven big takeaways from the nine-page document, which you can read here, and our interview with Petrucelli and Antell.
What’s Changed Since Spring?
As IndieWire reported early this year, an initial draft of the guidelines had been circulating for months, as the APA sought input and feedback as it moved toward the first officially published draft. So, what changed over the last six months?
“In April, we had talked about consent of individuals whose likeness or voice might be generated,” said Petrucelli. “We did more thinking and got more feedback around that and changed it a bit because there are going to be instances where you might not want to get consent if you’re making a film on a political figure, and you’re interrogating their record. They might not be open to giving you consent and you don’t want to be tying the hands of people to do this sort of more investigative [work], so that language got changed.”
In addition to deemphasizing and shifting language around consent, Antell said there was a call for filmmakers to take into consideration the “potential cultural sensitivities of using synthetic media,” and to take “extra care” in instances where subjects “can’t give consent.”
“When getting archival media, [filmmakers] are going to be so intentional in making sure that what they’re getting is exactly what it is they’re aiming at, and that it’s historically accurate,” said Antell, pointing to the investment of money and time it takes to acquire archival photos and footage as a driving force for this intentionality. “Generative AI is so inexpensive and so quick to use, it’s easier to not be as intentional. So that’s one of the main things we also encourage in the guidelines is to think of this the way you would any other piece of media, as something that has to be really intentionally created.”
Antell and Petrucelli both indicated that mounting concern over what is being used to train GenAI models, and the inherent biases of the publicly available material those models are based on (the internet), threatens both the intentionality and the historical accuracy of GenAI material, and therefore needed to be incorporated into the guidelines in recent months.
The Anthony Bourdain “Roadrunner” Controversy
When a filmmaker uses GenAI to recreate audio or video of a subject, the thorny issue of getting the subject’s consent recalls the controversy surrounding Morgan Neville’s posthumous documentary “Roadrunner,” about author/chef/reality TV adventurer Anthony Bourdain, whose voice Neville recreated to read from the deceased author’s writing. It’s a case where the guidelines provide a thought process for filmmakers to carefully consider, rather than a binary yes/no on whether it is OK. Petrucelli laid out her own thinking in applying the call for “extra diligence” to the case.
“When you’re creating someone’s voice, and it does seem again more benign if they’re reading words that they actually wrote, so you’re not putting words in their mouth per se, but you are making assumptions about the way that they’re delivering that, how they spoke, what words they were emphasizing — if they were saying something in jest, or if they were saying something very seriously, what the emotions are behind it,” said Petrucelli. “All of that is getting put into a voice, and when we hear someone say something, we have very different interpretations depending on how they say it. There’s no way a machine knows how that human being would have said it.”
Petrucelli distinguishes between this use of GenAI and hiring an actor to read from a historical figure’s writing: “If you have an actor read someone else’s diaries, you know it’s an actor, you can say, ‘Oh, well, they’re making some assumptions as to how that person originally meant those words,’” said Petrucelli. “But when we hear an actual human being saying something in their own voice [or what we assume is their own voice], there’s no way to tease that out.”
It’s here that the APA makes the distinction between leaving room for artistic interpretation and avoiding “muddying the historical record” by creating synthetic archival material that could be interpreted as actual historical documentation.
Outward Transparency: Clearly Labeling GenAI
One key principle of the guidelines is the call for “Outward Transparency,” which is directly related to the viewer’s understanding of the origin of GenAI images and audio. “Transparency is always the bottom line, that the audience should never be confused as to what is generative AI and what is not,” said Antell.
The guidelines call for clear labeling of GenAI images, footage, audio, and recreations. The guidelines suggest the use of lower third titles, bugs and overlays, watermarking, and other forms of “visual vocabulary that alerts the audience to GenAI use, such as a unique frame around the material, change of aspect ratio, colorization,” or even a narrator acknowledging the use of AI in creating images.
“We wrote the guidelines in a broad way because our goal is not to hamstring creativity, we don’t want to get into prescribing how people do it,” said Petrucelli. “Whether that needs to be a watermark in a certain film, or that same goal can be accomplished with some other treatment or cinematic language. What we feel is key is the average viewer understands what is synthetic.”
Transparency Concerns Go Beyond Documentary Viewers
The guidelines warn that outward transparency and the labeling of GenAI go beyond the understanding of the viewer who sits down to watch a documentary, and caution filmmakers to consider how GenAI footage from their documentaries can have a life beyond their project. The APA guidelines read, “Synthetic material that is indistinguishable from primary sources risks being passed along — on the internet, in educational materials, in other films — and is in danger of forever muddying the historical record.”
In other words, a documentary can become a primary source itself. Footage from a film about a historical subject, putting forward primary sources, is often repurposed under the legal parameters of Fair Use by students, YouTubers, educators, and other content creators.
“If you’ve seen something in a documentary, you make assumptions about its veracity, and things get pulled from documentaries and posted, and they make their way into the historical record,” said Petrucelli. “We are concerned about not muddying the historical record and not creating synthetic artifacts that are out in the world as if they are real ones. We want to make sure there’s tagging of material, [as] the authentication of media is not super clear at the moment.”
Antell points to filmmakers making disclosures about the origin of footage at the top and end of a movie or series, which could provide enough clarity for viewers of that program; but synthetic material pulled from that documentary would be separated from that context, and could therefore be taken as real archival material. This presents a difficult creative-versus-ethical decision for documentary filmmakers using GenAI, as watermarking, or other forms of visual labels, on all synthetic material is one of the only ways to safeguard against this scenario.
“I think filmmakers, for obvious reasons, are a little less enthusiastic about [using] watermarks on the actual material because it does take you out of the story to some degree and changes your experience of the scene,” said Antell. “But it is a real concern for us what happens to that synthetic media — that’s created and is contained within a documentary — in a subsequent life or iteration outside the documentary.”
Inward Transparency: The GenAI Legal Minefield
The APA guidelines are focused on the ethical issues, rather than the legal issues surrounding the use of GenAI, except in one important and all-encompassing way: The call for what is referred to as “Inward Transparency.”
The guidelines call for everyone involved with the documentary, at every phase of production, especially post-production, to be made explicitly aware of the origin of all synthetic material and how it was created. This includes the production’s lawyers and insurers. In another section of the document, the guidelines call for filmmakers to seek legal advice on all uses of GenAI in creating synthetic material that appears in a film or series.
“We certainly don’t want to be prescriptive, but we think that the lawyers should be, which is why we want to make sure that this information gets to them,” said Petrucelli.
The legal landscape surrounding GenAI hasn’t been firmly established, and it will likely evolve quickly and continually in the coming months and years, in ways the APA’s guidelines would have difficulty tracking. According to Antell, the call for inward transparency, and the need for filmmakers to seek legal input, is designed to protect filmmakers.
“You may inadvertently be bringing in something that is copyrighted, it’s very hard to really trace what’s being scraped [to train GenAI models], and if it is too close to something that actually exists,” said Antell. “And the laws are all so unclear, and they’re different between different states at this point. The laws are different in Tennessee than they are in Colorado.”
Both Antell and Petrucelli believe the minefield of legal and copyright issues surrounding GenAI will likely limit the use of synthetic images in documentaries, limitations they believe filmmakers need to know beforehand.
Financial Realities: Archival Is Expensive
The APA guidelines are clear that in most cases nonfiction filmmakers are best served by seeking original and primary sources over GenAI. But Petrucelli and Antell are also realistic, and they know one of the major driving forces pulling filmmakers toward GenAI is the financial realities of their field.
“I would say that filmmakers if you can avoid it, there are other solutions that might be better, but we’re always being driven by the money. Everyone is so strapped for cash and the entire industry right now is suffering from it, especially independents,” said Antell. “One of the things as an organization we also want to be advocating for, separate from [GenAI guidelines], is making archival materials accessible and affordable in a way that they’re not right now. Because I know that’s driving a lot of the industry towards generative, which is so inexpensive and so quick to use.”
“Extra Diligent”: Leaving Wiggle Room for Creativity
The APA’s guidelines fall short of drawing hard lines between what is and is not an ethical use of GenAI. When addressing using GenAI to “make a real person say or do something they did not say or do” or “generate a realistic-seeming historical scene that did not actually occur,” the guidelines leave wiggle room: “We encourage filmmakers to be extra diligent when GenAI is used to do the following — as the risk for sowing confusion is high.”
While the guidelines lay out a thought process for being “extra diligent,” IndieWire asked the co-directors why they left the door open on such issues that would seem to be the antithesis of nonfiction storytelling. Their answer was simple: The APA did not want to limit the creative use of GenAI, as they had seen firsthand examples of utilizing the technology in ethical ways.
“‘The Andy Warhol Diaries’ used GenAI to recreate Andy Warhol’s voice to read his diaries with the consent of the estate and with full transparency with the audience,” said Antell. “And it was very well integrated. It was very motivated because Andy talks a lot about wanting to be like a machine, and so it’s a creative choice that allowed them to tell a story that they couldn’t have told another way. And I don’t think there was anything deceptive in how they did it.”
Antell said it was an ethical and creative example of using GenAI that the APA could never have anticipated, and it impacted the organization’s decision to leave the door open a crack on such matters. Another recent example that changed the co-directors’ thinking was “Another Body,” which tackles the issue of deepfake pornography.
“It is a phenomenal documentary about young women whose lives were really ruined by deepfake pornography,” said Antell. “The women agreed to be in this documentary on the condition that they were anonymized. Now they could have done that just by blurring out their faces, but instead they, again in a very intentional way, used the same deepfake technology that put them into these pornographies to anonymize them and they used actors’ faces, obviously, with their consent, and they created an imitation face. The advantage to that over blurring out someone’s face is they were able to capture all of the actual emotions and expressions of that person, and that’s a very different thing for the audience to be able to connect with a human being. They’re upfront [about the use of GenAI], they integrated it into the story, it’s motivated, and it’s form meeting content.”
At the end of the day, the APA decided that creating binary rules about when never to use GenAI to recreate real events and people would limit the creative possibilities of the technology. But the hope is that the organization’s document can start a conversation, and chart a path for the field to collectively steer clear of the innumerable ways GenAI could forever change the art form for the worse.