Before David Shore watched his daughter graduate from Boston University in May, the two of them made a pit stop at a picket line. Shore, showrunner for The Good Doctor and a native of London, Ont., had been on strike with the Writers Guild of America for the past three weeks and was a member of the guild’s negotiating committee. It so happened that Warner Bros. Discovery CEO David Zaslav, one of the sources of the union’s ire, was giving the class of 2023′s commencement speech. With his daughter beside him in her cap and gown, Shore marched with his guild colleagues outside the ceremony before going inside.
His picket sign that day carried a calculated message. “I am a writer*,” he wrote, the asterisk drawing attention to a footnote: “Not written by ChatGPT.”
Two months later, 160,000 members of the SAG-AFTRA actors’ union joined 11,500 of Shore’s colleagues on the picket lines. The strikes have coincided with the broad cultural reckoning over generative AI tools such as the text writer ChatGPT and image generator DALL-E. Both unions have made AI protections a key component of their negotiations amid their effective shutdown of Hollywood: they want to safeguard their members from having their work and likenesses reused or co-opted for studios’ profit.
“To go on strike is obviously a dramatic choice,” said Shore, who also created the show House and has written and produced for NYPD Blue and Due South. “People suffer, and we did not take that responsibility lightly. The fact is, you don’t want to wait until the last minute with AI.”
Since generative AI models flooded the consumer market late last year, companies the world over have been seeking ways to cut costs with them. These models hoover up information from anywhere they can, including the open internet, to remix and create endless new content on demand. The services tend to mature rapidly as they ingest more data from users and new data sets, replicating human output with more accuracy over time. The artists and creative workers who make the world’s entertainment, from authors to screenwriters to actors to musicians, have become deeply concerned about having their work devalued or replaced.
Though jurisdictions such as the European Union are taking steps to force the makers of AI models to be transparent about their training material, to conduct risk assessments and to avoid copyright infringement, government tech regulation generally moves at a molasses-like pace. So unions are turning to their collective agreements to build in guardrails before precedents might be set for replacing their workers’ creations.
As SAG-AFTRA prepared to strike, chief negotiator Duncan Crabtree-Ireland framed AI as a crucial negotiating point, claiming in a press conference just before striking that studios “proposed that our background performers should be able to be scanned, get one day’s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation.”
The Alliance of Motion Picture and Television Producers (AMPTP), the negotiating body for major studios – ranging from Apple to Netflix to Paramount to Warner Bros. Discovery – disputed this characterization in an e-mail to The Globe. The proposal, said the group’s communications consultant Scott Rowe, “only permits a company to use the digital replica of a background actor in the motion picture for which the background actor is employed,” while any further use requires consent and payment.
It may still be possible for actors to get further protections. The Directors Guild of America, for instance, secured language in a new collective agreement with Hollywood studios in June that guaranteed “that AI is not a person and that generative AI cannot replace the duties performed by members.”
Other unions are hoping to get similar guarantees. “If we allow AI to take over without our control, consent or compensation, there will be massive job losses for performers,” said Eleanor Noble, a long-time actor and voice actor in Montreal who is president of the 28,000-member Alliance of Canadian Cinema, Television and Radio Artists (ACTRA).
The English-language union, Canada’s biggest acting guild, has already negotiated some AI protections in its agreements with video-game studios, Noble said, but plans on making AI a key focus of its negotiations when its collective agreement with film and TV studios expires next year.
The Union of British Columbia Performers, an ACTRA division with a separate collective agreement, signed on to a contentious extension through 2025 with studios in July. It secured a historic 5-per-cent raise in minimum compensation at the expense of negotiating any other terms until the extension ends. Though 78.5 per cent of members voted in favour, dozens of ACTRA members signed a letter before the vote discouraging B.C. actors from accepting the offer during the SAG-AFTRA strike.
The letter focused on solidarity, but signatories such as Darcy Michael are worried about the absence of AI protections in the agreement. “AI doesn’t have childhood trauma; AI doesn’t understand what it means to fall in love; AI will never truly be able to create art like artists can,” said Michael, a prominent TikToker, comedian and actor who appeared on CTV’s Spun Out.
Numerous members of B.C. film unions also worried that the new extension to their existing agreement might mean the province could become a haven for studios to use AI to generate background actors using data from previous body scans. (Hollywood has a long history of reusing human expression; one man’s exasperated yell, dubbed the Wilhelm Scream, has been reused in dozens of popular movies since the 1950s.)
While many purportedly AI-generated commercials and movie trailers appearing online are clearly fake and deeply unsettling – think glitchy motion, over-contorted faces, too many finger joints – generative AI models are maturing fast enough that the services built on them could soon start churning out creative works that come much closer to the real thing.
The director Deepa Mehta said in an interview with The Globe that “the spontaneous performance of an actor, the surprise of a facial movement or delivery of a line, cannot be replicated by a machine.” While the Indian-Canadian filmmaker behind such acclaimed films as Water, Fire and Bollywood Hollywood has not yet had occasion to use AI in her work, she said the very idea feels antithetical to the artistry of storytelling. “It’s not what people go to the movies to see.”
These developments are startling people across the screen sector. “I’ve had more discussions about budget in the last year than I had in the previous 15 years of running shows,” Shore said in an interview. When studios can save money, he said, “they seem to just not care about the quality of the product.”
Alex Levine, the president of the Writers Guild of Canada, said that without proper protections around the use of AI, it’d be very easy for “unscrupulous” companies and producers to generate a quick script, then pay screenwriters a pittance to polish it up. “And then in the dystopian near-future, the corporation owns the script, and pays you to be a gig worker,” said Levine, a co-executive producer on Orphan Black who’s done writing and script work for numerous Stargate series.
The threat of AI comes not just from studios, but from everyday people. The long-time video-game voice actor Ellen Dubin discovered this a few weeks ago when she found out that her voice had been stolen.
A fan of The Elder Scrolls V: Skyrim tipped her off that her voice was one of about two dozen that had been cloned using an AI-powered tool for use in a player-modified, pornographic extension of the game. She was stunned: when the Toronto-and-Los Angeles-based actor signed on to record vocals for the game more than a dozen years ago, having her voice reused for other purposes – let alone such nefarious ones – wasn’t something she’d considered.
Dubin shared evidence of the modification with The Globe, which is not sharing further details to respect her privacy. She had to contact the game-modification site, which eventually removed the infringing content. But the shock is still rippling through her brain: “How can we be protected and informed of this? We really have to get on this – yesterday.”
With a report from Barry Hertz
Rise of the machines: An abridged history of digital advances on our screens
1985 – 2002
A knight that emerged from stained glass in Young Sherlock Holmes (1985) is widely regarded as the first character in a non-animated movie to be designed entirely with computer-generated imagery, or CGI. Soon after, the Oscar-winning Toronto-born animator Richard Williams bridged the world of humans with cartoons in 1988′s Who Framed Roger Rabbit?.
But a more realistic CGI character got top billing in Casper (1995), in which the friendly computer-animated ghost and his uncles haunted a cast of mostly real people. Later that year, Pixar Animation Studios showed the world what a fully CGI movie could look like with Toy Story, and studios started pouring resources into digital animation and augmentation.
Star Wars: Episode I – The Phantom Menace (1999) brought Ahmed Best’s much-maligned character of Jar Jar Binks to life with CGI. But its sequel, Episode II – Attack of the Clones (2002), marked an important point in the screen industry’s computer-oriented shift as the first major blockbuster shot with digital cameras instead of film.
Filmmaker George Lucas’s eager embrace of CGI, however, was clumsy: it transformed Frank Oz’s wise Jedi Master Yoda from a puppet into a digital version that was somehow less than the sum of its parts. Years later, Obi-Wan Kenobi actor Ewan McGregor told Variety that the computer-animated version of Yoda was “not nearly as endearing” as the puppet predecessor.
2006 – July 2022
CGI characters became more realistic throughout the 2000s as processing power skyrocketed, and computers became an intrinsic part of the animation process. Humans were still involved, both on the motion-capture side and in the actual animation on top of what was captured.
During the same period, artificial-intelligence research began to blow up as pioneers including Geoffrey Hinton had breakthroughs in concepts such as neural networks that allowed AI systems to process massive amounts of information. People and companies began harnessing this technology to take on laborious tasks. Toronto’s Monsters Aliens Robots Zombies Inc., for instance, began using AI to save visual-effects artists time in removing tracking marks from actors wearing motion-capture gear in each frame of movies and shows such as WandaVision (2021).
At the same time, filmmakers began using visual effects to augment actors for new kinds of storytelling purposes. X-Men: The Last Stand, released in 2006, was the first major film to “digitally de-age” its characters, making Patrick Stewart’s Professor Charles Xavier and Ian McKellen’s future villain Magneto appear decades younger. Once the X-Men opened the floodgates, digital de-aging became all the rage. But humans were doing the work with real actors on the screen who consented to being there. Technology would soon emerge that threatened to remove humans – or at least their consent – from the equation.
Within a few weeks of each other in 2022, the London-via-Munich startup Stability AI launched the Stable Diffusion deep-learning model – letting users generate images from text commands – and the California firm OpenAI removed the waitlist for its own image-creating service DALL-E 2 (named with a hat-tip to Salvador Dalí and the Pixar robot WALL-E).
By scraping millions, if not billions, of pairs of images and descriptions from the internet and elsewhere, their models recombined images, ideas, and styles – like, say, a Picasso-style painting of Garfield eating Arby’s or a lime-green laptop where the keys were made of squids. Human talent was no longer needed to execute wild graphic ideas – the AI models could do it for you, with no compensation or meaningful credit going back to the creators of the source material.
After a decade of increasingly sophisticated “chatbots” – most commonly known as those little text boxes that pop up on companies’ websites offering customer-service help – OpenAI also made its long-gestating ChatGPT service public in November. You could ask it to write just about anything, and it would digest a bunch of information it’d gathered from across the internet and spit it back to you in surprisingly clear writing.
ChatGPT and similar services such as Google’s Bard weren’t perfect – many responses were bereft of fact, tact and, for the source materials’ creators, payback – but by the end of 2022, millions of people were churning out essays, scripts, conversations and more with these services. Almost immediately, a chorus of voices in creative fields began to shout variations of the same refrain: what will all of this mean for the future of their work?
December 2022 – June 2023
As the Writers Guild of America began negotiations with Hollywood studios in the spring, AI was high on the union’s list of concerns. When the studios rejected the writers’ proposals and sent them into strike mode, Phillip Iscove, the creator of the show Sleepy Hollow, tweeted details from the union’s rejected AI proposal. “AI can’t write or rewrite literary material,” it said, and material in their collective agreement “can’t be used to train AI.” But the writers said they would allow for AI services to be used as a tool in the writing process, so long as unionized writers were involved and credited.
Meanwhile, the Directors Guild of America did secure some rights around AI in its June agreement with studios.
July 2023 – Now
By the time the actors in the American SAG-AFTRA union approached their strike deadline in July, its leaders were making clear that having their work replaced by generative AI was at the forefront of their concerns. Some actors began claiming publicly that they’d been asked to have their bodies scanned during film shoots.
In an impassioned speech after her union announced midmonth that it would strike, The Nanny star Fran Drescher warned that “we are all going to be in jeopardy of being replaced by machines.”
The studios claimed to U.S. media in response that their proposal “only permits a company to use the digital replica of a background actor in the motion picture for which the background actor is employed. Any other use requires the background actor’s consent and bargaining for the use, subject to a minimum payment.”
Yet AI continues to be a growing concern, including in jurisdictions such as B.C., where actors and directors worry that the contract extension they just voted in without any AI protections could turn the province into a haven for AI-generated background actors, with the loss of support roles cascading across the industry.