From casting characters to editing movie trailers, Artificial Intelligence is slowly but surely conquering every aspect of the filmmaking industry— yet many ethical and philosophical questions remain before the final cut.
AI is no stranger to the world of cinema.
From The Terminator to Ex Machina, Hollywood’s fixation on the war between man and machine has spanned decades, spawning dozens of beloved Sci-Fi movie masterpieces.
Although Artificial Intelligence spent its formative years moonlighting on the big screen, today, the real AI action occurs behind the scenes.
Powerful, intelligent algorithms have been applied to nearly every aspect of modern filmmaking, revolutionizing the way cinematographers write stories, edit scenes, and cast characters.
Even so, storytelling is a central pillar of what makes us human.
Are we ready to surrender this activity to an algorithm?
Dawn of the Dead
“Life breaks free. Life expands to new territories. Painfully, perhaps even dangerously. But life finds a way.”
— Dr. Ian Malcolm, Jurassic Park (1993)
In a way, cinema’s most famous dead celebrities are already immortalized in their work.
Each time their movies light up the silver screen, Hollywood’s greatest once again grace us with their presence.
Still, as films age and memories fade, even the most acclaimed actors inevitably decline in cultural relevance.
With AI, however, some studios hope to reanimate their most revered entertainers, utilizing the Deep Learning discipline of AI to cast departed performers in modern movies.
The phenomenon of “Deepfakes” (a portmanteau of “Deep Learning” and “fake”) first gained widespread notoriety on the Internet in 2017. From the pits of Reddit emerged a bounty of bizarre content, from an entire library of popular films recast with Nicolas Cage to a collection of doctored PSAs from President Obama (along with a variety of other less-than-savory adult videos).
By leveraging powerful new simulation software (and the availability of increasingly sophisticated computing hardware), this group of inventive hackers figured out how to apply the AI experiments of Ph.D. academics to their favorite films and characters.
Unsurprisingly, Hollywood quickly followed suit.
When Disney first announced its intentions to produce a 2016 prequel to A New Hope, dubbed Rogue One, fans were skeptical of just how “Star Wars” the new installment could really be. After all, the actors from the original film were far too old to reprise their roles, and a few had tragically passed away.
The menacing Grand Moff Tarkin— played by the late Peter Cushing— was one such character, holding an indispensable role in the proposed plotline. After all, the story centered on the notorious theft of the Death Star battle station’s construction plans, and Cushing’s role was that of the Death Star’s commander. There was little room for negotiation.
With permission from Cushing’s estate, however, Rogue One’s producers embarked on a rather risky endeavor: bringing Grand Moff Tarkin back to life.
Using the same techniques employed by Reddit’s hordes of deepfake enthusiasts, the film’s effects team fitted Cushing’s likeness to the face of a look-alike actor, relying on AI algorithms to precisely superimpose every corner and contour.
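Conceptually, most face-swapping pipelines of this kind rest on a simple architecture: a single shared encoder that learns a pose-and-expression representation of any face, paired with a separate decoder per identity. The sketch below illustrates only that structure, under heavy simplifying assumptions: each “face” is a random 64-dimensional vector and every network is a single linear layer, so this is a toy of the training loop, not a working deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT, LR, STEPS = 64, 16, 0.01, 500

# Stand-ins for aligned face crops of two performers.
faces_a = rng.normal(size=(200, DIM))  # "actor A" (the departed star)
faces_b = rng.normal(size=(200, DIM))  # "actor B" (the look-alike)

W_enc = rng.normal(scale=0.1, size=(DIM, LATENT))    # shared encoder
W_dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for A
W_dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for B

def train_step(X, W_dec):
    """One gradient step reconstructing X through the shared encoder."""
    global W_enc
    Z = X @ W_enc            # encode
    X_hat = Z @ W_dec        # decode with this identity's decoder
    err = X_hat - X
    loss = (err ** 2).mean()
    # Backprop through the two linear maps.
    W_dec -= LR * (Z.T @ err) / len(X)
    W_enc -= LR * (X.T @ (err @ W_dec.T)) / len(X)
    return loss

# Both identities share the encoder but keep their own decoder.
for _ in range(STEPS):
    loss_a = train_step(faces_a, W_dec_a)
    loss_b = train_step(faces_b, W_dec_b)

# The "swap": encode B's footage, decode with A's decoder, so A's
# identity is rendered in B's pose and expression.
swapped = (faces_b @ W_enc) @ W_dec_a
print(swapped.shape)
```

Real systems replace these linear maps with deep convolutional networks trained on thousands of face crops, but the routing trick is the same: one performer’s footage flows through the other performer’s decoder.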
Visually, the results were stunningly photorealistic— but ethically, the studio’s choices were questioned.
Catherine Shoard of The Guardian decried the actor’s on-screen resurrection as a “digital indignity.” Nonetheless, John Knoll (one of Rogue One’s visual effects supervisors) countered that Cushing’s depiction was done for “solid and defendable story reasons,” claiming Tarkin was “a character that is very important to telling this kind of story.”
Unfortunately, this line of thinking is a rather slippery slope.
Inspired by the flexibility of a computer-generated cast, the creators of the upcoming Vietnam-era action-drama Finding Jack plan to take these questionable casting choices to entirely new levels, posthumously casting Hollywood heartthrob James Dean as their leading man.
James Dean died in a 1955 car crash at the age of 24.
In justifying the selection of an actor who has been dead for decades, director Anton Ernst claims that his crew searched “high and low” for the perfect performer to portray “some extreme complex character arcs,” and that after months of research, they concluded their best option was to digitally recreate James Dean.
Ernst claims to be “confused” regarding the controversy around his unconventional casting strategy, but his critics aren’t mincing words.
Deriding the director on Twitter, Zelda Williams (daughter of the late cinema legend Robin Williams) lamented, “publicity stunt or not, this is puppeteering the dead for their ‘clout’ alone and it sets such an awful precedent for the future of performance.”
As Deep Learning visualization techniques grow increasingly sophisticated, controversies surrounding the simulation of superstars will likely become far more common.
Could studios use this technique to extend the life of their most profitable franchises indefinitely?
Might Hollywood, too, use deepfakes to mimic living actors— perhaps even against their will?
Will access to an infinite library of dead performers tempt Hollywood to abandon unknown actors altogether?
For now, we can only speculate on how far Hollywood will push the boundaries of reanimating actors. In an era where the advancement of our technologies has quickly outpaced our ability to collectively debate their implications, many studios will find themselves at the epicenter of a heated philosophical debate— boldly challenging the definition of death itself.
Big Data Blockbusters
“A world without rules and controls, without borders or boundaries. A world where anything is possible. Where we go from there is a choice I leave to you.”
— Neo, The Matrix (1999)
In the golden age of Hollywood, moviemaking decisions were made in a smoke-filled room, guided by the speculative whims of hot-shot studio executives. Armed only with a short history of box office records and a vague sense of the cultural zeitgeist, filmmakers relied primarily on intuition, staking millions of studio dollars on little more than their own conviction and creativity.
Although this approach generated some of Hollywood’s most admired cinematic masterpieces, many films that are now considered classics still bombed at the box office, missing the mark with contemporary audiences.
Today, things have changed dramatically.
Data is now overwhelmingly abundant. Movie-goers are more than happy to broadcast their sentiments on social media, relaying real-time feedback on films from the moment they hit the big screen. Review-aggregation platforms (most notably, Rotten Tomatoes and IMDb) now serve as universally recognized industry benchmarks, distilling audience and critic reactions into a single composite score.
As a result, modern moviemakers have formulated a new approach for developing blockbuster hits, leveraging powerful Machine Learning algorithms to unlock hidden insights about potential productions.
One of the most prominent players in this space is the Los Angeles-based startup Cinelytic.
Boasting powerhouse production firms like Warner Bros. and Sony Pictures as some of their most notable clients, the company has developed a sophisticated actor analytics platform, enabling what some in the film industry have likened to “Moneyball for movies.”
The core proposition of Cinelytic’s offering is simple: measure the “economic impact” of a star’s previous on-screen appearances, then apply Machine Learning to predict how a new release might perform with different actors attached to its roles.
Which action star will maximize the engagement of Millennial audiences— Tom Cruise or Keanu Reeves?
Will a romance film make more money with Scarlett Johansson or Rachel McAdams as the leading lady?
If Robert Downey Jr. had been replaced by Ben Affleck, would Iron Man still have been a blockbuster hit?
By providing a comparative analysis of a diversity of casting choices, Cinelytic empowers filmmakers to more intelligently forecast the profitability of a project.
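Cinelytic’s actual models, features, and data are proprietary, so the sketch below is purely illustrative: it fits an ordinary least-squares regression to invented “past film” features, then compares two hypothetical casting options for the same project.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy training data: each row is a past film described by
# [lead's avg gross (in $100M), lead's social following (in 10M),
#  genre-fit score 0-1], with the film's gross as the target.
X = rng.uniform(0, 1, size=(300, 3))
true_w = np.array([2.0, 0.8, 1.5])  # hidden "ground truth" for the toy
y = X @ true_w + rng.normal(scale=0.1, size=300)

# Ordinary least squares: a stand-in for whatever proprietary ML
# pipeline a real analytics platform would actually run.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def forecast(features):
    """Predicted gross (in $100M) for a film with this lead attached."""
    return float(np.array(features) @ w)

# Hypothetical casting comparison for the same project:
option_a = forecast([0.9, 0.7, 0.6])  # established box-office draw
option_b = forecast([0.5, 0.9, 0.8])  # social-media favorite
print(f"Option A: {option_a:.2f}  Option B: {option_b:.2f}")
```

The point is not the model (real systems are far richer) but the workflow: hold the project constant, vary one casting input, and compare the forecasts.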
Still, many Cinelytic customers are careful to emphasize that these AI algorithms don’t occupy the driver’s seat when it comes to selecting stars. Rather, the analysis merely allows filmmakers to open up “a conversation about different approaches,” providing a framework for understanding how “one or two different elements around the same project could have a massive impact on the commercial performance.”
While industry incumbents may hesitate to place too much faith in data-driven production processes, digitally-native streaming companies have made AI a central pillar of their cinematic strategies.
Traditionally, movies could only be classified within a handful of categories—action, drama, comedy, and so on. With the advent of advanced Machine Learning algorithms, however, producers have gained far more granular insights into the relationships between their content collections.
One of the most powerful efforts to emerge from the push to categorize media metadata has been the Video Genome Project. Merging the methodologies of data scientists and film critics, the startup’s sprawling database (derived from over 8 million pieces of content) allows it to group films in a host of unexpected ways.
The hip-hop driven drama “Empire” and the teenage thriller “Pretty Little Liars,” for example, are two TV series that seem to share very few similarities on the surface. As VGP’s algorithms delved deeper, however, they uncovered a connection many viewers may not recognize: both shows “use music to drive the central plot.”
Ordinarily, this sort of insight would have a relatively insignificant impact on a viewer’s browsing habits— combined with a network of millions of other data points, however, the result is an exceptionally seamless content-discovery experience.
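The underlying mechanic is straightforward to sketch: if every title is described by a set of descriptive tags, a simple overlap measure will rank a music-driven drama closer to a music-driven thriller than to a show it has nothing in common with. The titles are real but all of the tags below are invented for illustration; a catalog like the Video Genome Project tracks vastly more granular attributes.

```python
# Invented tag sets: each title is described by content attributes
# rather than a single genre label.
tags = {
    "Empire":              {"hip-hop", "family-rivalry", "music-driven-plot"},
    "Pretty Little Liars": {"teen-mystery", "suburban", "music-driven-plot"},
    "Generic Cop Show":    {"procedural", "crime", "precinct"},
}

def similarity(a, b):
    """Jaccard overlap of two titles' tag sets (0 = nothing shared)."""
    return len(tags[a] & tags[b]) / len(tags[a] | tags[b])

# Despite different surface genres, the two music-driven shows score
# closer to each other than to an unrelated title.
print(similarity("Empire", "Pretty Little Liars"))
print(similarity("Empire", "Generic Cop Show"))
```

Scaled up to millions of titles and thousands of tags, exactly this kind of overlap math is what lets a recommendation engine surface non-obvious “you might also like” pairings.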
That’s why, in 2016, the Disney-backed streaming platform Hulu acquired the core technologies behind the Video Genome Project, elevating the algorithmic recommendation engine to a global audience.
Hulu isn’t alone in their advancement of Artificial Intelligence.
Netflix famously leveraged Machine Learning methods to uncover an equally obscure intersection in audience preferences, using the technique to develop their first breakout series, House of Cards.
For decades, only Hollywood’s most elite institutions have possessed the unique ability to spot talent, recognize trends, and produce sensational cinematic successes. As AI begins to expand its filmmaking footprint, however, we may soon see data-driven disruptors wield a far greater influence when it comes to how movies are made.
The Final Cut
“An idea is like a virus, resilient, highly contagious. The smallest seed of an idea can grow. It can grow to define or destroy you.”
— Cobb, Inception (2010)
The Internet ushered in a new age for video content.
The explosion of mobile media platforms, in particular, played a huge role in shifting consumer preferences. Prior to the global proliferation of smartphones, audiences essentially had two choices in terms of video content: half-hour TV episodes, or full-fledged feature-length films.
Once online video-sharing platforms opened the floodgates for user-generated content, however, creators quickly conformed to the dwindling attention spans of their audiences.
On YouTube, this transformation drove the average length of a video down to just below 12 minutes.
The (now-defunct) video platform Vine gained widespread popularity among Millennials by extrapolating this trend even further, condensing content to a length of only 6 seconds.
TikTok— the most popular successor to Vine’s video service— achieved exponential growth by lightly loosening these duration restrictions, allowing users’ posts to reach up to 15 seconds in length.
As a financially-fertile Silicon Valley continues to sprout numerous new competitors to legacy content producers, show business is beginning to suffer. Movie theater attendance has plummeted to a two-decade low, exhibiting many of the same troubling signs that preceded the collapse of the print publishing industry.
Clearly, short-form content has taken center stage.
In the midst of this digital transformation, however, one facet of the film industry is thriving: the movie trailer market.
In the pre-Internet era, there existed only a dozen or so companies in the business of slicing and dicing feature films into a collection of commercialized clips. Just two decades later, there are hundreds of these boutique studios.
As the front line in the promotional push to attract audiences, these editing artists now play a critical role in generating buzz for upcoming releases, especially online. In 2015, users viewed over 35 million hours’ worth of theatrical trailers on YouTube— a year-over-year increase of nearly 90 percent.
With such an enormous volume of content to compete with, the most successful trailer houses have attempted to pioneer a variety of new marketing methods, employing cutting-edge digital technologies.
In this new-age trailer-making toolset, AI has undoubtedly become the most powerful instrument.
In a 2018 paper, a team of researchers at 20th Century Fox described using machine learning algorithms to analyze similarities among the studio’s movie trailers, allowing producers to predict the composition of their most commercially-viable audiences.
An in-depth understanding of overlapping customer segments has always been a critical mandate for Hollywood marketing teams. Traditionally, these insights had been obtained through rather meticulous and resource-consuming means— focus groups, questionnaires, and interviews.
These researchers, however, decided to leverage Google’s TensorFlow framework, developing a new neural network architecture for Fox’s movie attendance prediction and recommendation system, affectionately referred to as Merlin.
While deep learning was not unprecedented in the field of movie trailer analytics, this team’s approach was innovative: rather than simply categorizing each trailer based on the presence of particular visual cues— fight scenes, car chases, and the occasional steamy romance— Merlin also examined how these sequences were timed.
By incorporating an awareness of chronology in addition to simple object recognition, Merlin was able to pose a number of more interesting questions:
Just how differently might a horror movie and a superhero flick sequence scenes containing violence?
Does an extended series of conversational close-ups indicate a film is aimed towards a more sophisticated audience?
What’s the best way for a comedy trailer to balance clips of slapstick humor with scenes of witty dialogue?
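The difference between order-blind and order-aware analysis is easy to demonstrate. In the toy sketch below (the shot labels and trailers are invented), two trailers contain exactly the same mix of shot types, so a simple bag-of-labels representation cannot tell them apart, while counting adjacent pairs of shots, a crude stand-in for Merlin’s temporal modeling, immediately can.

```python
from collections import Counter

# Two invented trailers with identical ingredients in different orders:
# one builds dialogue toward a violent climax, the other alternates.
slow_burn   = ["dialogue", "dialogue", "violence", "violence", "violence"]
alternating = ["violence", "dialogue", "violence", "dialogue", "violence"]

def bag(seq):
    """Order-blind representation: just count each shot type."""
    return Counter(seq)

def bigrams(seq):
    """Order-aware representation: count adjacent shot-type pairs."""
    return Counter(zip(seq, seq[1:]))

print(bag(slow_burn) == bag(alternating))        # same ingredients
print(bigrams(slow_burn) == bigrams(alternating))  # different structure
```

Merlin’s actual architecture is of course a neural network over video features, not bigram counting, but the principle it adds is the same: what a trailer shows matters less than when it shows it.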
Prior to systems like Merlin, the commercial success of many feature films rested solely on the skills of the editor charged with creating an intriguing composition of movie trailer snippets. As Artificial Intelligence continues to infiltrate the ranks of Hollywood, however, the power to enchant future filmgoers may yet become one of the most accessible tools of the trade.
Even today, it seems AI is no longer satisfied with a role in post-production— in many cases, studios are infusing machine learning directly into the trailer-editing process.
The creators of the Sci-Fi thriller Morgan— a film centered, appropriately, on an artificial being with a knack for rapid learning— were early pioneers of this AI-first approach, leveraging IBM’s Watson AI to accelerate their trailer’s scene-selection process.
As training for this assignment, Watson was first tasked with reviewing over 100 horror and thriller trailers. With each clip, Watson examined ingredients such as musical score, color grading, and even the facial expressions of actors featured, tagging human emotions from a bank of over 24 categories.
Once Watson had been exposed to some of cinema’s most thrilling trailers, the AI was challenged with applying this newfound production knowledge, selecting 10 of Morgan’s most captivating scenes to help drive a narrative for the film’s promotional premiere.
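That selection step is easy to sketch once the tagging is done. Assuming each scene already carries per-emotion scores (the scene names, tags, and weights below are all invented; IBM’s actual pipeline is far richer), ranking candidates for a thriller teaser reduces to a weighted sort.

```python
# Invented scenes, each pre-tagged with emotion scores in [0, 1],
# standing in for the output of an emotion-recognition pass.
scenes = [
    {"name": "lab reveal",  "tension": 0.9, "fear": 0.7, "joy": 0.1},
    {"name": "breakfast",   "tension": 0.1, "fear": 0.0, "joy": 0.8},
    {"name": "containment", "tension": 0.8, "fear": 0.9, "joy": 0.0},
    {"name": "interview",   "tension": 0.6, "fear": 0.4, "joy": 0.2},
]

def thriller_score(scene):
    """Weight suspenseful emotions up and lighthearted ones down."""
    return scene["tension"] + scene["fear"] - scene["joy"]

# Keep the highest-scoring scenes as trailer candidates.
top = sorted(scenes, key=thriller_score, reverse=True)[:2]
print([s["name"] for s in top])
```

The human editor still decides how the chosen scenes are ordered and cut; the algorithm’s contribution is shrinking a feature film’s worth of footage down to a short list of promising moments.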
With the assistance of an in-house filmmaker from IBM Research, Watson compressed an editing process that would normally take weeks into just 24 hours, helping the team quickly stitch together a truly ominous and unsettling preview.
Movie trailers serve as the ultimate manifestation of humanity’s susceptibility to storytelling. Expert editors can leverage a deep understanding of narrative, human emotion, and timing to craft a tempting teaser, skillfully coaxing audiences into an evening at the theater.
Soon, however, AI may grow to understand us even better than we understand ourselves— in doing so, developing persuasive powers that are both captivating and terrifying.
For decades, AI has been a poster-child for the science fiction film genre. From Stanley Kubrick to Spike Jonze, several generations of Hollywood’s most prolific filmmakers have grappled with the idea that the human intellect may one day be eclipsed by a mechanically-minded counterpart.
Suddenly, this decades-old contest has leapt straight from the silver screen and into the production studio.
In a few short years, Artificial Intelligence and Machine Learning have completely transformed the inner workings of Hollywood, enabling studios to innovate far beyond the scope of traditional filmmaking. Now, only one question remains: how will they wield this potent new power?
AI is no longer a concern for some futuristic dystopian civilization.
It’s here, now, and soon…
Coming to a theater near you.
Can you think of any other examples of how Artificial Intelligence has been applied in the world of filmmaking? Let me know in the comments below (or just tell me about your favorite sci-fi flick)!