Artificial Intelligence is completely reinventing media and marketing. The results are much weirder than expected.

Robert Tercek
Published in ID in the IoT
Jun 4, 2019


When artificial intelligence is fully operational, it will transform the media and marketing industries. In particular, I believe that synthetic personalities powered by AI will change the way we learn about new products and how to use them.

In my previous article, I showed how the collapse of broadcast TV exposed a huge weakness in the advertising industry. And I pointed to the nascent field known as Influencer Media, and especially Virtual Influencers, as a harbinger of the future of brand engagement.

But that’s just the beginning.

What happens when artificial intelligence is available to any app, any advertising campaign, and any brand marketer? How will that change things?

Here’s my answer: the media landscape will be transformed so deeply that it will be completely unrecognizable. All the leftover junk from the 20th century will be kaputt, including one-size-fits-all video programs for mass audiences, appointment viewing of a TV schedule and the very concept of TV channels, and the outdated intrusion of interruption advertising.

Personalized programming and fully-responsive adbots will be the new norm.

Two weeks ago, I gave a speech to 4000 executives and managers at one of the biggest telecommunications companies in the world. They asked me to speak about the future of media and technology.

I decided to focus my remarks on artificial intelligence.

This was no arbitrary decision on my part. The single most important thing to understand about the new 5G networks is that they are software-defined. 5G involves a massive investment in new hardware, of course, but what’s special about 5G is the software. The entire 5G network will be designed, planned, monitored, managed and optimized by artificial intelligence. A software-defined network governed by AI will be flexible and responsive in ways that single-purpose hardware never can be.

5G will be the biggest deployment of distributed AI yet.

Today, the mobile operators are not advertising this fact widely. They are not eager to telegraph their advantage to competitors.

Soon, however, they will need to speak openly about this new capability if they plan to open up their 5G networks to developers and ecosystem partners who will build fabulously inventive, unanticipated, mind-blowing applications on top of the network.

They will, that is, if the mobile network operators are smart enough to invite the developers, encourage them, support them, provide stable APIs and enable them to make a huge amount of money. Historically, this has not been a great strength of mobile network operators. They are famously clumsy to the point of incompetence when it comes to dealing with partners, especially when managing their developer ecosystems, so it remains far from clear that they will execute this tactic successfully.

But I am confident that AI as a platform will happen eventually because of competition. Even if the mobile operators remain as closed and controlling as they have been in the past, they will face ferocious competition from the big cloud computing platforms that are already racing to deploy AI as a service. Google, Microsoft and Amazon know a lot about managing developer networks, and they will either partner with or supplant mobile operators by making powerful AI available to developers right at the edge of the network where it is needed.

One way or another, it seems inevitable that artificial intelligence as an on-demand service will be widely available soon.

Are media and advertising companies ready to use it?

Usually, when we think about the future of media, we don’t consider AI. As I’ve noted in the previous articles in this series, the folks in the media and advertising business are intensely preoccupied with streaming video platforms, including on-demand services like Netflix and (considerably less so) with live streaming systems like Twitch.

In their defense, this myopic focus makes sense because streaming video has permanently altered consumer behavior, but at this point it is also kind of an obvious trend: there are more than 200 OTT streaming video services available in the United States today, and thousands outside the US.

What comes next after OTT?

In my speech, I aimed to push past the obvious trends like streaming video to explore something that is still emerging and evolving. To me, right now, that’s AI for media.

Today the question “How will artificial intelligence influence the future of the media and entertainment industry?” is valid because we now have some early answers and sufficient information to speculate about its future trajectory.

But first we need to surmount a barrier of skepticism. Until recently, inside the media industry, there was deep resistance to the notion that AI would play any role whatsoever. Many creative professionals remain convinced of that today.

The prevailing notion is “Robots can’t do creative work.”

For 30 years, I’ve worked in the media and entertainment industry. I live in the heart of Hollywood, amid the greatest concentration of professional creative talent on the planet. Most of the folks I know in the media business would reflexively dismiss the notion that AI can perform any creative function better than a human artist. And nearly all of them would insist that an AI will never steal their job in particular.

But this is the wrong way to frame the question.

Fears about robots and AI “stealing jobs” from humans are overblown. Magazines flog this sensationalistic claptrap relentlessly because scary stories about rogue AIs always sell. Every magazine has succumbed to the temptation of publishing a lurid cover photo featuring a humanoid robot with glowing red eyes and a doomsday headline about the AI apocalypse. This stuff grips the public imagination, but it is poppycock.

While there is no doubt that software automation will displace some workers, there is zero evidence from 300 years of industrial activity to suggest that all jobs will be taken by machines.

The Luddite argument that machines steal jobs is kinda-sorta true in the narrowest view, because some workers are displaced every time a new system of automation is deployed, but it is wildly inaccurate in the macroeconomic perspective.

Here are the facts. Automation increases productivity as it frees human workers to attend to higher-value problems, typically at higher wages. Increased productivity is the only non-inflationary route to higher real wages for workers. Automation tends to expand the economy, which generates entirely new kinds of jobs. (I made this argument in detail in my book Vaporized, and you can read it here, here, here and here.)

A more constructive way to think about AI, robotics and other forms of applied software automation is: how will this technology enhance human labor? How will it give human workers superpowers? How will it free human workers from drudgery and rote tasks so that we can turn our attention to the more interesting and vexing complex creative tasks?

How will AI help creative talent produce ever more amazing entertainment experiences?

Let’s consider entertainment and media from this perspective. What is the most constructive way to envision artificial intelligence applied to media and marketing?

I think there are three primary questions:

  1. Can AI improve workflow in entertainment production and distribution? If the answer is yes, and AI saves time, money and human effort, then it will surely be adopted on the broadest scale.
  2. Are consumer audiences willing to engage with automated systems for entertainment and information? If yes, then this should dispel the lingering notion that humans won’t pay attention to content produced by robots.
  3. Are consumer audiences willing to pay for entertainment and information presented by robots and AI? If yes, then there’s evidence of a business model.

In my speech at the telco, I addressed all three questions with examples and evidence from early deployments that are taking place today, not in the future.

This evidence overwhelmingly supports an affirmative response to all three questions. Moreover, it points to a future of media that is more bizarre and far more exciting than anything I have ever previously designed and launched in my 30 year career.

Here’s what I told the telco executives:

1. Can AI improve the workflow in entertainment production and distribution?

Yes. There are examples of artificial intelligence in use today at nearly every stage of pre-production and post-production as well as distribution.

Consider the following examples:

Algorithms and AI in Programming and Presentation.

Streaming video pioneer Netflix famously relies upon big data analysis to determine whether or not to greenlight a new film or series. Ever since the success of House of Cards in 2013, Hollywood pundits have speculated breathlessly about exactly how much Netflix relies upon algorithms to make programming and acquisitions decisions. Chief content officer Ted Sarandos downplays the significance of data-driven programming, claiming “It’s 70 percent gut and 30 percent data.” Outsiders suspect he is deliberately underselling in order to put rivals off the scent.

Netflix has been at the forefront of algorithmically-powered recommendations for more than a decade. In the half-decade since House of Cards, the OTT giant has developed an increasingly complex stack of machine learning algorithms, including open source tools, to improve system recommendations.

Netflix uses data analysis to predict audience behavior rather than to estimate the performance of a particular program. In this sense, Netflix is in the content personalization business. The company claims that 75% of viewer activity is driven by algorithmic content recommendations. But even that figure understates just how pervasive algorithmic recommendations are within the Netflix service. Sarandos frequently points out that each viewer’s experience of Netflix is unique. As the company says, there are more than 100 million versions of the service. Even the individual key art that appears on your TV screen to promote a single episode of a show will vary from the art that other subscribers see.
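Netflix has publicly described treating this artwork selection as a contextual bandit problem: show a candidate image, observe whether the viewer plays the title, and learn from the feedback. Here is a toy epsilon-greedy sketch of that idea in Python; the segments, artwork names and numbers are invented for illustration and are not Netflix’s actual system.

```python
import random
from collections import defaultdict

class ArtworkSelector:
    """Toy epsilon-greedy bandit for choosing which key art to show.

    Tracks an estimated click-through rate per (viewer_segment, artwork)
    pair, mostly exploiting the best-known artwork while occasionally
    exploring alternatives.
    """

    def __init__(self, artworks, epsilon=0.1):
        self.artworks = artworks
        self.epsilon = epsilon
        self.impressions = defaultdict(int)  # (segment, art) -> times shown
        self.clicks = defaultdict(int)       # (segment, art) -> times played

    def choose(self, segment):
        if random.random() < self.epsilon:   # explore a random option
            return random.choice(self.artworks)

        def ctr(art):                        # exploit: best estimated CTR
            shown = self.impressions[(segment, art)]
            return self.clicks[(segment, art)] / shown if shown else 0.0

        return max(self.artworks, key=ctr)

    def record(self, segment, art, clicked):
        self.impressions[(segment, art)] += 1
        if clicked:
            self.clicks[(segment, art)] += 1

# Hypothetical usage: different segments converge on different artwork.
selector = ArtworkSelector(["romance_still", "action_still", "cast_photo"])
art = selector.choose(segment="comedy_fans")
selector.record("comedy_fans", art, clicked=True)
```

The point is the feedback loop: every impression either reinforces the current choice or nudges the system toward an alternative, which is why no two subscribers need to see the same artwork.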

Today, the most successful startup media firms begin with Netflix-style algorithmic programming built into their software architecture from the outset. AI is in their DNA.

Enter TikTok, owned by the Chinese firm ByteDance. You probably don’t use this app, and you may not have even heard of it, but your ten-year-old kids are using TikTok to watch music videos and make their own funny videos.

Part of TikTok’s extraordinarily fast growth can be attributed to machine learning algorithms that ensure each viewer sees a completely unique series of clips. It is a software-defined media service, programmed entirely by AI. It’s a glimpse of the future of media.

Other new firms apply machine learning to make predictions about whether or not a film will be profitable based on the story or script. Startup ventures like Cinelytic, ScriptBook and Vault offer algorithmically-generated insights to major motion picture companies. ScriptBook claims to be twice as accurate as the major studios in predicting a particular film’s eventual box office success.

Other entertainment giants are racing to catch up with Netflix. 21st Century Fox partnered with Google, using the TensorFlow framework to develop a machine learning system intended to predict audience preferences and film performance before greenlighting a production.

From Prediction to Visualization and Content Generation

Some media companies are moving towards algorithmic generation of content. The Walt Disney Company uses artificial intelligence software to generate rapid prototypes: the Disney AI can interpret natural language descriptions of scenes and settings in movie screenplays in order to generate storyboards and rough animation sequences.

Some ambitious startup firms are venturing even further: they seek to generate the entire entertainment experience algorithmically. California startup rctstudio is determined to leverage AI to generate open-ended immersive story worlds in which the audience can roam freely and interact with non-player characters (artificial personalities powered by AI). Think of rct’s work as a blend between online video games and movie worlds.

AI in Post-Production

Hollywood tech firms now use AI to improve the accuracy of human characters in computer-generated imagery and digital special effects, to insert and remove objects from a scene, and to optimize the performance of a particular movie trailer.

LA-based VideoGorillas has developed a suite of tools called “Bigfoot” which employ generative adversarial networks (GANs) to upscale standard-resolution video to 4K quality. Other tools in the Bigfoot suite already automate the drudgery of routine editorial tasks like identifying and removing problematic shots and conforming source footage to existing master versions of shows.
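To make the upscaling idea concrete, here is a minimal sketch of the generator half of such a system, written in PyTorch with invented layer sizes; it is not VideoGorillas’ actual code. It uses a sub-pixel (PixelShuffle) convolution to double a frame’s resolution.

```python
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Minimal 2x super-resolution generator (illustrative only).

    Real systems like Bigfoot pair a generator along these lines with a
    discriminator network (the adversarial half of the GAN) and train on
    pairs of low- and high-resolution frames.
    """

    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a larger image
        )

    def forward(self, x):
        return self.body(x)

frame = torch.rand(1, 3, 480, 640)   # one standard-definition frame
upscaled = TinyUpscaler()(frame)     # -> shape (1, 3, 960, 1280)
print(upscaled.shape)
```

During training, the discriminator judges whether the generator’s output looks like genuine high-resolution footage, pushing the generator toward sharper, more plausible detail than a simple pixel-wise loss would produce.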

None of this technology is “stealing jobs” from film editors or special effects crews. The AI is a tool to free these pricey specialists from the drudgery of scrolling through miles of footage to find a shot or wasting hours manually tweaking a special effect. The AI saves time, money and effort.

You don’t need to be a professional to use these tools. On Reddit, video game modders are using AI to upscale old video games for display on higher resolution monitors.

AI in Distribution

Machine learning algorithms are excellent at predicting whether a particular person will engage with a video clip or not. Netflix is at the forefront of applied artificial intelligence in every stage of video delivery.

But that’s not all. AI also helps to govern the quality of service to each subscriber. Netflix uses artificial intelligence to monitor bandwidth in the network and optimize a particular household’s video and audio streams based on available bandwidth and network congestion.
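Netflix does not publish its streaming algorithms, but the core logic of adaptive bitrate selection is easy to sketch. In this purely illustrative Python snippet, the encoding ladder, safety margin and buffer threshold are all hypothetical numbers:

```python
# Hypothetical encoding ladder, in kilobits per second.
LADDER_KBPS = [235, 750, 1750, 3000, 5800, 16000]

def pick_bitrate(measured_kbps, buffer_seconds, margin=0.8):
    """Pick the highest ladder rung that fits the measured throughput,
    with a safety margin, stepping down hard when the buffer runs low."""
    budget = measured_kbps * margin
    if buffer_seconds < 5:        # nearly stalled: be extra conservative
        budget *= 0.5
    eligible = [rung for rung in LADDER_KBPS if rung <= budget]
    return eligible[-1] if eligible else LADDER_KBPS[0]

print(pick_bitrate(measured_kbps=4200, buffer_seconds=30))  # -> 3000
print(pick_bitrate(measured_kbps=4200, buffer_seconds=2))   # -> 750
```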

Netflix even uses artificial intelligence to monitor whether subscribers share their passwords.

AI can also aid monetization of video by improving the environment for advertising. Today several firms compete to offer AI-powered systems for brand safety, efficient targeting and more completed views. Now that programmatic delivery of ads is well established, the next frontier for advertisers will be to use AI to increase relevance by personalizing the message.

Let’s cap this first question with an enthusiastic yes. Whatever can be automated, will be automated. Today it is easy to find many examples of companies in the motion picture and entertainment industry that rely upon artificial intelligence in every stage of planning, programming, pre-production, pre-visualization, post production, distribution and monetization. Some of these applications are more advanced than others. There’s room for improvement in all of them. But each confers an advantage on the companies that use it today. AI in media is here to stay, and it will just get better and better.

2. Are consumer audiences willing to engage with automated systems for entertainment and information?

Yes. Today there are numerous examples of large numbers of consumers interacting with automated systems in ways that would have been considered preposterous science fiction fantasy just a decade ago.

Would you have a conversation with your home stereo? Ten years ago, this would have been a silly question. Today 100 million people talk to Alexa. If we include smartphones and other voice-enabled devices, then the number of people interacting with voice-driven AI assistants soars even higher into the many hundreds of millions. That’s people talking to machines. Think about that for a moment. And if you still consider Apple’s Siri “embarrassingly inadequate”, just remember that this is an incredibly complex feat of engineering. Like all technology, this is the worst it will ever be, as each successive generation continues to improve.

Would you ask your TV to find something to watch? Thanks to embedded Alexa and Google assistants and improved smart TVs, the habit of talking to your television or to your remote control is now commonplace.

This could seem like an absurd new habit, until you compare it to what came earlier. Back in the days of satellite and cable TV, we used to fumble with overcomplicated remote controls, jabbing with our fat fingers at tiny buttons to navigate through endless scrolling menus. Looking back at cable TV’s cumbersome interface from today’s vantage point, that was clearly absurd.

Ten years ago, it was the opposite. The only idea that sounded dumber than a television that you talked to was the infamous “smart refrigerator”, a durable trope from CES that dates all the way back to the 1990s. Back then, we all agreed it was a stupid idea. Now it’s here. There are now Alexa-powered refrigerators, microwaves, cars, mirrors, showers and even toilets, whether we asked for them or not. All of this proves the age-old observation that new consumer technologies are often considered preposterous until the technology advances; then when they work acceptably well, we begin to take them for granted and promptly forget what came before.

Would you chat with a robot? Until recently, this question would have elicited memories of failed 1960s experiments like the original Eliza textbot.

Today chatting with machine intelligence is commonplace. Millions of people interact with chatbots via messaging apps like Kik and WeChat every day.

Many folks prefer this type of dialog to dealing with conventional customer service. Unsurprisingly some customers find it more satisfying to communicate with companies via their chatbot instead of coping with a disempowered human operator who happens to be sitting in a call center in Bangalore. In 2018 more than 2 billion business-related messages were sent via 30,000 chatbots in Facebook Messenger.

Baby boomers seem to enjoy chatbots even more than Millennials. Inventor and AI cheerleader Ray Kurzweil predicted, “If you think you can have a meaningful conversation with a human, you’ll be able to have a meaningful conversation with an AI in 2029. But you’ll be able to have an interesting conversation before that.”

Would you accept broadcast news that is read by a robot anchor? Maybe the US is not quite ready for this level of innovation, but it’s a fact in Russia and China where robot newsreaders are already on TV. (As Broadcast News revealed in 1987, the newsreader is not necessarily the person who does the investigative journalism or the one who even writes the copy. In many cases the human newsreader does just that: he or she reads the news displayed on a teleprompter while seated in a studio looking pretty. That’s it. In other words, the newsreader is a biological robot ripe for displacement by a machine).

Would you read a newspaper written by robots? You probably already do, at least in part. Most readers of newspapers cannot distinguish between articles written by human reporters and those generated by automated systems like Narrative Science. Today automated news coverage is limited mainly to simple reporting of sports scores, weather and stock market results, but the learning algorithms are improving constantly.

Would you listen to music generated by AI? A German app called Endel has been signed by Warner Music in the first-ever multi-album deal for a non-human recording artist. Endel is under contract to deliver 20 albums of mood music by the end of the year.

For years, fans have speculated whether services like Spotify relied upon machine intelligence algorithms to generate ambient music and generic dance tracks by fake artists. Now Spotify has come clean: AI finally has its own genre and playlists.

It’s not all ambient trance music, either: now an AI-fueled live stream generates endless death metal on YouTube.

Currently, better results come from joint efforts between human composers and AI systems. An entire cottage industry of firms has sprung up, offering AI tools to burnish the efforts of recording artists and generate ideas for fresh tracks. It’s a growing field with plenty of room for improvement.
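To see how even a trivial algorithm can “compose”, consider this toy sketch: a first-order Markov chain that learns note-to-note transition statistics from a seed melody, then samples new phrases from them. Commercial tools use far richer models, but the underlying move (learn statistical patterns from existing music, sample novel variations) is the same.

```python
import random
from collections import defaultdict

# Seed melody: a simple phrase the "model" learns from.
seed_melody = ["C", "E", "G", "E", "C", "D", "E", "F", "E", "D", "C"]

# Count which note tends to follow which.
transitions = defaultdict(list)
for current, following in zip(seed_melody, seed_melody[1:]):
    transitions[current].append(following)

def generate(start="C", length=16):
    """Sample a new phrase by walking the learned transitions."""
    note, phrase = start, [start]
    for _ in range(length - 1):
        choices = transitions[note]
        note = random.choice(choices) if choices else start
        phrase.append(note)
    return phrase

print(" ".join(generate()))  # e.g. "C E G E C D E F E D C D E ..."
```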

See? This reinforces the initial point above about robots stealing human jobs. AI is not replacing the recording artist, it’s making the artist better. Just ask Billboard Magazine, the house organ of the music labels.

Would you watch a film written by an algorithm? Honestly, you wouldn’t. Not yet, at least. So far, screenplays written by AIs are most notable as the occasional oddball novelty rather than compelling entertainment. Why is this the case? More on that below.

But wait. Before you reject the notion that artificial intelligence can generate a script, consider this Lexus commercial that was written by an AI.

Would you play a video game against an AI? If so, you may have a death wish. Seriously, AIs have been crushing human opponents in an unbroken succession in every obscure corner of the gaming racket since Deep Blue beat chess grandmaster Garry Kasparov, followed by Watson trouncing two Jeopardy! champions, followed by AlphaGo thumping Lee Sedol in Go. And now there’s an AI that kicks professional DOTA player butt. Do not accept a challenge to play against an AI. You have no chance, puny human. You’ll just be humiliated, and you’ll regret it when SkyNet takes over.

To summarize this section, there’s no question that human audiences enjoy interacting with AIs and automated systems, even though some of these systems are works-in-progress. Today there are numerous examples of humans interacting with artificial intelligence at scale on a daily basis to obtain information, instructions, search results, news, and even original entertainment. It no longer seems odd to shout at your stereo, “Hey Google, play the Chainsmokers on Spotify.” We seem to be at the early stage of developing a rapport, even a relationship of dependency, with non-human intelligence, even though other tasks like rich storytelling remain out of reach of AIs today.

3. Are consumer audiences willing to pay for entertainment and information presented by robots and AI?

The examples cited in the previous section don’t necessarily prove whether or not consumers will find entertainment generated by machines truly valuable enough to warrant a cash payment.

It’s one thing for a machine to provide passable entertainment or news for free; it’s another thing entirely to persuade audiences that machine-generated content is worth paying for.

If we build it, will they pay? The surprising answer, based on all available evidence, is hell, yeah.

The best examples come to us from music tours. Concert tickets are expensive and competition between tours is fierce. It’s a tough market.

Now there’s a growing cottage industry that specializes in bringing dead celebrities back on stage in the form of rudimentary hologram simulations in otherwise live concerts.

Dr Dre and Digital Domain introduced the idea in 2012 at Coachella, when Snoop Dogg performed on stage with Tupac Shakur. The performance was notable because, 15 years earlier, Tupac had been killed in a drive-by shooting in Las Vegas.

Until showtime, no one in the cast or crew was quite certain how the audience would respond to a reincarnated rapper. In the end, Tupac’s return was a smashing success, and Dre had the good sense to limit the engagement to two performances.

Since then, plenty of dead celebrities have been digitally exhumed as performing zombies, including Michael Jackson, Roy Orbison, and Frank Zappa. A planned Amy Winehouse tour was recently postponed, and a Whitney Houston hologram revival was announced two weeks ago.

The dead-celebrity-concert-tour business is pretty weird and legally complicated, as this Vox article explains. But the consistently positive reaction from audiences suggests that there is an appetite for simulated crooners that goes far beyond ghoulish novelty. A long list of dead singers, ranging from Patsy Cline and Marilyn Monroe to Maria Callas, is currently under consideration for, ahem, revival tours.

Just to prove that this gimmick is not solely reserved for dead artists, last month Madonna used augmented reality to present not one but five versions of herself dancing during a live performance at the Billboard Music Awards. Let’s hear it for split personalities, live on stage!

Weirdness aside, the success of such virtual concert performers indicates that audiences are clearly willing to pay for entertainment that is generated by machines.

What about an entirely synthetic performer? Will audiences accept a singer designed from scratch in software instead of one derived from a living or dead celebrity? Again, the answer seems to be yes.

The best example remains Hatsune Miku, the animated singer from Japan whose concerts consistently sell out when she tours. Miku, whose Japanese name means “first sound of the future”, began her existence as a Vocaloid, a voicebank powered by vocal synthesizer software from Sapporo-based Crypton Future Media and Yamaha.

Her popularity stems in no small part from the fact that her performances are crowdsourced. When fans downloaded the Vocaloid software and began writing tens of thousands of songs for Miku, posting them alongside hand-drawn pictures and animations, Crypton responded to popular demand by commissioning an official character design from manga artist Kei Garo. The embodied voicebot has all of the attributes of a classic anime character, complete with flowing turquoise braids and unnaturally long limbs. She debuted on stage in 2013, opened for Lady Gaga’s tour in 2014, appeared on the Late Show with David Letterman in 2014 and toured the US in 2016.

Hatsune Miku concerts routinely sell out in Asia and North America at prices ranging from $60 to $150 per ticket. Enchanted fans sing along and mimic her dance moves, waving color-coded glowsticks in rhythm to her songs.

But sometimes they go even farther.

Last year, a 35-year old school administrator named Akihiko Kondo spent $17,500 on a “cross-dimensional wedding ceremony” to be married to the virtual performer.

But Kondo’s not the only one. Gatebox, the firm that sells a household-scale hologram device depicting Hatsune Miku, has issued marriage certificates to more than 3,700 people.

“I think it’s inevitable that it grows into something bigger,” said Anamanaguchi’s Peter Berkman, who toured with Hatsune Miku. Referring to “pop stars and icons that aren’t attached to physical bodies,” Berkman said, “the potential is infinitely larger and is only just getting to be explored.”

The appeal of synthetic characters is not limited to live performances, either. The Walt Disney Company has enjoyed a string of successful motion pictures, including The Jungle Book, Beauty and the Beast and Dumbo, all remakes of classic hand-drawn cel animation now updated as “live action” films that convincingly blend computer graphics with the natural world in ways that make James Cameron’s groundbreaking Avatar (2009) seem quaint.

When you watch these Disney live-action animated films, the visuals are so coherent, and the reality so complete and persuasive, that it’s sometimes difficult to snap out of the trance and remind yourself that what you are seeing is not real video shot with a camera.

Our Preference for The Fake

Something is shifting in our culture. We are normalizing altered reality. The boundary between real and fake has been blurred, and plenty of folks seem to prefer the fake.

Today we spend our waking hours steeped in manufactured realities, devoting nearly ten hours each day to digital screens, immersed in fake news and filtered selfies and CGI-enhanced superheroes. Some of us seem to be developing a preference for computer-generated people or computer-remixed reality.

As Cameron-James Wilson, the creator of Instagram virtual model Shudu, put it: “I think we’re at a place now where real people are so filtered, so photoshopped, that there is no actual differentiation between 3D art and a photo.”

We seem to be cultivating an affinity for enhanced versions of reality. Call it superreality. And in the process, each of us is diverging away from consensus reality.

Exhibit A of the blurring distinction between real and fake is the psychological disorder known as “Snapchat dysmorphia”, whereby teenagers demand that plastic surgeons edit their facial features to better resemble their filtered selfie portraits. We want to edit reality to conform to our mediated experiences.

The Generative Potential of Deep Fakes

No media technology has triggered more fear in the past year than deep fakes, the application of deep learning to the task of generating completely fake video from nothing more than a photo or a TV appearance.

Video created by a GAN (generative adversarial network) can make impossible things seem true. Deep fakes can put other people’s words in politicians’ mouths, put celebrity heads on porn star bodies, alter sworn video testimony, tweak TV news segments after they’ve aired, and as a side effect, greatly degrade our confidence in the accuracy of video.

So much for the veracity of police body cams, surveillance footage, video evidence and passive video monitoring. If a GAN can crank out a video clip without the cooperation or even the participation of the people depicted in it, then we can no longer believe what we see.

Pundits have fretted about the impact of deep fake videos on the political process. Last week we had a chance to see the impact of such a clip in real time with a crappy video that wasn’t even generated by a GAN.

The obvious fakeness of the Nancy Pelosi video was not a deterrent to those who wanted to find truth in it. Phoniness did not diminish its impact in any way. Quite the contrary.

The poor quality of the fake worked like a litmus test to sort the true believers from the doubters.

The edited video was designed to warp reality (a conventional TV interview with a politician) to conform to the viewer’s pre-existing bias (She must be drunk or she’s on medication, and that’s why she’s slurring her speech, so we can justifiably ignore everything she has to say).

It’s binary. You either believe that video is true, or you don’t. If you do, you subscribe to one version of reality. If you don’t, you are opting to participate in a different version of reality. The viewership divides neatly into two separate non-overlapping realities.

This happens every time a fake news clip is viewed. The universe divides. Each fake clip creates an alternative universe that is held together only by biases, beliefs and convictions. Cue the Everett Postulate.

This contrived reality is a fragile construct. It is composed of “alternative facts” that are under constant assault from contradictory evidence in the real world. It takes a lot of willpower to maintain a fictional world.

That’s why partisan audiences have such a voracious appetite for shareable media. These viewers must continuously consume fresh media narratives that reinforce the illusionary world they’ve opted into. And they require the constant reinforcement of likes and shares from likeminded viewers to shore up their delicate and illusory worldview.

That’s the bad.

Is there a good side to Deep Fakes? Sure: generative power. The good news is that neural networks can now generate original video that is uncannily close to natural video, and the technique is neither animation nor an expensive post-production process.

Deep fakes offer the tantalizing possibility that neural networks can generate convincing video without a script or storyboard.

Google is experimenting with convolutional networks that can generate video from just the start frame and the end frame. Everything in between is generated by the neural network.
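For contrast, here is the naive, non-learned baseline for that “in-betweening” problem: a linear cross-fade between the two endpoint frames, sketched with NumPy. A neural network earns its keep precisely by replacing this crude blend with plausible synthesized motion.

```python
import numpy as np

def linear_inbetween(start_frame, end_frame, n_frames=8):
    """Generate n_frames intermediate frames by blending the endpoints."""
    frames = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)  # interpolation weight: 0 -> start, 1 -> end
        frames.append((1 - t) * start_frame + t * end_frame)
    return frames

start = np.zeros((480, 640, 3), dtype=np.float32)  # all-black frame
end = np.ones((480, 640, 3), dtype=np.float32)     # all-white frame
clip = linear_inbetween(start, end)
print(len(clip), clip[0].mean(), clip[-1].mean())  # 8, ~0.11, ~0.89
```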

Other researchers are generating animation from recorded audio of a voice.

Deep fakes are not perfect yet, and they might never be, but after just a couple of years the technology is improving at a breathtaking pace.

It’s just a matter of time before this technique is co-opted by marketing firms to generate highly believable viral video that convinces us that famous people, dead people or entirely fictional characters said and did things they never actually said or did.

For instance, deep fake technology helped David Beckham speak nine languages fluently for a public service announcement about eradicating malaria.

At this point, let me recap the growing list of technological capabilities and evolving consumer preferences.

  1. Although many professionals in the motion picture, media and news industry believe that their jobs could never be done by a robot or software, the fact is that today there is software automation for every step in the process, from writing, storyboarding, programming, acquisitions decision-making, greenlighting, to production and post-production, and even in distribution and audience targeting. Whatever can be automated is in the process of being automated. Even the generation of original video can be done by algorithms.
  2. Although many media professionals operate under the conviction that consumers value accuracy, real news and real human talent, a growing body of evidence suggests something very different is happening. Consumers demonstrate a marked preference for synthetic performers and fake news across a wide range of media types.
  3. Consumers are prepared to pay for a completely synthetic performance.
  4. Consumers are so committed to these unreal experiences that they will go to surprising lengths to preserve the illusion. Some are willing to marry a fictitious personality, have surgery to look more like a Snapchat filter, and even ignore factual information that contradicts their fake version of reality.

Now let’s imagine how these factors will transform the media and marketing landscape in just a few years’ time.

There is no shortage of predictions about how artificial intelligence will affect society. Many of these predictions are made in isolation, as if AI were the only emerging technology on the horizon. That isn’t the case, and such predictions therefore underestimate the full potential of what’s about to occur. I want to do something different.

My purpose in this article is to estimate how AI will affect the media and marketing industries in the future. I believe the best way to do this is to talk about AI in the context of other enabling technologies that are now approaching operational maturity.

Now let’s try to imagine how emerging technologies and evolving cultural norms might be combined into a set of conditions that will make possible a new kind of marketing and entertainment medium.

I envision the advent of synthetic spokespeople and virtual companions that appear on our devices and AR screens and engage with us in realistic natural language on a hyperpersonalized basis.

For starters, we know that there will be a lot more clutter and a lot more noise. The glut of digital video will continue to grow. The number of streaming video outlets will continue to rise.

As a consequence, there will be more fragmentation and, therefore, less consensus on whatever is considered “mainstream” media.

How will marketing evolve in this environment? More precisely, how will marketing agencies take advantage of the tools and social trends described in this article?

To cut through the noise and the clutter, I’m banking on relevance.

I’m betting on synthetic personalities and hyper-personalization. I believe that brands will come to life with virtual characters who are personalized to each individual. Custom entertainment and information for an audience of one.

These personalities will appear on every screen in our proximity, making contextual recommendations in the style of Minority Report. They will rely upon location data, our proximal graph, our previous behavior and known preferences, as well as souped-up prediction engines.
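As a sketch of what one such recommendation step might look like under the hood, consider a scorer that blends affinity learned from past behavior, contextual fit, and physical proximity. Every feature and weight here is invented for illustration; a production system would learn them from data.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    name: str
    distance_km: float   # from location data
    affinity: float      # 0..1, learned from previous behavior
    context_fit: float   # 0..1, e.g. a dinner suggestion at 7pm scores high

def score(s: Suggestion) -> float:
    proximity = 1.0 / (1.0 + s.distance_km)  # nearer is better
    return 0.5 * s.affinity + 0.3 * s.context_fit + 0.2 * proximity

options = [
    Suggestion("ramen bar", distance_km=0.4, affinity=0.9, context_fit=0.8),
    Suggestion("museum", distance_km=2.0, affinity=0.6, context_fit=0.2),
]
print(max(options, key=score).name)  # -> "ramen bar"
```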

Unlike Siri and Alexa, these personalities won’t be blank or generic. They will have flair. Ideally, we’ll be able to customize them or modify them.

Every TV generation encountered its own version of an iconic brand mascot. Imagine if we could combine the trademark characters from the golden age of TV with today’s virtual influencers, and turn it all into motion video and augmented reality generated by artificial intelligence in real time (or near-real-time).

Imagine a future where all the technologies described above have matured and converged. In that not-too-distant future, you’ll enjoy the company of a friendly virtual companion who appears on whatever screen happens to be near you. Or she might even appear right in the room you happen to be in, thanks to Augmented Reality. She’ll have plenty of useful suggestions for whatever you seek to do next, whether it is finding a good restaurant or book, finishing a professional task, tracking down a technique for a personal hobby, or answering some trivia question that your guest asked. She might be your DJ, your guide, your assistant, and your knowledge navigator.

What might a synthetic personality provide to us that a classic advertising campaign cannot?

First of all, the synthetic person will be able to do everything that AI already does: give us turn-by-turn directions while driving, make recommendations informed by our previous behavior, tell us the weather, remind us of appointments, screen incoming phone calls, translate foreign languages. She will scour through the listings across 200+ OTT video services to recommend programs that merit the time investment.

Second, they will add value by showing us instead of telling us. The synthetic mascot will demonstrate how to cook, how to use a new tool safely, how to wear a garment stylishly. It will help us visualize a holiday destination or a new item of furniture in the living room. It could do everything that an Instagram influencer does now, without the hashtags and airy spirituality. Thanks to GANs and “fake news” techniques, the synthetic personality might even be derived from an existing influencer video.

Third, the synthetic personality will continue to add value by knowing more about our goals, our schedules, our plans and appointments, our intentions, our destinations and our associations. To succeed, this must be more than just another form of interruption advertising. One way to accomplish that is to enrich our experiences.

In fitness, our synthetic friends will serve as personal instructors, monitoring our workout progress and suggesting ways to increase the challenge, how to improve our golf swing, or how to get into a more advanced yoga pose. They will add variety by suggesting new workouts, different classes, new hikes and even new workout buddies.

In global travel, the synthetic friend will help us prepare for a trip by showing us a guided virtual tour of attractions, helping us make bookings, giving us language instruction and simple phrases, recommending restaurants and menu items that match our preferences.

In a grocery store, the synthetic friend will highlight the items that match our dietary preferences, and optionally will hide those items that we want to remove from our diet.

At work, the synthetic personality might serve as a virtual assistant, keeping us on track, reminding us of appointments, directing our attention to priority items (and hiding low priority distractions), collaborating with other virtual assistants to keep everyone productive and align meeting schedules.

Finally, I envision that the virtual companion will also help us make deeper connections, showing us friends who happen to be in near proximity, reminding us when we drive through a friend’s neighborhood, suggesting more ways to connect with those we like and love, letting us know about a terrific view or a point of historical interest or a great underground club.

It’s not hard to envision scenarios whereby such a personality can help us. The main point is that this is a different approach to marketing. Instead of stealing our time and attention, this kind of marketing seeks to engage meaningfully by serving us.

We already have the desire and appetite for this:

· The growing appeal of synthetic characters and virtual influencers integrated into real world settings and familiar cultural contexts

· The willingness of consumers to accept virtual personalities and even pay for entertainment by synthetic performers

Much of the technology to deliver this sort of experience exists, some in prototype stage and some in early operational stage, but much of it is immature. None of it is integrated yet into a single package. Nevertheless:

· Operational artificial intelligence that provides us with recommendations, directions, natural language dialog, real time translation, scheduling and routing assistance, and hypertargeting of messages already exists and is deployed at global scale today.

· Location awareness and a proximal graph that knows our location history, our daily commute and our local preferences, as well as the other people, devices, screens and points of interest in near proximity, are already available on today’s mobile devices.

· The integration of artificial intelligence into every phase of media production and distribution is happening right now.

· Generative adversarial networks and high resolution game engines to create lifelike synthetic video without a camera, human actors or even motion capture data are in a very fertile stage of innovation and improvement.

In the next three or four years, I expect that we will have some version of the following:

· Low-latency, high-speed mobile networks that deliver enough bandwidth to render convincing augmented reality overlays on real-world settings, so that we can perceive the synthetic characters integrated into our own environment. This is supposed to be delivered via the coming 5G mobile networks in combination with WiFi, or perhaps via unlicensed spectrum and low-cost wireless ad hoc networks.

· Powerful artificial intelligence on demand at the edge of the mobile network to imbue the characters with lifelike responsiveness in real time.

The last piece requires an explanation. We need a better kind of artificial intelligence. The currently prevailing implementation of AI, machine learning, is a very powerful technique, but as a tool it is limited to a narrow context. ML is brittle. There is a great deal that it cannot do and it breaks when we misapply it.

The limitations of current-gen AI explain quite a lot about the shortcomings in today’s version of machine-generated content: the incoherent dialog in screenplays written by AI; the obvious blunders in images generated by neural networks; the atonal music; the glitchy video. These problems won’t be solved entirely until we build a better AI.

Until these shortcomings are fixed, human creative talent will be irreplaceable in the process of producing entertainment. And there will always be a role for human talent in the broader creative process of designing and managing and improving synthetic personalities.

Next-gen AIs will move beyond deep learning to replicate different parts of the brain. Because they are more like us, future AIs will be more responsive to humans, more aware of non-verbal cues and human emotions. They will be better able to engage with humans and anticipate their reactions, and thereby offer something akin to empathy. This, in turn, will make their conversation more natural and less stilted. They will learn faster how to mimic our natural language and thereby they will eventually generate more convincing narratives and screenplays. In sum, they will become more human.

This is not science fiction. Legit firms are working on these problems right now.

One company that is actively working at the forefront of developing next generation artificial intelligence based on neuromorphic computing architecture is Brainworks. (Full disclosure, I am thrilled to be affiliated with Brainworks as a senior advisor because it is challenging and fun to serve the team that is solving these hard problems.)

It’s very early. There are good reasons to be skeptical about this scenario. None of the technologies described here is fully mature. Few of them are integrated into a suite of tools that can usefully interoperate on the same set of data. Some of them barely work. Some will require massive computing power. 5G might not work as advertised.

To be clear: we are a long, long way from an AI that can generate an entire video from scratch. Recent efforts to automate the full process have been interesting but unwatchable.

It may take a decade or more for this combination to work properly.

But here’s what we already know. Consumers want it. They demand this experience so badly they will attempt to create it for themselves using whatever tools are at their disposal.

And they are willing to pay for it when it is done well.

Also: Creative artists are already using whatever rudimentary tools are accessible now to deliver a rough approximation of the experience that consumers crave. Even though these experiences may lack polish and sophistication today, there’s real money to be made when the experience is good enough. And all of it will improve as the technology matures.

This is an opportunity for a bold advertising agency that intends to reinvent the entire model of brand identity and brand communications. Action is required to make this happen.

If these scenarios strike you as preposterous or even impossible, remember that just 20 years ago, the notion of putting streaming video on a mobile phone was considered impossible, too. I launched that in spite of fierce resistance from experts. Now we all use mobile video every day.

I’ve learned through personal experience that every time anyone decides to do something that has never been done before, an expert will come forward to tell that bold person they will fail.

They’ll tell you exactly why you are doomed to failure and how the failure will occur. And they’ll still be talking that way when you launch.

I’ve lived through several technology launches where experts said that whatever we envisioned couldn’t possibly be done… and then after we launched, they became our customers.

Today we have entrepreneurs who are deadly serious about traveling to Mars, and they are building the infrastructure to accomplish that mission.

Impossible is nothing.

For those who raise the objection of tremendous cost, remember that the advertising industry paid for nearly all of the newspapers, magazines, radio and television on earth. That’s a trillion dollar ecosystem globally. Surely for a trillion dollars a year, we could each have a personal avatar or synthetic personality.

Nature isn’t the only thing that abhors a vacuum. The Internet hates it, too.

If the advertising agencies fail to take action to harness the power of synthetic personalities, they will be supplanted by new companies like Brud.

One day the executives that manage the big agency holding companies may wake up to find themselves and their outdated interruption advertising model rendered as irrelevant to the future as Kodak, Polaroid, Linotype, Wang, Compaq, Nokia, Palm and Blackberry.

So go for it. Get started now. Don’t let the naysayers slow you down.

The next articles in this series will address the other aspects of product identity, namely UPC codes and barcodes, and how they are evolving in a software-defined economy.

+++++++++++

I am writing about identity — both product identity and brand identity, as well as human identity. The previous post in this series is called “The Decline of Mass Advertising and the Rise of Fake Influencer Culture.” You can read it here. I sure hope you enjoy reading it. Maybe this post will make more sense after you read that one.

+++++++++++

Digital identity is a complex and sometimes bewildering topic. I’m writing this series of articles to help clarify my own understanding, and I welcome your comments, corrections and contributions. If the topics in this series of articles interest you, then why not join me for a discussion in person? I will be the host and master-of-ceremonies for the Innovation Track of GS1 Connect, the biggest gathering of supply chain experts in the world, where I will interview the leading experts on digital identity in a roundtable discussion on June 19.

If you are interested in any of the urgent topics that pertain to digital identity and product identity, such as blockchain for supply chain, business process automation, the application of artificial intelligence to manufacturing and retail, then this is a conversation you don’t want to miss.

This year, GS1 Connect takes place in Denver, Colorado, from June 19 to 21.

For 30 years, I’ve been focused on designing and launching new digital services. In the process, I’ve grown fascinated with the way we are constructing a digital version of the real world. During my career, I’ve supervised the launch of the world’s first mobile video services, some of the earliest PC games, online games and mobile games, and the biggest live online learning programs in the world. I’m also the author of the award-winning book Vaporized: Solid Strategies for Success in a Dematerialized World, which you can read in its entirety here on Medium (or, if you are feeling generous, you can buy the book on Amazon. Thanks, I love you for that!). Today I serve as the Special Advisor for Digital Identity to GS1 US. GS1 is the global standards body for product identity.
