Are You Not Personalized?!

Personalized Video, dynamics and a fancy word for “targeted”

Johan Belin
Dinahmoe
7 min read · Nov 1, 2017


I know that the title should start with a “How”, “Why”, or be “ten things to think about when doing X” to get the most views. But here you are reading this anyway, good for you, brave soul 🙂.

We know more about the user than ever

We know more about the online customer than ever before:

  • demographics: sex, age, education, income, etc.,
  • psychographics: values, personality, attitudes, opinions, lifestyles, interests, etc.,
  • behavioral: user interaction, browsing history, purchases, etc.,
  • and more: device, location, time of day, weather, traffic, etc.

Sometimes we even know their real identity. Yes, privacy is a concern; we’ll get to that (and the fancy word) a little later.

Targeting based on user data

A personalized video experience has the potential to be more engaging and relevant since it uses personal data. But delivering targeted creative is nothing new, so what’s the big deal?

Producing personalized videos for, say, different demographics has meant editing X variations of a video and selecting which one to deliver depending on user data. Any change to the video means going back to the edit suite and making a new version. The more versions, the harder they are to manage; it is simply not a scalable solution.

Enter Dynamic Personalized Video

By generating the video dynamically when it is requested, we solve the scaling problem. Each video can be adapted to the viewer, all the way down to one-to-one communication.

The personalization can be subtle or it can fundamentally change the user experience. Here are examples of some of the aspects that can be changed:

  • selecting which individual clips to show, swapping clips in a video to target specific personal preferences,
  • changing the voiceover: male/female, old/young, in full or in part,
  • selecting the music track depending on e.g. the user’s musical taste,
  • changing copy and images in the video.

All this can be used to create a relevant, engaging, personal experience. Dynamic personalized video also simplifies things like A/B testing and updating different creative; many changes can be made without touching the assets at all.
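
To make this concrete, here is a minimal sketch (in TypeScript) of what a personalization “recipe” could look like: user data in, creative choices out, and a renderer, in the browser or on a server, assembles the actual video from those choices. All type names, fields and IDs are invented for the example; this is not an actual Dinahmoe API.

```typescript
// Hypothetical types and names, for illustration only.
interface UserProfile {
  name?: string;
  ageGroup: "young" | "adult" | "senior";
  interests: string[];
  musicTaste: "pop" | "rock" | "classical" | "electronic";
}

interface VideoRecipe {
  clips: string[];      // ordered clip IDs
  voiceover: string;    // voiceover variant ID
  musicTrack: string;   // music track ID
  headline: string;     // copy shown in the video
}

// Map user data to a creative recipe; a renderer (in-browser or
// server side) then assembles the actual video from these choices.
function buildRecipe(
  user: UserProfile,
  clipLibrary: Record<string, string[]>
): VideoRecipe {
  const interest = user.interests[0] ?? "default";
  return {
    clips: clipLibrary[interest] ?? clipLibrary["default"],
    voiceover: user.ageGroup === "young" ? "vo-young" : "vo-adult",
    musicTrack: `music-${user.musicTaste}`,
    headline: user.name ? `Made for ${user.name}` : "Made for you",
  };
}
```

The point is that the creative logic lives in data and a small function like this, not in hand-edited video versions, which is what makes it scale.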

If we have access to actual personal data, the video can be even more personalized, e.g. by including the user’s name or personal images, sound, or video.

If the user has been actively involved in personalizing the video, i.e. creating it through their own interaction, then they are also much more likely to share it.

An example: Sense8

Sense8 is a Netflix series about eight people who are emotionally connected in a supernatural way. The fan culture around the series was very strong, and fans produced fan videos with their favorite characters from the series. Campfire approached us with the idea of enabling users to create fan videos without having to know anything about video editing.

The solution was a chatbot-driven experience where users communicated with one of the main characters from the series, Nomi Marks. During the conversation we detected which characters in the series the user liked and what emotional content they preferred. Based on this we rendered a personalized video that the user could share on social media.

Behind the scenes

We already knew what fans wanted a fanvid to look like, since there was already a ton of them on YouTube. The raw video material was 15 hours from season one and a bonus Christmas episode. The first challenge was: how do we condense all this material down to basic building blocks that can be combined into a near-infinite number of fanvids that really feel personal?

We started by editing the raw material into separate clips. We did not use the audio from the clips, since we wanted to add a super emotional music score to the final video. This excluded all clips where you could see the characters talking.

The first tests assembling the clips into videos showed that using separate clips was the wrong approach: we lost all storytelling and the final video just felt random.

We needed to keep related clips together for the result to make sense, so we experimented with combining the individual clips from a specific scene into one or more mini stories. The editing had to feel like a movie trailer, so every single clip was cut down to its minimum, except the love scenes, which do not work well in an action editing style.

We were now down to around 1,000 mini stories, 1,054 to be exact. The length of the final video should be between 35 and 45 seconds, so each final video would be a combination of around 5 mini stories, each between 5 and 8 seconds long. Picking an ordered sequence of 5 stories out of 1,054 gives on the order of 10^15 possible videos, enough combinations to blow my tiny math brain.

All mini stories were categorized by character and feeling in a custom CMS that allowed easy testing and rendering of videos. New tests started to look good but still felt a little random.

We added a logic layer to handle the obvious storytelling errors (a sketch of how such rules could be checked follows the list):

  • some clips had to appear in a fixed order, e.g. a house that is blown up cannot resurrect in the next clip,
  • some were different versions of the same mini story and should not appear in the same video,
  • some only worked as the last clip, etc.
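
As a rough illustration, here is a minimal sketch of how rules like these could be checked for a candidate sequence of mini stories. The data model and field names are made up for the example, not taken from the actual CMS.

```typescript
// Invented data model for one mini story.
interface MiniStory {
  id: string;
  characters: string[];
  feeling: string;            // e.g. "love", "action", "sad"
  variantOf?: string;         // id of the story this one is an alternate cut of
  mustComeAfter?: string;     // e.g. "house intact" must precede "house blown up"
  lastOnly?: boolean;         // only works as the closing clip
}

// Returns true if a candidate sequence violates none of the three rules.
function isValidSequence(sequence: MiniStory[]): boolean {
  const position = new Map<string, number>();
  sequence.forEach((story, i) => position.set(story.id, i));

  const usedVariants = new Set<string>();

  for (let i = 0; i < sequence.length; i++) {
    const story = sequence[i];

    // Ordering: if the prerequisite story is in the video, it must come earlier.
    if (story.mustComeAfter !== undefined) {
      const prereq = position.get(story.mustComeAfter);
      if (prereq !== undefined && prereq > i) return false;
    }

    // Exclusion: never use two versions of the same mini story.
    const variantKey = story.variantOf ?? story.id;
    if (usedVariants.has(variantKey)) return false;
    usedVariants.add(variantKey);

    // Position: some stories only work as the last clip.
    if (story.lastOnly && i !== sequence.length - 1) return false;
  }
  return true;
}
```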

Now we had all the pieces in place. The server application rendered a title card with the user’s name, combined a selection of clips based on the user’s preferences and the clip logic, added a custom music score, rendered the final video in less than 2 seconds, and returned it to the user so they could share it in their social channels.
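
For a sense of what such an assembly step could look like, here is a rough sketch driving the ffmpeg command line from Node.js: render a title card with the user’s name, concatenate it with the selected clips, and mux in the music track. This is an illustration only, not the actual pipeline; file names, resolution and encoding settings are placeholders, and it assumes all clips share the same codec, resolution and stream layout.

```typescript
import { execFileSync } from "child_process";
import { writeFileSync } from "fs";

function renderVideo(
  userName: string,
  clipPaths: string[],
  musicPath: string,
  outPath: string
): void {
  // 1. Render a short title card with the user's name (assumes ffmpeg was
  //    built with the drawtext filter; names containing quotes need escaping).
  execFileSync("ffmpeg", [
    "-f", "lavfi", "-i", "color=c=black:s=1280x720:d=3",
    "-vf", `drawtext=text='${userName}':fontcolor=white:fontsize=64:x=(w-text_w)/2:y=(h-text_h)/2`,
    "-c:v", "libx264", "-y", "title.mp4",
  ]);

  // 2. Build a concat list: title card first, then the selected clips.
  const list = ["title.mp4", ...clipPaths].map((p) => `file '${p}'`).join("\n");
  writeFileSync("clips.txt", list);

  // 3. Concatenate the video and mux it with the chosen music track,
  //    keeping only the video from the clips and the audio from the score.
  execFileSync("ffmpeg", [
    "-f", "concat", "-safe", "0", "-i", "clips.txt",
    "-i", musicPath,
    "-map", "0:v", "-map", "1:a",
    "-c:v", "libx264", "-shortest", "-y", outPath,
  ]);
}
```

Rendering in under two seconds almost certainly requires clips that are pre-encoded so the concatenation can remux rather than re-encode; the sketch above re-encodes with libx264 for simplicity.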

The final result has made people cry, literally. I am not kidding.

During the first week, over 8,000 users went through the 3-minute chatbot conversation, got their personal video, and shared it in their social channels, with a completion rate of 90%.

Two approaches: in-browser or server side

There are two approaches to generating dynamic personalized video: either the personalization is done in the browser, or the video is rendered server side and then delivered. Both methods have their pros and cons, so which to choose depends on the project. The main differences are outlined below.

In the case of Sense8 we used server-side rendering for two reasons:

  • the high number of mini stories
  • we wanted the user to be able to share a native video, not just a link

These are the two most common reasons for any project to use server-side rendering. The main benefits of doing the personalization in the browser are:

  • scalability: in-browser personalization serves its assets from a CDN and can scale to any number of simultaneous users, while server-side rendering requires dedicated hardware that has to be scaled linearly with the number of users,
  • immediate playback: server-side rendering takes time, which makes it unsuitable for real-time delivery,
  • interactivity, e.g. added functionality and user selections: dynamic personalized video in the browser can be combined with any of the functions described in the article about interactive video (a simple playback sketch follows below).
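
As a sketch of the in-browser approach, the snippet below plays an ordered list of clip URLs back-to-back in a single video element while a separate music track runs underneath. The element IDs and URLs are made up, and it assumes the personalized edit is simply a list of CDN-hosted clips plus a score.

```typescript
// Hypothetical clip list and music track, e.g. produced by buildRecipe() above.
const clipUrls = ["/cdn/clip-12.mp4", "/cdn/clip-7.mp4", "/cdn/clip-31.mp4"];
const musicUrl = "/cdn/score-emotional.mp3";

const video = document.querySelector<HTMLVideoElement>("#player")!;
const music = new Audio(musicUrl);

let index = 0;

function playNextClip(): void {
  if (index >= clipUrls.length) {
    music.pause();
    return;
  }
  video.src = clipUrls[index++];
  video.play();
}

// Chain the clips: when one ends, start the next one.
video.addEventListener("ended", playNextClip);

// Start music and the first clip together on a user gesture
// (browser autoplay policies usually require one).
document.querySelector("#start")!.addEventListener("click", () => {
  music.play();
  playNextClip();
});
```

Swapping src like this can leave small gaps between clips; for truly seamless playback you would typically move to Media Source Extensions or fall back to server-side rendering, which is exactly the trade-off described above.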

And the fancy word is…

So what about that fancy word? As I mentioned in an earlier article, the use of the word “personalization” in marketing is somewhat ambiguous. It has been used with two different meanings:

  • marketing that is based on the user’s personal data, OR
  • marketing that is targeting specific demographics and/or personal preferences.

The first handles personal data, the second does not, so they are fundamentally different when it comes to privacy and legal regulations. To resolve this ambiguity, Gartner has suggested using the word “personification”, derived from marketing personas, to describe targeted marketing that doesn’t use personal data.

I promised to start using it but immediately ran into problems: is it “personified video” or “personificated video”?! I think I’ll stick with the original ambiguity until Gartner sorts this out.

This was part three (four?) of the video series. Here are all articles currently published:

Forget about VR, AR, AI and ML: Video is the Next Big Thing!

Interactive Video: Why and How

Interactive Video — Example Projects

Personalized Video, dynamics and a fancy word for “targeted”

I will add new video related articles now and then.

Before you go

Clap 👏 👏 👏 5, 15, 50 times if you enjoyed what you read!
Comment 💬 I’d love to hear what you think!
Follow me Johan Belin here on Medium, or
Subscribe to our newsletter by clicking here
