Real Personality for ChatBots
While attending the recent Loic Le Meur “Leade.rs in ChatBots” event I was struck by two things:
No one was working on developing real Personality for ChatBots
Almost everyone remarked on how they fake real Personality in their ChatBots
i.e. though no one is trying to solve this huge issue, they all realize how important it is, and they’ve all come up with some way of getting their ChatBots to have some SORT of Personality!
The closest vendor to providing a ChatBot with Personality is Replika.ai. Their service lets you “sign up” a ChatBot that acts as a Proxy of yourself. Their approach is to feed a neural network archival conversations, Tweets, Blog posts and on-going txting back and forth, and “train” the ChatBot to develop some sort of Synthetic Personality.
I am actually incredibly enamored of such an approach.
Ray Kurzweil recently mentioned feeding Blog posts to an AI engine — to train the bots. Reece Jones tells me that Ray has been saying that — for a while — as part of “the Singularity.”
The problem is that this approach takes a lot for granted. It assumes that some deep learning technique will auto-magically deliver a “Genie-in-a-Bottle” AI.

I just don’t buy that.
I think humans want to be more involved in the authoring of their Proxy Bots. I think humans don’t want to leave that much for granted.
Phil Libin joked (at the Loic event, which he co-hosted) that in the future he’ll be in a conversation with others and along will come a PhilBot and “who knows what he’ll say!”
I disagree. I think Phil will know EXACTLY what PhilBot will say. That’s ’cause Phil will TELL HIS BOT what to say. There’ll be new authoring tools to “program” these bots. But obviously these ain’t gonna be programming tools, ’cause regular, non-coder humans would never figure out how to use those.
Feature Engineering is one of the magic weapons in the machine learning arsenal that’s being talked about a lot nowadays. Feature Engineering is that magical moment when human thoughts, concepts or notions get converted into the sample data being fed to neural networks, data an algorithm can actually learn from.
When I asked Peter Norvig (Google’s Director of Research and one of the fathers of modern-day AI) “how does one do Feature Engineering?” he told me:
“You start with how you did it the last time, and you base your NEXT algorithms on it.”
He then put on a sly smile.
i.e. it’s fucking hard to do Feature Engineering. Yes, neural networks are the way to do machine and deep learning, but a network is ONLY AS GOOD as the sample data you hand it! And no human is ever gonna be able to just “crank out” some Feature-Engineered sample data for their neural networks.
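To make that concrete, here’s a toy sketch of what hand-crafted Feature Engineering looks like: taking a human notion (“excited messages shout, exclaim, and use emphatic words”) and turning it into numbers a learner can chew on. The word list and the features themselves are invented for illustration; nobody’s production recipe looks this simple.

```python
# Toy Feature Engineering: convert a human notion about "excited"
# messages into numeric features a learning algorithm can consume.
# The word list and feature choices are assumptions for illustration.

EMPHATIC_WORDS = {"love", "amazing", "awesome", "hate", "never"}

def extract_features(message):
    """Hand-crafted features for one chat message."""
    words = message.lower().split()
    return {
        "exclamations": message.count("!"),
        "questions": message.count("?"),
        # fraction of words that are emphatic, punctuation stripped
        "emphatic_ratio": sum(w.strip(".,!?") in EMPHATIC_WORDS
                              for w in words) / max(len(words), 1),
        # shouted words, ignoring single letters like "I"
        "all_caps_words": sum(w.isupper() and len(w) > 1
                              for w in message.split()),
    }

features = extract_features("I just LOVE that color on you!")
```

The hard part Norvig was smiling about is not the code; it’s deciding which of the infinitely many possible features actually capture the notion you care about.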
So how are we (the humans) gonna get these ProxyBots to represent ourselves properly and reflect our Personality?
Replika’s approach says “let’s use previous and on-going txting conversations as the sample data and feed it to our neural networks.”
Well I welcome anybody to go to Replika.ai, get themselves their very own ProxyBot and give it a try. This is not something that works — at all.
But before I tell you how WE (Cantervision) think we’re gonna solve this challenge, let me first go over what I see are the layers of the onion of faux, fake, pseudo, reflective, and extrapolated Personality. THEN I’ll tell you how we think TRUE Personality can be synthesized!
Faux Personality
Almost all ChatBots have Easter Eggs built in. These Easter Eggs are triggered when you ask “Tell me a Joke” or use other such prompts. Siri has gotten famous for these stupid-ass behaviors. Alexa has them as well.
All they’re doing is triggering content and “hoping” that users “think” that this gives the ChatBot some sort of Personality.
Yah! Right!
Fake Personality

This technique goes a little further. Fake Personalities are when a ChatBot takes on an accent, uses cultural norms or vernacular phrasing and acts all “cool and shit.”
Sorry — that doesn’t work either.
I’m talking ’bout REAL Synthetic Personality.
That’s the holy grail!
Pseudo Personality
Pseudo Personality goes a little deeper. This is when actual “intelligence” (smart software) comes into play.
Pseudo Personality based ChatBots look for embedded semantic keywords or phrases in a conversation, try to understand what the “embedded Content” of the conversation is, and use those keywords as triggers to get the ChatBot to say something “relevant” (which reinforces how “smart” the ChatBot is) while also giving the ChatBot the air and feeling of some sort of Personality.
Travel Bots do it like this:
“Oh you’re going to Jamaica, I just LOVE Reggae.”
or Shopping Bots do it like this:
“I just LOVE that color on you, it reminds me of…….”
Services like Pandorabots are real good at this sort of stuff.
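The keyword-trigger move is simple enough to sketch in a few lines. The trigger table below is invented for illustration; real services like Pandorabots are vastly richer, but the shape of the trick is the same:

```python
# Pseudo Personality sketch: scan a message for semantic keywords
# and fire a canned, personality-flavored reply. The trigger table
# is hypothetical, invented purely for illustration.

TRIGGERS = {
    "jamaica": "Oh you're going to Jamaica, I just LOVE Reggae.",
    "paris": "Paris! I'm a sucker for a good croissant.",
}

def pseudo_personality_reply(message):
    """Return a canned reply if a trigger keyword appears, else None."""
    text = message.lower()
    for keyword, reply in TRIGGERS.items():
        if keyword in text:
            return reply
    return None

reply = pseudo_personality_reply("Book me a flight to Jamaica next week")
```

Notice there’s no Personality anywhere in there, just a lookup table and a hope that the user fills in the rest.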
Lauren Kunze (Pandorabots CEO) was at the event and was absolutely brilliant explaining how her company is kicking ass. So clearly there are LOTS of folks who are happy with these kinds of “pseudo personality” ChatBots.
Ethan Bloch (of Digit) talked about (at Loic’s event) how their customers (Digit is a savings App) are very sensitive to being “judged” by the Digit Savings Bot. This (of course) is exactly the effect Digit WANTS. They want their customers to think of the DigitBot as some sort of wise, “your-best-interests-at-heart” assistant.
That’s the “pseudo-personality” effect.
Reflective Personality
Stephanie Volftsun of Bubby talked about (at the Leade.rs event) matching their ChatBot’s behavior to the personality of the human who is assisting their ChatBots in their on-line dating concierge service.
You can’t have a ChatBot talking one way (via the human) and sounding completely different in its automated scripts and question/answer dialogs.
Facebook’s M service will have humans combined with automation. So this is totally a “trend.”
Creating a hybrid human-ChatBot service is actually kind of smart. It’s really hard to scale, and I assume that’s what’s “holding up” Facebook’s M, but regardless, reflective personality is definitely “a thing.”
A reflective approach to Personality matches what the end-user human hears and sees in the txting experience, keeping the experience consistent.
Though this clearly is an important angle to get right — it has NOTHING to do with real Personality!
Extrapolated Personality
This is what Replika.ai does.
It’s based on the notion that a Personality can be extrapolated from txting and conversations and that this content can then be used to train a machine learning engine.
Yes — that’s partially true — but not enough.
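As a toy stand-in for the extrapolation idea, here’s a tiny Markov-style sketch that learns word-to-word habits from someone’s past messages. To be clear: Replika’s actual pipeline is a neural network, not this; the sketch just shows what “extrapolating a voice from txting history” means at its crudest.

```python
# Extrapolated Personality, toy version: count which word tends to
# follow which in a person's past messages, then echo their habits.
# A crude stand-in for the neural approach, for illustration only.

from collections import defaultdict

def train(messages):
    """Build word-to-next-word transition counts from past messages."""
    chain = defaultdict(lambda: defaultdict(int))
    for msg in messages:
        words = msg.lower().split()
        for a, b in zip(words, words[1:]):
            chain[a][b] += 1
    return chain

def most_likely_next(chain, word):
    """The single most frequent follower of `word`, or None."""
    followers = chain.get(word.lower())
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = ["i love reggae", "i love reggae music", "i hate mondays"]
chain = train(corpus)
nxt = most_likely_next(chain, "love")
```

Even scaled up a million-fold, this kind of extrapolation only ever echoes what you already said; it never lets you author what your bot should say next.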
Cantervision’s approach to Synthetic Personality
I am an authoring tool dude. I’ve been creating authoring tools my entire career — and that’s what I’m best known for.
When I first hung out with Adam Cheyer (of Viv) we talked about the tools they’re developing for Viv. Adam made it clear that they are exclusively focusing on programming tools for coders.

So I started focusing on authoring tools for end-users. These end-users will be NON-programmers who wish to define, control and manage ChatBots.
So imagine how excited I got when Esther Crawford announced her EstherBot and her friend Chris Messina’s MessinaBot!
These sorts of “build your own Bot” kits make no attempt at focusing on Experience. They deliver a totally customized, personalized bot, but one with no specific “Personality” per se.
It’s all about Intent.
Today’s ChatBots are all about figuring out what an end-user wants to see happen, and making that happen. It’s a goal-oriented approach, which is what’s required for ecommerce, support, wellness, recommendations, etc.
One might have 50 ways to say “I want to buy a pair of shoes,” and the ChatBot’s job is to narrow them all down to one concrete answer.
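That intent-narrowing funnel is easy to sketch. Real systems use trained classifiers; this hypothetical keyword-overlap lookup just shows the “50 phrasings, one goal” shape of the problem:

```python
# Intent detection, bare-bones sketch: map many phrasings onto one
# goal by keyword overlap. The intent table is hypothetical; real
# systems use trained classifiers, not hand-written sets.

INTENT_KEYWORDS = {
    "buy_shoes": {"shoes", "sneakers", "boots"},
    "track_order": {"track", "shipping", "delivery"},
}

def detect_intent(message):
    """Return the intent whose keywords overlap the message most."""
    words = set(message.lower().split())
    best, best_score = None, 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

intent = detect_intent("I want to buy a pair of sneakers")
```

Whatever phrasing goes in, one goal label comes out. Useful, but notice there’s zero self-expression anywhere in that funnel.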
That has nothing to do with self expression or Personality.
So Cantervision’s approach is to ignore Intent and 100% focus on Expression.
This “self-expression” game started with Dave Winer and blogging and grew into something pretty big and important! Who would ever have thought that a President would be scandalized (Monica Lewinsky) or entire countries would revolt — because of blogging? Who would have thought that multi-billion dollar empires would be born (Facebook, Twitter, Wordpress, etc.) — riding on the coat tails of blogging?
The same thing is true about Internet Video Personalities!
Did any of us see PewDiePie or Twitch or even Chris Pirillo or Leo Laporte — when YouTube launched?
I think the same huge thing is gonna happen with ChatBots.
Cantervision’s approach is to ingest as much as possible (homage to Replika.ai and Ray Kurzweil) but ALSO provide authoring tools which will allow end-user authors to feed their bots exactly what they want their bots to say.
That’s what (I think) my daughter will want and that’s what we’re gonna build for her.