Day 1: Emo robots

SXSW: An Australian’s story

WOOHOO! We made it!

And to clarify who "we" are — I'm the shorter half of a creative team at WiTH Collective (Phil Robbie is my whiskered partner in crime).

This year we were lucky enough to win People’s Choice (and this trip to SXSW) in the Brainbox innovation competition run by the Dentsu Aegis Network (ANZ).

…So a big muchos gracias to Brainbox, DAN, our mentor Ben, and WiTH Collective for getting us here!

This is us (Phil and I) just before boarding our 15hr flight to Dallas, oblivious of the jetlag ahead.

But before I get into the learnings and all the fun educational stuff of SXSW — I just need you to know that my first impression of America is EXACTLY what I thought it would be:

Our welcome sign at the Dallas airport
  • The accents are straight outta the movies,
  • Lots of people speak Spanish,
  • Americans don’t understand my Aussie accent,
  • The cabs are yellow,
  • People tip (but not on coffee?!),
  • The free-pours are both free and poured freely (whoever invented media sponsorship deserves a medal),
  • The BBQ is PHENOMENAL,
  • AND there are cowboys [if anyone knows whether their hats are Akubras, Phil and I are in debate and don't want to google the answer — so please feel free to weigh in].

Now, down to business…

Designing Emotionally Intelligent Machines

by Sophie Kleber (from Huge)

I started my day off with an EPIC talk on affective computing: systems and devices that can recognise, interpret, process, and simulate human affects — basically teaching AI to feel.

And this is kind of a big deal ‘cos it’s expected to be a $36.7bn industry by 2021.

In a nutshell, AI exists to help make people happy — a support system that is always there for us, at any given time. But human emotions (and happiness) aren’t black and white — they’re really complex.

Martin Seligman’s interpretation of happiness is to “flourish.”

And before we can build these bots to give us what we want (and make us happy) — we need to know about how and why we feel emotions in the way we do.

But this is all uncharted territory, and we can only design what we already understand — so, to avoid repeating mistakes like Facebook's [remember when their testing made thousands of users sad in 2014?], we have to be careful when we do it. [That, and we also don't want to bring Ex Machina to life.]

So for the moment, the tech falls into three facets: facial recognition (SimSensei), voice recognition (Siri and Alexa), and biometrics (Fitbit).
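For anyone who likes this stuff concrete: here's a minimal sketch, in plain Python, of how readings from those three channels might be fused into a single affect estimate. None of this comes from Sophie's talk — every name and number below is invented for illustration, and a real system would use actual facial/voice/biometric SDKs rather than hand-fed values.

```python
# Toy illustration only — not a real SDK. AffectReading and fuse()
# are made-up names for this sketch.

from dataclasses import dataclass

@dataclass
class AffectReading:
    """One channel's guess at the user's emotional state."""
    channel: str        # "face", "voice", or "biometrics"
    valence: float      # -1.0 (negative) .. +1.0 (positive)
    confidence: float   # 0.0 .. 1.0

def fuse(readings: list[AffectReading]) -> float:
    """Combine the three facets into one confidence-weighted valence score."""
    total_weight = sum(r.confidence for r in readings) or 1.0
    return sum(r.valence * r.confidence for r in readings) / total_weight

# Example: camera says mildly happy, voice says neutral, wearable says stressed.
readings = [
    AffectReading("face", valence=0.4, confidence=0.7),
    AffectReading("voice", valence=0.0, confidence=0.5),
    AffectReading("biometrics", valence=-0.6, confidence=0.8),
]
print(f"fused valence: {fuse(readings):+.2f}")  # slightly negative overall
```

The confidence weighting is just one plausible design choice — the point is that no single channel gets trusted on its own.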

But before we build, we need to ask: what is the desire for emotion?

And then we need to get permission to play, which leads to a choice of the reaction we want from the AI and, from there, a quadrant of possible responses.

The three reactions in more detail (a toy sketch follows the list):
1. React like a machine — e.g. car safety AI that reads your face and knows when to pull over. It's also trainable: if you tell OK Google to "shut the f*** up", it will follow the command without getting emotional about your language.
2. React like an extension of self — the human user stays in control; the AI is aware of their emotions but doesn't try to affect them. E.g. the wellness space, with Baymax in Big Hero 6 or the "Feel" wristband.
3. React like a human — the user lets the AI interfere with their emotions, so the machine actively aims to change them. E.g. if you tell Alexa you're sad, she'll give advice on how to improve your emotional state.
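To make the difference between the three modes concrete, here's a toy sketch under my own assumptions (nothing from Sophie's slides) — the designer's chosen mode is basically a branch on how much the machine is allowed to interfere, and note how mode 3 checks for that "permission to play" first:

```python
# Toy sketch of the three reaction modes — names invented for illustration.

def respond(emotion: str, mode: str, user_consented: bool = False) -> str:
    """Pick a response style based on which reaction mode the designer chose."""
    if mode == "machine":
        # 1. React like a machine: act on the signal, no emotional colouring.
        return "Fatigue detected. Pulling over at the next safe stop."
    if mode == "extension_of_self":
        # 2. Extension of self: mirror the emotion back, don't try to change it.
        return f"I'm noticing signs of {emotion}. Just letting you know."
    if mode == "human":
        # 3. React like a human: actively try to shift the user's state —
        #    but only with permission to play.
        if not user_consented:
            return "I can suggest something to lift your mood, if you'd like."
        return f"Sorry you're feeling {emotion}. Want to hear a joke, or call a friend?"
    raise ValueError(f"unknown mode: {mode}")

print(respond("sadness", "human", user_consented=True))
```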

At the moment, the three main fields of research are stress research and relief; emotional eating and weight loss; and emotion comprehension — interpreting feelings, and teaching people how to interpret them, for those with autism or similar conditions.

Sophie Kleber then went on to discuss the basic characteristics the AI needs in order to be comfortably accepted by users:

Obedient — the machine knows you're the boss.
Empathetic — it immerses itself in your feelings, while having none of its own.
Authoritative — it has the knowledge to assist you, and you can trust it.
Good-humoured — it can help you out of a difficult situation (humour makes it easier for the user to forgive the machine when it's wrong).
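If you squint, those four characteristics read like a persona config. A purely hypothetical sketch — the trait names come from the talk, everything else is mine:

```python
# Hypothetical persona config — not any real assistant's settings.
PERSONA = {
    "obedient": True,        # the user is the boss; commands are never refused
    "empathy": "mirror",     # immerse in the user's feelings, claim none of its own
    "authority": "expert",   # knowledgeable enough to be trusted with advice
    "humour": "recovery",    # deploy humour when the machine gets something wrong
}

def on_misunderstanding(persona: dict) -> str:
    """Humour makes it easier to forgive the machine when it's wrong."""
    if persona["humour"] == "recovery":
        return "Well, that went straight over my circuits. Mind trying again?"
    return "Sorry, I didn't understand. Please repeat."

print(on_misunderstanding(PERSONA))
```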

And just like a human relationship, it should be a gradual progression of intimacy between machine and user, evolving and growing with time.

Overall, emotions are serious business and we shouldn’t be manipulating them for fun. We should only be designing what we truly understand, and be happy to take it slow.

It also means that we now have the ability to further develop brands to have stronger and deeper personalities than ever before. AMAZING! The opportunities!

Anyway, back to Phil, me, and our Austin adventure:

We explored lounges…

The worst-kept secret of SXSW. Between speaker sessions, these business-funded fun-houses are all over the place.

We made our way through the Intel, IBM and National Geographic lounge spaces — AND THEY WERE AWESOME.

Not only did they have open bars and free food — but we got to play with the latest tech as we drank!! Geek heaven.

<Video evidence of our attendance will eventually live here>

Absorbed more smarts:

So post-fun, I went to another two sessions: one being a panel on whether or not the internet has a kill switch (it doesn’t) and another on emo AI…

And then the jetlag licked me:

Now I’ve got to be honest — the time difference whooped me and I basically lost consciousness in the second emo AI session.

So I went back to the hotel to nap off the zombie, before an attempted entry to the Opening Party (which became a defeated, but exciting wander around Downtown).

And on the way home (back to bed after the 2hrs of Austin action), I came across this gem:

Jewellery AND firearms. Errythang a girl needs in Texas.

And that was day one.