The joy of never being recognized

Eric Baldwin is a creative director at Wieden+Kennedy. He is leaning over, staring at Needybot in a fifth-floor hallway.

Needybot: “Come closer, so I can see your face.”

Fifteen awkward seconds go by as Needybot processes.

Needybot: “Eric Baldwin!”

Cue Eric, me, and two colleagues — who happened to be standing there — losing our shit, shouting in joy and amazement at what just happened.


When you’re building a robot whose goal is to connect with people, the line between what counts as a feature and what counts as a bug starts to blur.

Needybot has a sophisticated face-recognition system, an intricate arrangement of related parts: a camera feed so it can see us, a catalog of snapshots of each face in various poses, and an algorithm that attempts to match each catalog to a real-life human being.
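A minimal sketch of how those parts might fit together, with every name here hypothetical and standing in only for the pieces described above:

```python
# Hypothetical sketch only: none of these names come from Needybot's code.
# It illustrates the three parts: a camera feed, per-person catalogs of
# face snapshots, and a matcher that ties a new face back to a person.

from dataclasses import dataclass, field


@dataclass
class FaceCatalog:
    person_name: str
    snapshots: list = field(default_factory=list)  # face crops in various poses


def recognize(face_crop, catalogs, matcher, threshold=0.6):
    """Score a fresh face crop against every catalog; return the likeliest person."""
    best_name, best_score = None, 0.0
    for catalog in catalogs:
        score = matcher.similarity(face_crop, catalog.snapshots)
        if score > best_score:
            best_name, best_score = catalog.person_name, score
    # Below the threshold, better to stay quiet than to guess wrong.
    return best_name if best_score >= threshold else None
```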

This level of sophistication was painstakingly constructed, all so people would feel connected to Needy, to create a little bit of joy every time you leaned down to interact: Needy sees you and says your name.

Easy, right?

In the first few weeks of Needy’s existence, one of the most common frustrations/rants/complaints was along the lines of:

“This thing never recognizes me.”

“Why won’t he recognize me? What am I doing wrong??”

“Recognized me as a dude and then I decided never to use it again.”

People were irate and offended.

David Glivar, an engineer in The Lodge at W+K, diagnosed that the recognition problem was real and that it was caused by two bugs in Needy’s code[1].

So the team did what good technology teams do: they fixed the bugs and deployed. And off Needybot rolled, having had the equivalent of corrective laser surgery.

Complaints subsided, and what had been Needy’s most notorious bug now worked as the feature it was meant to be.

There are two perspectives to this story.

On the surface, it’s an interesting story about the technical challenges of building face-recognition systems that are compact, mobile, and reliable.

However, there is something more interesting here.

If our quest was for Needybot to connect with people, is it possible that these bugs accidentally improved Needybot’s ability to do exactly that?

Is it possible that in the very moment Needybot leads you to believe it knows who you are, but then fails to get it right 90% of the time, it creates a pressure cooker of hope, care, frustration, and doubt?

After all, hope, care, frustration, and doubt are still feelings — just ones that, as designers, we are rarely given permission to target.

I am reminded of a line quoted by the author Elizabeth Gilbert:

“There are only two questions that human beings have ever fought over, all through history. How much do you love me? And Who’s in charge?”

I think those two questions are what poked at the people most frustrated by Needy’s inability to know who they were.

Not bad for a bug.

[1] One issue was that when collecting snapshots of people’s faces, we did not set an upper limit on how many snapshots Needybot should store. For some people this meant that their face-data catalogs grew very large, which in turn introduced recognition bias. This was solved by experimenting to find the ideal upper limit and then enforcing that limit in the code. Another issue was a bug in the related code library that caused a silent failure, meaning that for some people Needy did no retraining (i.e., improvement) of their face-data catalogs. Kind of like a recognition curse. That was fixed too.
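For the curious, here is a hypothetical sketch of what the two fixes amount to; the names, the limit value, and the logging are mine for illustration, not Needybot’s actual code:

```python
import logging

log = logging.getLogger("needy.faces")

MAX_SNAPSHOTS_PER_PERSON = 40  # assumed figure; the real limit was found by experiment


def add_snapshot(catalog, face_crop):
    """Fix one: cap each catalog so nobody's face data grows without bound."""
    catalog.snapshots.append(face_crop)
    if len(catalog.snapshots) > MAX_SNAPSHOTS_PER_PERSON:
        catalog.snapshots.pop(0)  # drop the oldest snapshot first


def retrain(recognizer, catalog):
    """Fix two: surface retraining failures instead of letting them pass silently."""
    try:
        recognizer.update(catalog.person_name, catalog.snapshots)
    except Exception:
        log.exception("Retraining failed for %s", catalog.person_name)
        raise
```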