There’s a bad habit that we, as an industry, need to shake. We need to stop treating machines like they’re people. Example? I was in a shop the other day and one of the self-service checkouts had a note covering the screen.
“Sorry, I’m poorly at the minute.”
If you’re not from the UK, ‘poorly’ in this context means unwell. I hate to break it to you but the machine isn’t unwell. It can’t tell you that it’s not feeling itself. It won’t ever apologise for it.
Machines don’t get ill; they malfunction. Machines don’t feel remorse about being out of service; they just sit there, unable to do anything. Machines don’t have a sense of self; they’re just waiting for an input so they can perform some calculations and give an output. Suggesting otherwise is an accident waiting to happen.
We do it because it makes machines look less intimidating. However, it’s as stupid as the “shrink it and pink it” phase we went through when we were trying to make things appeal to women by making them smaller and more girly. It’s an obvious strategy that has a low chance of working.
Worse, it has a chance of backfiring. While machines are kept at arm’s length, there’s no emotional connection. If one breaks or performs badly, we’ll turn it off without a second thought. If we treat it as a member of the family or a colleague, we’ll hesitate, not wanting to hurt its feelings.
It’s been shown that if you give your dog a human name, it’s more likely to be obese. We’re hardwired as humans to be nice to other humans, and we can’t switch that instinct off when we’re confused about what’s a real person and what merely appears to be one.
I’ve built a chatbot at work to help with customer service. It has a human name, because that was a good project codename (naming things is the hardest part of software engineering) and it never got changed. For a month, its accuracy wasn’t very good. The team didn’t flag the problem; they let it underperform.
“He’s only just started, give him time to learn.”
Yes, it’s a machine learning system that should improve over time, but it requires training and we weren’t doing a good job of that. Just the act of giving the bot a human name caused the team who rely on the software to ignore the bugs in the vague hope that they’d go away in time. Let me assure you that they’d never normally stand for buggy software!
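The fix is to take the judgement away from goodwill and put it in a metric. As a minimal sketch (the function names and the 80% threshold are illustrative, not from the actual project), an automated accuracy check would have flagged the bot regardless of how much the team wanted to "give him time":

```python
# Illustrative sketch: alert on a chatbot whose recent graded answers
# fall below an accuracy threshold, instead of relying on humans to
# complain about software they've started to empathise with.

def accuracy(results):
    """Fraction of answers graded correct. `results` is a list of booleans."""
    return sum(results) / len(results) if results else 0.0

def should_alert(results, threshold=0.8):
    """True when recent accuracy drops below the acceptable threshold."""
    return accuracy(results) < threshold

# Usage: feed in the last batch of graded answers.
recent = [True, True, False, False, True, False, False, True]
print(should_alert(recent))  # 4/8 = 0.5 accuracy, below 0.8, so True
```

A dashboard number doesn’t care whether the bot has a human name.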
On the other hand, for the customers receiving answers from the chatbot, there was nowhere for us to hide. It looked like we’d hired a terrible person with no empathy. If the bot didn’t appear so human, they would have just thought that the bot had given a bad answer. It’s amazing how differently people behave with machines than with other people.
Remember when Boston Dynamics went through a phase of torturing their robots? You had to be pretty cold-hearted and emotionless not to go “awww” watching the robot dogs skittering around an icy car park after being kicked. The good news is that the robots arguably liked it! They are rewarded not with a treat but by reinforcing the learning systems that control their limbs whenever they successfully stay on their feet.
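The reward idea above can be sketched in a few lines (this is a toy illustration of a reinforcement signal, not Boston Dynamics’ actual control system): the controller earns reward only for the timesteps where the robot stays upright, and that signal is what gets reinforced.

```python
# Toy sketch of a reinforcement signal: the robot is "rewarded"
# only when it stays on its feet, e.g. after recovering from a kick.

def reward(is_upright):
    """Reinforcement signal: 1.0 for staying upright, 0.0 otherwise."""
    return 1.0 if is_upright else 0.0

# Score one episode: upright, upright, knocked over, recovered.
episode = [True, True, False, True]
total = sum(reward(step) for step in episode)
print(total)  # 3.0 - higher totals reinforce the controller that earned them
```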
We need to keep machines at arm’s length to ensure we don’t get too attached. We should be clear-minded if we need to pull the plug on our machines. We shouldn’t be in a position where we don’t want to hurt their ‘feelings’.
My point here is that we should present artificial intelligence for what it is. It’s not as clear-cut as never giving bots a human face, because users need some emotional connection to the thing they’re talking to in order to trust that it’s giving correct and considered answers.
Is there a specific amount of humanness that’s appropriate in every situation? No, it depends on the interaction. If you’re talking about taxes, you probably want it to be as robotic as possible. If you’re talking about healthcare, you probably want to lean more towards the human end of the spectrum. It depends how emotional your interaction is in real life.
When was the last time you considered how human to make your software?
I’m a software developer based in Birmingham, UK, solving big data and machine learning problems. I work for a health tech startup, finding creative ways to extract value from our customer data. I don’t have all the answers but I’m learning on the job. Find me on Twitter.