Ethical considerations for practical AI applications

We’re only human, after all

What kind of line has sixteen balls?
I don’t know. What kind of line has sixteen balls?

A pool cue!

OK, so… How did that joke make you feel? Maybe it made you laugh a little. Maybe not. Or maybe you did what I did, and made a *badum tss* noise in your head.

Sure, it’s structurally sound. It has a setup and a punchline that makes sense. But it’s the kind of gag that is more likely to elicit a groan, a pity laugh, or perhaps even no response at all.

But there’s something different about this joke. It wasn’t written by a person, but by a machine.

Here’s another example:

What do you get when you cross an optic with a mental object?
I don’t know. What do you get when you cross an optic with a mental object?
An eye-dea!

We all react differently to jokes because humour is a subjective animal. The things you find funny depend on a range of factors — the language you speak, the cultures you operate within, your mood at that moment. That means being funny requires skills like self-awareness, spontaneity and empathy.


At TWG, we have a Slack channel called #dadjokes. It would be simple enough to use machine learning to write the kind of so-bad-they’re-good puns that live in the #dadjokes channel: jokes that sort of make sense, based on a predefined set of rules.

Source: the #dadjokes channel in TWG’s Slack

Over time, our #dadjokes AI could “learn” more about which jokes are funniest based on, say, the data from our emoji reactions. But even with inputs, training and time, the differences between our robo-comedian and a professional stand-up comic would still be palpable.
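For the curious, here’s a minimal sketch of what such a rule-based #dadjokes bot might look like. Everything in it is hypothetical: the pun rules, the template and the emoji weights are invented for illustration, not taken from any real bot.

```python
import random
from collections import defaultdict

# Hypothetical rule-based pun generator: a fixed template plus a small
# set of hand-written "crossing" rules (two concepts -> punny answer).
PUN_RULES = [
    ("an optic", "a mental object", "An eye-dea!"),
    ("a pony", "a sore throat", "A little hoarse!"),
]

TEMPLATE = "What do you get when you cross {a} with {b}? {punchline}"

def tell_joke():
    a, b, punchline = random.choice(PUN_RULES)
    return TEMPLATE.format(a=a, b=b, punchline=punchline)

# "Learning" which jokes land: tally weighted emoji reactions per joke.
reaction_scores = defaultdict(int)
EMOJI_WEIGHTS = {"joy": 2, "laughing": 2, "neutral_face": -1}  # invented weights

def record_reaction(joke, emoji):
    reaction_scores[joke] += EMOJI_WEIGHTS.get(emoji, 0)

def best_jokes(n=5):
    # Rank by accumulated score: correlation with laughter, not comprehension of it.
    return sorted(reaction_scores, key=reaction_scores.get, reverse=True)[:n]
```

Even with that feedback loop, the bot is only pattern-matching on reactions.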

That’s because the AI can’t actually understand why something is funny, or in what context it would make sense.


“But Holly,” I can hear you asking. “What exactly do joke-writing robots have to do with how I should be thinking about AI IRL?”

Never fear, dear reader. All is about to become clear…


🤖 Robots make pretty rubbish comedians 🤖

Experiments like these teach us that robots make pretty rubbish comedians.

Or, to put it another way: Experiments like these remind us of what we’re excellent at.

Lately we’ve been asking around about the things humans can do that robots can’t. And here’s the cool thing… The same ideas keep cropping up, time and time again, no matter who we ask.

Generally, people think people are excellent at things like:

  • Empathy, passion, imagination, joy 💜
  • Creativity, spontaneity, linguistic sophistication 💛
  • Love 💚

Turns out, when we’re asked to compare ourselves to robots, we consistently come up with some really excellent ideas about our strengths.

Or, to put it another way: When we compare ourselves to an AI ‘other’, we connect more deeply with what we collectively believe are essential human qualities.

I mean, that list is pretty hard to argue with, I reckon. (However, I do fearlessly invite you to submit your counter-arguments in the comments section, on Twitter, Facebook, over email or via WhatsApp — +1 647 920 9650.)


“But Holly,” I can hear you asking. “Can you just get to the point? I want to create valuable, meaningful outcomes with AI. I have needs.”

You’re right. Let’s get into it.


When redesigning software to make space for AI, we need an ethical, human-centered approach

An easy way to think about AI is as a set of computer science techniques that can give your software superpowers. And there are lots of practical ways you could be applying AI to your software today to make it better.

Research conducted by Harvard Business Review identified three patterns that separate the best from the rest when it comes to applying AI:

  1. Put AI to work on activities that have an immediate impact on revenue and cost.
  2. Look for opportunities in which AI could help you produce more products with the same number of people you have today.
  3. Start in the back office, not the front office.

You can also review some examples of AI in action here, here and here. So, lots of options, and plenty of opportunities. But how do you pick the right thing to build?

The answers come from taking an ethical, human-centered approach. From putting humans (that’s your team, your customers, and humanity-at-large) at the heart of your planning, and empowering them to do what they do best.

If we understand AI as a set of computer science techniques that give your software superpowers, we must also think deeply about the implications of unleashing that power.

Or to put it another way: In imagining how you can make use of these fresh, exciting techniques, it is imperative to think deeply and critically about the why as well.



Positively shaping the development of artificial intelligence is one of the most pressing challenges of our time. AI is powerful, but it is also risky. It’s risky because intelligent automation has real-life consequences for how people work. It’s risky because machines that learn will learn to replicate our biases. And it’s risky because making dumb choices about deferring to smart machines could mean we miss out on some valuable, nuanced human-to-human interactions.


Consider this: The rise of the robo-personal assistant

Let’s look at an example. Robots might be crappy comedians, but they have the potential to be great personal assistants. The intelligent virtual assistant market was valued at $1.1 billion in 2016, and is projected to reach $11.9 billion by 2024. Examples of new, software-centric companies building products in this space include X.ai, Zoom.ai and Fin.

“Get back to your real work and leave the busywork to us.” — Zoom.ai

When we look at these products through an ethical, human-centered lens, what do we see?

As a user, these products totally help me focus on doing more of the things humans are great at. They empower me to be more creative and imaginative, because they offer the gift of time. I can be more present with my team, because I’m lowering the amount of cognitive overhead required to organize my day.

But what about the risks? Here are three of the things I’d be thinking about if I were developing a robo-assistant product:

1. Are my data sets amplifying bias?

AI-infused products are trained on sets of data — a “learning corpus.” But these data are often riddled with bias. For example, researchers at the University of Virginia and the University of Washington discovered that, in a popular image dataset, pictures of cooking were 33% more likely to involve women than men. The model they trained on that data amplified the disparity to 68%.
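To make the idea of amplification concrete, here’s a toy sketch of how you might measure it: compare how often an activity co-occurs with a gender in the training labels versus in a model’s predictions. The numbers below are invented; they just echo the shape of the study’s findings.

```python
# Toy illustration of measuring bias amplification. The label counts and
# prediction counts are invented for this sketch.

def bias_toward_women(pairs, activity):
    """Fraction of `activity` examples whose agent is labelled 'woman'."""
    agents = [gender for act, gender in pairs if act == activity]
    return sum(1 for g in agents if g == "woman") / len(agents)

# Hypothetical training annotations: (activity, agent_gender) pairs.
training_labels = [("cooking", "woman")] * 66 + [("cooking", "man")] * 34

# Hypothetical predictions from a model trained on those labels.
model_predictions = [("cooking", "woman")] * 84 + [("cooking", "man")] * 16

train_bias = bias_toward_women(training_labels, "cooking")   # 0.66
pred_bias = bias_toward_women(model_predictions, "cooking")  # 0.84

print(f"Training data: {train_bias:.0%} women")        # 66% women
print(f"Model output:  {pred_bias:.0%} women")         # 84% women
print(f"Amplification: {pred_bias - train_bias:+.0%}")  # +18 points
```

The model doesn’t just inherit the skew in its training data; optimizing for accuracy on skewed data can make the skew worse.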

2. What will my product learn about people?

As we design machines to serve the needs of humans, are we including all humans? Whose data are we collecting?

Fin homes in on the notion that “everyone needs an assistant.” But their definition of ‘everyone’ seems worryingly narrow — articulate, monied early-adopter types who need to balance their SoulCycle schedule with trips to the dog groomer, or rent picnic tables for a friend’s birthday party.

Fin’s definition of “everyone” is narrow

As one of the aforementioned early-adopter types, I’m ready for Fin to take my money (as soon as they launch in Canada). But you know who really needs more time? Single parents working multiple jobs. Unpaid caregivers looking after their families. Stressed-out nurses.

Fin’s pricing model — $1/minute with a monthly minimum of 2 hours, a floor of $120 every month — is deeply exclusionary. It’s a powerful product that’s completely out of reach for the people who could stand to benefit the most. And the problems it will be learning how to solve? More like minor inconveniences, really.

Perhaps a TOMS-style “one-for-one” model, or a massively reduced rate for underrepresented groups, could help here. Let’s generate data that helps smart machines understand the difference between real human need and privileged wants.

3. Who stands to lose when my product succeeds?

The Philippines is the world leader in virtual assistance services. Virtual assistants provide remote support for administrative tasks, such as scheduling meetings, and often work from home whilst supporting clients from around the world.

In an emerging market that offers few career opportunities locally, the virtual assistant industry has been driving steady economic growth. It provides jobs to more than a million people, and is the second-biggest contributor to the country’s GDP. Are intelligent virtual assistants eating the lunch of personal virtual assistants? Robots don’t need lunch. But people definitely do.

AI might be risky. But some of the biggest risks can be mitigated if we remember to stay curious, think philosophically and ask interesting questions. That’s the path to leveraging AI to create products that are accessible, inclusive and serve a diverse range of people.


Let’s close things out with one final joke

“Waiter! Waiter! What’s this robot doing in my soup?”
“It looks like he’s performing human tasks twice as well, because he knows no fear or pain.”

This one was written by a human… pretending to be a robot… telling a joke to another robot. It’s funny — or at least, I think so — because it taps into our very human fears about intelligent machines.

Even for us committed optimists — the ones who believe in the power of tech to help create a peaceful, sustainable future for humanity — AI can feel kinda scary.

Philosopher Mark Kingwell, writing about artificial intelligence in 2017, explains it like this:

“Fear remains the dominant emotion when humans talk about technological change. Are self-driving cars better described as self-crashing? Is the Internet of Things, where we eagerly allow information-stealing algorithms into our rec rooms and kitchens, the end of privacy? Is the Singularity imminent?” — Mark Kingwell (Artificial Intelligence in 2017)

And yep, I’ll admit it. Even I have some fears about a future where super-intelligent robots understand humanity only through the data of the privileged.

But we shouldn’t be afraid. Instead, we should be hopeful whilst thinking critically and carefully about the products we build. We should be bold, but not cocky, as we work together to shape the future.

Or to put it another way: AI-enhanced software has the potential to transform both business and society. Let’s focus on applying it in a way that gives us all more time to invest in the things we’re collectively great at — empathy, passion, imagination and love.


💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚
🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖🤖
💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚💚


Holly Knowlman is TWG’s Director of Impact. There, she works to align the company’s growth initiatives with opportunities to make meaningful contributions to society’s most pressing challenges. Connect with her on Twitter, Facebook, over email or via WhatsApp — +1 647 920 9650.

Part of #ConnectTheBots: A new series exploring the human side of artificial intelligence from TWG. We help teams apply practical, purposeful AI solutions to make their software better.