What Is Human about Artificial Intelligence?

Lessons from Humanity.AI

At Salesforce, we are delving into the ways AI might shape the experience of our users (see, e.g., "How to Meet User Expectations for Artificial Intelligence"). With that in mind, several members of our UX team were keen to attend Humanity.AI in February (videos of the talks are available). "Humanity.AI is a single-track conference chock full of talks with designers, animators, engineers, researchers, and product folks who are keeping humanity at the forefront of AI advancements."

Below are the key insights Kathy Baxter, Jennie Doberne, George Hu, Amy Lee, and Ayesha Mazumdar took from the event. From these insights, we make five recommendations for our teams and anyone working on AI.

The Algorithmic Justice League (Photo credit: Joy Buolamwini)

Insight 1: Focusing on an apocalyptic AI future distracts from the real risks of AI in the present

The widely publicized fear of AI robots rising up to kill us diverts much-needed discussion from the real risks we face with AI today: unconscious bias, data security, and abuse. Camille Eddy spoke about the danger of baking in bias when systems are trained on limited or prejudiced data sets.

For example, facial recognition technology trained on photo sets lacking African Americans cannot accurately detect African American faces. AI learns over time through complex algorithms, yet when unconscious bias exists in the original algorithm, it becomes increasingly magnified and persists with quiet malevolence. Will biased bots then "train" humans, reproducing stereotypes as technological truths we no longer question? We must develop AI guardians to protect the humans we are trying to help. We can do this by designing bots that make inequalities, security flaws, and abuse visible.

“Good advice for AIs = Good advice for people: Do what the best and smartest of us would have you do.”

— Chris Noessel, Global Design Practice Lead, Travel & Transportation at IBM

Recommendation: Build AI guardians, not just AI features and tools.

Dr. Vivienne Ming, Theoretical Neuroscientist, Technologist & Entrepreneur (Photo credit: @Humanity_ai)

Insight 2: Human creativity and adaptation are key to job security in an automated world

We may endeavor to build guardian bots, but this does not alleviate a widespread concern about AI: that robots will take over our jobs. This fear isn't limited to blue-collar workers; even professionals who went to school to develop specific skills are losing their jobs to automation. Yet none of the speakers believe that computers and AI will replace the creative aspects of work and life or remove the craft in craftsmanship. So while efforts are being made to teach kids, coal miners, and others to code, job retraining alone will not address the challenge posed by AI and automation. As Dr. Vivienne Ming explained in her talk, "How to Robot-Proof Your Kids," future workers will need to be creative, adaptive problem-solvers to successfully adjust to an ever-changing job market.

“The future of work in an AI-infused world is simple. We need creative, adaptive, problem-solvers.”

— Dr. Vivienne Ming, Theoretical Neuroscientist, Technologist & Entrepreneur

Recommendation: Nurture the creativity, adaptability, and problem-solving skills of your children, as well as your employees, coworkers, and users.

Mark Walsh, CEO at Motional

Insight 3: Bots can improve human collaboration

Fear of AI taking away jobs isn't the only risk in an AI-infused world; loss of human connection and relationships is another. However, AI also has the potential to improve human relationships. Chatbots can expand horizons in social networking and work collaboration as individuals and companies use these tools to enhance their brand and values. Chris Messina demoed what he calls a "Mebot," in this case his Messinabot. Most work profiles, according to Messina, are like our old MySpace profile pages: static and unappealing. Messina's bot aggregates content from other sources, integrates with other apps, and preserves the conversation history between bot and user so that previous exchanges endure.

Veronica Belmont and Jeremy Vandehey of Growbot focused on how their offering gamifies meaningful feedback to grow and enrich relationships at work. A successful company is one that encourages open feedback between colleagues while defining and modeling what appropriate feedback looks like. Growbot's position is that constructive criticism (feedback of a more negative variety) should be delivered in person, so its product is limited to positive feedback. While bot-ified feedback may enter the workplace, human judgment and emotional intelligence will still have a place in the future of work.

“Humanity doesn’t have anything to do with the bot; it’s the community. It brings out the humanity in the community.”

— Veronica Belmont, Product Manager at Growbot

Recommendation: Leverage employee and personal chatbots to enable collaboration and growth.

Joshua Browder, Founder & CEO at DoNotPay highlighting examples of how AI can help the most vulnerable. (Photo credit: Mindy Gold)

Insight 4: AI is good for the social good

In addition to enhancing collaboration, AI tools can enable amazing possibilities to improve the human experience. Dr. Vivienne Ming used AI to predict — 30 minutes in advance — when her son’s blood glucose would drop. Joshua Browder, a Stanford student who created the DoNotPay lawyer bot, has expanded his bot to help homeless individuals apply for government-assisted housing and refugees apply for asylum in the US, UK, and Canada.

Right now, most bots are used for entertainment and sales. As Browder demonstrates, however, we have the capacity to do so much more with AI beyond our tendency to focus on optimizing tasks (e.g., increasing click-through rates, decreasing time to resolve customer service issues). This is what should get all of us in AI out of bed in the morning.

“Chatbots can prosper if they help humanity. To do that they have to do a lot more than order pizza.”

— Joshua Browder, Founder & CEO at DoNotPay

Recommendation: Think beyond fun or tactical applications of AI to ones that benefit the greater social good.

Elena Ontiveros, Content Strategist at Facebook, shared recommendations for creating useful and engaging chatbots, such as having multiple responses for each user intent to avoid sounding like a robot.

Insight 5: Creating a good bot takes a lot of effort (and needs user research to validate)

It is clear that AI opens up a world of possibilities to enhance the human experience, but it is also clear that developing such AI is no trivial task. To gain acceptance, win adoption, and protect users, bots will need to build human trust, responding to human interaction through character, consistency, and tone. It can take years to build up a company's brand, but in a world where anything can go viral, it only takes days or hours for an AI tool to destroy it.

Elena Ontiveros, Content Strategist on Facebook Messenger, shared insightful recommendations for creating a useful and engaging bot that upholds and protects brand values. In the same vein, Mark Walsh at Motional gave a compelling presentation on the importance of developing a bot's character. He demonstrated that, out of the box, bots behave like sociopaths, and it takes a lot of work to fix that.
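Ontiveros's tip about giving each user intent multiple responses can be sketched in a few lines. This is a minimal illustration, not her actual implementation; the intent names and response variants below are hypothetical.

```python
import random

# Hypothetical intent-to-responses map: several variants per intent
# so the bot doesn't repeat itself verbatim and sound like a robot.
RESPONSES = {
    "greeting": [
        "Hi there! How can I help?",
        "Hello! What can I do for you today?",
        "Hey! What brings you here?",
    ],
    "thanks": [
        "You're welcome!",
        "Happy to help!",
        "Anytime!",
    ],
}

def reply(intent: str) -> str:
    """Pick a random variant for the detected intent."""
    variants = RESPONSES.get(intent)
    if variants is None:
        # Graceful fallback for intents the bot doesn't recognize.
        return "Sorry, I didn't catch that. Could you rephrase?"
    return random.choice(variants)
```

A real bot would layer tone and brand guidelines on top of this, but even simple variation like `reply("greeting")` returning a different greeting each session goes a long way toward feeling less mechanical.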

“Characters make us feel. We identify with them because they reflect back to us who we believe we are or want to be.”

— Mark Walsh, CEO at Motional

Oren Jacob at PullString spoke about developing conversational relationships. Many AI assistants are already living in our homes (e.g., Alexa) and interacting with their human counterparts. What will happen when Alexa joins in a family argument or when our children scream demands at her? How do we protect users from AIs mimicking our bad behavior and perpetuating the biases Camille Eddy warned about?

As technology improves and people grow more comfortable with it, we can only guess at the future social norms of AI. Given all of AI's unknowns, organizations would be wise to enlist user research to explore the exciting benefits of this emerging technology while mitigating its potential abuses.

Recommendation: The rules of engagement are as important as the functionality. For the best results, leverage user research and design thinking approaches to ground decisions in data, not preference or intuition.

Humanity.AI brought together amazing minds working in the field of AI, and each one focused on how we can keep the humanity in artificial intelligence. We haven't always gotten it right, and there will be more missteps to come, but as researchers, designers, and developers it is our responsibility to stay focused on users and their needs.

If you attended, we'd love to hear your insights and whether they've changed how you approach your AI work!

Thank you Jennie Doberne and Raymon Sutedjo-The for all of your feedback!

Follow us at @SalesforceUX.
Want to work with us? Contact us.
Check out the Salesforce Lightning Design System.



Kathy Baxter

Architect, Ethical AI Practice at Salesforce. Coauthor of "Understanding Your Users," 2nd ed.