To be or not to be…more human-like

Matt Himes
Futures, Entrepreneurship and AI
2 min read · Sep 4, 2017

When we discussed the meaning behind the term cognitive design, we talked about designing for systems that can understand humans, learn from us, and reason like us. In other words, the current goal for AI designers is to create machines that are as human-like as possible. This makes sense, since for the foreseeable future AIs will mainly be task oriented and are not nearly advanced enough to be mistaken for a real human being. I wonder, however, whether creating something as human-like as possible should really be the ultimate goal.

Admittedly, this is treading more into the philosophical and less into the world of design and UX, but I think it is an interesting thing to talk about and to keep in the back of our minds when designing. Bill Gates recently expressed concern over the future of AI (source), and Elon Musk warned a room full of state leaders that “AI poses an Existential Risk” (source). Now, from reading the context, it appears Musk was mainly referring to the dangers of “big data” coming to define more and more industries, but I think the underlying issue comes back to the question of just how human we want our machines to be.

In It’s Time to Design Emotionally Intelligent Machines, the author mentions that humans are comfortable with machine interactions as long as they are in complete control over them, because if they are, “it most closely supports the idea of a servant or an emotional support system” (source). However, if the goal is truly to make machines able to understand, think, and reason like humans, then eventually we will reach a point where we no longer have complete control over what artificial intelligence systems think, say, or do. Do we want machines to be so human-like that they can think and make decisions for themselves, even if it means we are no longer in complete control? I am not saying that Terminator is our future; I am simply asking where we draw the line. For now, designing around the philosophy of making machines more human-like seems logical, but eventually we may reach a point where the differences are indistinguishable, and that presents some very real ethical dilemmas. I realize this is a design class and not a philosophy class, but I think starting out by questioning the basic premise of making machines more human-like could make for some interesting discussion.
