Having a friend named AI

Two visionary ideas I'd like to see AI achieve one day.

Wei Cheng
4 min read · Feb 13, 2019

Nowadays there are a lot of AI chatbots and assistants, but I think one day AI will be a more common presence in our lives, just like our friends. Here are two visionary ideas I'd like to see AI achieve one day.

Idea 1. AI Classmate


Imagine you're in a group meeting for your usability class, and two teammates are arguing over which of two proposals to pick for the final project. The meeting has already dragged on for two hours. The submission deadline is 11 pm, it's already 9 pm, and you really want to go home and chill. You hope the group can agree on one good solution now.

……….💤
Just as you're about to fall asleep, Andy says something. He pulls up a summary organizing everything the group has discussed tonight, combined with data from 10 papers, 5 textbooks, and audio files from interviews with 10 participants. The group then makes its decision based on Andy's suggestions.

Who’s Andy?

Andy is an AI classmate: he's round, 4 inches tall, and likes to sit on the table. Soon after this "AI classmate" hit the market, almost every class group invited one to join. Andy is the 30-day student free trial you signed up for. You think he does a pretty good job of assisting decision making, so you consider buying the premium version to unlock more features.

Will students become lazy because of them?

AI classmates aim to assist and guide students in their learning, not to do the work for them. However, some students might skip their own discussion entirely, just feed the AI classmate data, and sit back for results. The company might therefore design an algorithm that releases the analysis only after it detects that the students have genuinely faced a challenge or struggled with the problem to some extent, as in the sketch below.
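Purely as a thought experiment, here is a minimal sketch of what such a "struggle gate" might look like. The signals and thresholds (discussion time, ideas proposed by the students themselves, open disagreements) are all invented for illustration, not taken from any real product.

```python
# Toy sketch of a "struggle gate" for a hypothetical AI classmate.
# All signal names and thresholds below are made up for illustration.

from dataclasses import dataclass


@dataclass
class DiscussionSignals:
    minutes_of_discussion: float   # how long the group has actually talked
    distinct_ideas_proposed: int   # ideas that came from students, not the AI
    unresolved_disagreements: int  # open points of conflict the AI has heard


def group_has_struggled(s: DiscussionSignals) -> bool:
    """Return True only if the group seems to have genuinely wrestled with the problem."""
    return (
        s.minutes_of_discussion >= 30
        and s.distinct_ideas_proposed >= 2
        and s.unresolved_disagreements >= 1
    )


def maybe_release_analysis(signals: DiscussionSignals, analysis: str) -> str:
    # Withhold the AI's answer until the struggle threshold is met.
    if group_has_struggled(signals):
        return analysis
    return "Keep discussing a bit more. I'll share my summary once you've explored your own ideas."


print(maybe_release_analysis(
    DiscussionSignals(minutes_of_discussion=120, distinct_ideas_proposed=2, unresolved_disagreements=1),
    "Proposal B scores higher on the usability criteria you discussed tonight.",
))
```

The point of the sketch is the design choice, not the numbers: the assistant only speaks up after the students have done some thinking of their own.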

Also, since these AI classmates listen to conversations and review a lot of data, another concern is security. Even if the AI classmate company tells you it cares about data security, how can you be sure the AI won't be hacked by another, more advanced AI? Or that it won't leak sensitive information to others while searching the internet?

Idea 2. AI Mind Reader


A lot of movies and fiction talk about mind reading. A character puts on a headset or a pair of glasses, or is sometimes simply born with the superpower. After interacting with someone, they know what that person is thinking! Could this really happen in our lives? I think the answer is yes.

How can AI do mind reading?

One description of AI I learned in class is that it…

“Has better insights, with more confidence, faster than humanly possible.”

Thus, an AI mind reader would not actually be "reading" the mind. Instead, it would collect data and predict the outcome within a very short time, creating the feel of mind reading. But how exactly would it collect data? Some studies suggest there are already ways to do a kind of "mind reading".

For example, by analyzing people's intentions. Studies show that there is a part of our brains specialized for reading the intentions and traits of others. Called the mentalizing network, this collection of brain regions works together with the brain's mirror system: the mentalizing network helps us read the intentions of others, while the mirror system helps us read and experience their emotions. That is why people sometimes feel they can understand others' minds.
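To make that "predict rather than read" idea concrete, here is a tiny toy sketch. The observable signals (voice pitch, speech rate, a smile score) and the example data are entirely invented; the point is only that the "mind reading" is really just fast prediction from data the AI can observe.

```python
# Toy sketch: guess a person's likely emotion from a few observable signals
# instead of literally reading their mind. Signals and data are invented.

from math import dist

# Hypothetical examples: (voice_pitch_hz, speech_rate_wps, smile_score 0..1) -> emotion
EXAMPLES = [
    ((220.0, 3.5, 0.9), "happy"),
    ((180.0, 2.0, 0.2), "sad"),
    ((260.0, 4.5, 0.1), "angry"),
    ((200.0, 3.0, 0.5), "neutral"),
]


def predict_emotion(signals: tuple[float, float, float]) -> str:
    """Nearest-neighbor guess: a fast prediction that merely *feels* like mind reading."""
    return min(EXAMPLES, key=lambda example: dist(example[0], signals))[1]


print(predict_emotion((215.0, 3.4, 0.8)))  # -> "happy" (a guess, not a read)
```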

Who uses AI Mind Reader?

The users could be people who have trouble recognizing others' emotions, or people who cannot hear, see, or otherwise receive information from others easily. Or perhaps psychotherapists and counselors who want to know what their patients are thinking.

“Yeah, cool!” or “No way, it’s so creepy.”

There will be a lot of ethical concerns about this idea. What if I don't want my intentions to be recorded by others? What if the AI misunderstands emotions and ends up hurting its users? Or what if the AI accidentally reveals intentions that the person being read doesn't even know about themselves? It is very important to consider how to handle the feelings and findings that an AI mind reader discovers.

Wei Cheng, Feb 5, 2019
