Can we make our own decisions?

Seminar 1/ CMU MDes 2016 Fall

Among all the science fiction movies I have seen, my favorite is A.I. Artificial Intelligence. The reason I am so moved by this movie is that the robot boy actually has the ability to love, just like a real human being. Instead of behaving like a machine, the boy is intelligent enough to need affection. Now, back on Earth, when it comes to defining machine intelligence, I would rather discuss something more practical: decision making.

There is no denying that decision making is an extremely complicated process, which involves gathering relevant information, identifying alternatives, evaluating evidence, choosing among alternatives, and eventually taking action. This process is not only time-consuming but also exhausting. No wonder people are willing to give up the right to choose as soon as an algorithm comes along that enables machines to make “objective” decisions for them.

In Alan F. Blackwell’s article, he expresses his concerns about problems that could be caused by decision making in machine learning systems based on “unseen data or expert design abstractions”. Similarly, the techno-sociologist Zeynep Tufekci gave a TED talk on the same topic this June.

The video is from TED.

Unlike traditional coding, machine learning is a process in which people feed the machine all kinds of data, “including unstructured data”. After digesting all this data, instead of producing a single, clear answer, a machine learning system provides ambiguous ones, such as “this kind of thing may be what you are looking for”, just like a human. Although such systems can be powerful and helpful for solving problems that might have taken enormous amounts of time in the past, there is a flaw that cannot be neglected: people know nothing about the process behind the decision making, the algorithm itself.
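The contrast can be sketched in a few lines of Python. Everything here is hypothetical, the spam-filter framing, the words, and the weights, but it shows the shift from a hand-written rule that returns a definite verdict to a learned model that only returns a confidence score, inheriting whatever biases its training data carried:

```python
def rule_based_spam_check(text: str) -> bool:
    # Traditional coding: the programmer writes the decision logic explicitly,
    # and the answer is a single, clear yes or no.
    return "free money" in text.lower()

# A toy "learned" model: these weights stand in for parameters a real
# system would fit from training data (whose biases it would inherit).
learned_weights = {"free": 0.9, "money": 0.8, "meeting": -1.2}

def learned_spam_score(text: str) -> float:
    # Sums the weights of known words, then squashes the total into a
    # probability-like score between 0 and 1 (logistic function).
    score = sum(learned_weights.get(w, 0.0) for w in text.lower().split())
    return 1 / (1 + 2.718281828 ** -score)

print(rule_based_spam_check("Claim your FREE MONEY now"))  # definite: True
print(round(learned_spam_score("free money offer"), 2))    # only a score: 0.85
```

The second function never says “spam” or “not spam”; someone still has to choose a threshold, and that choice, like the weights themselves, is invisible to the person on the receiving end.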

Since all these algorithms are built by companies to achieve business goals, there is good reason to question how they are designed and what exactly goes on inside the black box. We don’t know what they learn: what if the data they learned from was biased in the first place? How do they analyze this data: are they designed to meet users’ needs or a company’s interests? Is there a machine learning system influencing us behind every product or service we use?

When J. C. R. Licklider wrote about Man-Computer Symbiosis, I am sure he was not expecting machines to take the place of humans, along with our responsibilities and our right to choose. Before being threatened by A.I., we are already threatened by our own avoidance of responsibility and our ignorance of the defects in machine learning processes.

As designers, we should not only be aware of these problems but also attempt to fix them. By starting to give the right to choose and to make decisions back to people, we may explore new possibilities and redefine the roles of human and machine in the Man-Computer Symbiosis.
