Can machines think as we think?

Yi Chin Lee
4 min read · Aug 7, 2019


Plans and Situated Actions

https://www.amazon.com/Human-Machine-Reconfigurations-Learning-Doing-Computational/dp/052167588X/ref=sr_1_2?keywords=plan+and+situated+action&qid=1565217728&s=gateway&sr=8-2

Humans now live in a world filled with more and more machines of varying levels of intelligence. Interacting with machines is inevitable, and those interactions are meant to make our daily lives easier or happier. However, machines sometimes break down and make life harder instead. For example, a coffee machine may fail to recognize a transparent cup and refuse to pour. Or, more seriously, Google's image-recognition AI once mislabeled Black people as gorillas, a problem caused by a biased dataset. Bias is part of human nature, so we cannot eliminate it anytime soon. But if we can create AI that adapts to different situations and behaves according to the circumstances at hand, it might have a chance to learn beyond that bias. How can we teach machines to learn beyond their given datasets and take situated action?

In her book Human-Machine Reconfigurations, Lucy Suchman describes two modes of human action: planned action and situated action. She uses European and Trukese navigators to illustrate the idea. European sailors travel with a plan and make each decision according to that initial course, embodying planned action; Trukese sailors set out without a fixed plan and navigate by responding to wind, temperature, and ocean conditions as they go, embodying situated action.

Like the European sailors, current practice focuses on making plans to solve problems. Computer scientists collect records of rational human decisions and use that data to train machines to behave like human experts. Cognitive scientists, in turn, treat the mind as an information-processing system that could exist in any physical form, and intelligence as something that can be extracted from the human mind and re-embodied. From this view came the computational approach of programming human intelligence into mathematical models, which provides the framework for today's artificial intelligence.

However, Suchman argues that these computational frameworks cannot capture situated intelligence. In real-world settings, she claims, all human activity is situated: even planned actions are "taken in the context of particular, concrete circumstances." She goes on to argue that while plans can provide useful information for building computational models, they do not reflect how humans naturally behave. Plans are also weak resources for action in the moment: they are task-oriented and cannot anticipate the unpredictability of specific situations. We therefore need a way to describe something like the Trukese navigation method. By studying how people respond to their environment situationally, we can build better models to bridge machines and humans.

To address this, scientists have tried the direct route: changing our algorithms so they can model situated action. Around the time Suchman was writing, IBM's supercomputer Deep Blue was introduced to the world. Its huge memory and fast search let it beat the human world chess champion, but it played by evaluating positions against vast amounts of pre-programmed knowledge, which places it firmly in the planned category. Years later, in 2017, AlphaGo Zero showed that a system can teach itself from scratch to master the game of Go. Using a deep neural network and a general-purpose self-play algorithm, AlphaGo Zero starts knowing nothing about the game beyond the basic rules and improves by playing against itself. It is a chance to show that a computer can improve without human examples and respond to situations it has never been shown.
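To make the self-play idea concrete, here is a minimal sketch in the same spirit, though far simpler than AlphaGo Zero's actual method: the agent is given only the rules of tic-tac-toe and improves by playing against itself and updating its own value estimates. Everything here (constants, function names, the tabular value store) is illustrative, not the real system.

```python
# Minimal self-play learning sketch: the agent knows only the rules of
# tic-tac-toe and learns state values by playing against itself.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)   # estimated value of each board state, from X's view
EPSILON, ALPHA = 0.1, 0.5     # exploration rate and learning rate

def choose_move(board, player):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:
        return random.choice(moves)            # explore a random move
    def score(m):
        nxt = board[:m] + player + board[m+1:]
        v = values[nxt]
        return v if player == "X" else -v      # O prefers states bad for X
    return max(moves, key=score)               # exploit current estimates

def self_play_game():
    board, player, history = " " * 9, "X", []
    while True:
        move = choose_move(board, player)
        board = board[:move] + player + board[move+1:]
        history.append(board)
        w = winner(board)
        if w or " " not in board:
            outcome = 1.0 if w == "X" else (-1.0 if w == "O" else 0.0)
            for state in history:              # nudge every visited state
                values[state] += ALPHA * (outcome - values[state])
            return
        player = "O" if player == "X" else "X"

for _ in range(20000):
    self_play_game()
print("states evaluated:", len(values))
```

The point of the sketch is only that nothing beyond the rules and the game outcome is supplied by a human; the agent's knowledge of "good" situations emerges from its own play.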

While we pour effort into advanced algorithms such as neural networks, they are still not a universal solution. At the same time, as designers, we can supplement the machine with another approach to rebuilding human-machine communication, one that is more flexible and more sustainable. In design practice, that means building interfaces that improve a machine's ability to explain itself and support mutual understanding.

Take Roomba, the most successful home robot on the market, as an example. If a Roomba simply shuts down or beeps annoyingly when it gets stuck under a table or hits a wall, people feel the machine is dumb or unprofessional. But if it wiggles back and forth the way a person or pet would when stuck, or asks for help in an endearing way, it encourages people to step in where it cannot handle the real-world situation on its own. Simple user-experience design may not deepen machine intelligence, but it can definitely improve human-machine interaction.
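A hypothetical sketch of that design idea: when the robot detects it is stuck, it responds with a legible, help-seeking behavior instead of a bare error beep. The robot interface below is invented for illustration; it is not the real Roomba API.

```python
# Illustrative only: a vacuum that asks for help when it detects it is stuck.
class FriendlyVacuum:
    def __init__(self):
        self.bump_count = 0

    def wiggle(self):
        print("wiggling back and forth, like a pet stuck under the couch")

    def say(self, message):
        print(f"robot: {message}")

    def is_stuck(self, moved_distance):
        # Treat repeated bumps with almost no forward progress as "stuck".
        return self.bump_count >= 3 and moved_distance < 0.01

    def on_obstacle(self, moved_distance):
        self.bump_count += 1
        if self.is_stuck(moved_distance):
            # Legible, help-seeking behavior instead of a bare error beep.
            self.wiggle()
            self.say("I'm stuck under the table. Could you give me a hand?")
            self.bump_count = 0
        else:
            print("turning to try a new direction")

# Simulate a few collisions with no forward progress.
robot = FriendlyVacuum()
for _ in range(4):
    robot.on_obstacle(moved_distance=0.0)
```

The design choice here is not smarter perception but a more readable behavior: the human can recognize "stuck" at a glance and complete the situated action the machine cannot.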

Suchman's idea helps us see that there are two kinds of intelligence at work: humans are good at situated action, while machines today remain largely plan-based. Current research attacks this from both sides, improving algorithms so they can act situationally and improving interfaces through HCI so machines can respond to real-world situations with human help. Machines have their limits, but through these two approaches we can help them move from planned action toward situated action.


Yi Chin Lee

First-year Master's student in Computational Design, SoA, Carnegie Mellon University