The word “interface” invokes many images. It might be a computer, or you might think of a phone screen. If you are a programmer, you might push the idea a little further and argue that something as intangible as an API (Application Programming Interface) is also an interface. All of these examples are correct. One common trait among them, however, is that they are all computing-related, and they are all artificial.
Before jumping to a definition of interface (of which there are plenty of accessible ones online), it is important to pay attention to the artificial nature of interfaces. Herbert Simon, a cognitive scientist and economist, and one of the pioneers of artificial intelligence, argues that we live in an artificial world. By artificial, he means “produced by art rather than by nature; not genuine or natural; affected; not pertaining to the essence of the matter” (Simon, 2019). “Artificial” is not equivalent to “synthetic,” in the sense that an artificial thing can be an imitation of nature and inspired by it. Artifice and synthesis are similar, however, in that both refer to a process of making that adapts to human goals and purposes. A farm is an artifact because it adapts food production to human needs, whereas a forest can be understood as a natural phenomenon (Simon, 2019).
Once we understand that the world is crafted around human goals, we can recognize that interfaces are everywhere in our lives. According to Simon, an artifact is a meeting point between an inner environment and an outer environment: an interface between the substance and organization of the artifact itself and the surroundings in which it operates. That is why an iPad’s screen is an interface: it is where the inner operating system, built from chips and wires, meets the human beings who want to complete different tasks with it.
This definition is crucial to the discipline of Human-Computer Interaction (HCI) today, because the acknowledgement of our artificial surroundings gave rise to a distinct discipline: cybernetics. In 1948, Norbert Wiener coined the term “cybernetics” to describe self-regulating mechanisms (Wiener, 2019), and laid the foundations of automation in “The Human Use of Human Beings,” first published in 1950. It is undeniable that cybernetics underlies how the HCI world understands and defines interfaces today. The idea of “smart” and “context-aware” devices is a manifestation of Wiener’s vision of machines as information systems closer to human beings. Machines should have “sensory organs” so that they can not only produce information, but also constantly take in information and adjust their behavior based on the memory of their actions (Wiener, 1968).
When Norbert Wiener first published his vision of cybernetics, it was a groundbreaking theory that challenged how society imagined human-machine symbiosis. Yet today, this philosophy of designing with a closed feedback loop is so integrated into the HCI discipline that almost every digital interface incorporates a loop of interaction: some change in the environment is collected, and that change is reflected in the system in some manner (feedback).
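To make the loop concrete, here is a minimal sketch in Python of a self-regulating mechanism in the spirit of Wiener’s cybernetics: a toy thermostat whose sensor, actuator, and room physics are all simulated assumptions, not any real device. The point is the shape of the loop, in which the system’s own output changes the environment it will sense next.

```python
# A minimal sketch of a cybernetic feedback loop: sense the environment,
# compare the reading to a goal, act, and sense the consequences.
# Everything here (the heater's power, the heat-loss rate) is a toy assumption.

TARGET = 21.0  # desired room temperature in degrees Celsius

def simulate(room_temp=15.0, outside_temp=5.0, steps=30):
    for step in range(steps):
        # 1. Sense: collect a change in the outer environment.
        error = TARGET - room_temp
        # 2. Decide: feed the measurement back into the system's behavior.
        heater_on = error > 0
        # 3. Act: the output alters the very environment being sensed.
        if heater_on:
            room_temp += 0.8                             # heat added by the heater
        room_temp += (outside_temp - room_temp) * 0.05   # heat lost to the outside
        print(f"step {step:2d}: {room_temp:5.2f} C, heater {'on' if heater_on else 'off'}")

simulate()
```

Run it and the temperature climbs toward the target, then oscillates gently around it: regulation emerges from nothing more than closing the loop between sensing and acting.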
For example, the most standard elements of the interactive interfaces we use today, WIMP (Windows, Icons, Menus, and Pointer), fall squarely into the cybernetic model of human-machine interaction (Ko, 2020). From desktop computers to current AR/VR interfaces, they all follow the cybernetic vision of how machines operate. The machine presents its available functions through menu bars and icons; the user issues commands with a mouse cursor or controller pointer, so the machine collects information from the outer environment; the machine then processes that input internally, through computation hidden beneath the interface; and finally it outputs the result via the display, such as a browser or application window.
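Read as code, the WIMP cycle is just a loop: collect a pointer event, map it to one of the functions the interface advertises, run the hidden computation, and render the result. The sketch below traces one such turn in Python; the icon regions, commands, and handlers are hypothetical stand-ins, not any real GUI toolkit’s API.

```python
# A minimal sketch of the WIMP interaction cycle: input is collected from
# the outer environment (pointer events), processed by logic hidden beneath
# the interface, and reflected back to the user through the display.

from dataclasses import dataclass

@dataclass
class PointerEvent:
    x: int
    y: int

# Icons present the machine's available functions as clickable regions:
# (x, y, width, height) -> command name. These regions are made up.
ICONS = {
    (0, 0, 32, 32): "open_file",
    (40, 0, 32, 32): "save_file",
}

def hit_test(event):
    """Collect input: map a pointer position to an available function."""
    for (x, y, w, h), command in ICONS.items():
        if x <= event.x < x + w and y <= event.y < y + h:
            return command
    return None

def execute(command):
    """Process input: the computation hidden beneath the interface."""
    return f"result of {command}"

def render(output):
    """Output via display: here, the 'window' is just standard output."""
    print(output)

# Two turns of the loop: sense, process, display.
for event in [PointerEvent(10, 10), PointerEvent(50, 5)]:
    command = hit_test(event)
    if command:
        render(execute(command))
```

Swap the pointer for a VR controller ray and the printout for a rendered panel, and the same loop describes an AR/VR interface: only the sensors and displays change, not the cybernetic structure.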
It is easy to name interfaces that manifest themselves as screens, but understanding where they come from is crucial to broadening our understanding of what interfaces are. Interfaces can be more tangible than a screen. The artifacts surrounding us are embedded in ever more complicated systems, and the “sensory organs” of machines communicate with different layers of computing infrastructure. The screen in a Tesla is an interface between the driver and the operating system, but the driverless car itself is also an interface between an automated navigation system and the complexity of highway traffic. As crazy as it might sound, the interface today has evolved into every scale and form imaginable, integrated throughout our modern life.
This is a chapter of “Design Beyond Interface.” If you are interested in reading more, you can find the table of contents here.
I am an interaction designer. You can find me at https://mikibin.design/.
References
Ko, A. (2020). Interactive Interfaces. Retrieved June 15, 2020, from http://faculty.washington.edu/ajko/books/uist/interactive.html
Ko, A. (2020). What Interfaces Mediate. Retrieved June 15, 2020, from http://faculty.washington.edu/ajko/books/uist/mediation.html
Ko, A. (2020). Declarative Interfaces. Retrieved from http://faculty.washington.edu/ajko/books/uist/declarative.html
Simon, H. A., & Laird, J. (2019). The sciences of the artificial. Cambridge: The MIT Press.
Wiener, N. (1968). The human use of human beings. London, U.K.: Sphere Books.
Wiener, N., Hill, D., & Mitter, S. (2019). Cybernetics; or, Control and communication in the animal and the machine. Cambridge: MIT Press.