Association Mapping
A New Method for User Research in Post-Ubiquity Human-Computer Interaction
I wanted to re-write my dissertation's introduction after the work I've been putting into it over the past few weeks. This version is just under 1,000 words; before, the introduction was around 8,000. Most of those other 7,000 words will form the basis of the Lit Review, though some will also be used to expand the Intro beyond this.
It is now more difficult to escape the computer than it is to find one. Through myriad devices, users now perform tasks within an ecosystem of applications. No single company, developer, or user can comprehend the entirety of that ecosystem beyond their respective boundaries. Both software design and the manner in which user testing is performed need new approaches that allow these disparate devices, applications, users, and tasks to be considered in concert. Approaches of this kind, which take as their unit hybrid actors consisting of human and non-human objects together with their multi-faceted contexts, will allow designers and researchers to construct a wider, more society-facing picture of use.
We, meaning everyone who uses or designs something on a computer, all subscribe to a single ontology that forms the basis of our collective experiences. Computer software, computer hardware, and even computer languages are designed through the ontology of one user (or function) to one program (or calculation) on one device (or processor). When networked or distributed, this paradigm does not change; rather, it becomes many machines doing the same one-at-a-time task at incredible speed. This ontology has been extremely successful. However, as the computer has grown smaller and been embedded in smaller and smaller objects, it has become increasingly difficult to study use through this 1–1–1 approach. To understand this ontology and why it has become problematic, it is necessary to consider the development of the computer.
Throughout its brief history, the computer has been marked by three distinct lines of development: smaller, faster, and cheaper. When considered through the lens of early theories of human information processing, these three developments seem obvious. Computation is founded on augmenting human knowledge processes, and as more and more of those processes are given over to computation, they need to be available, reliable, and performed at the speed of human society. Throughout the pre-internet days of computing, determining the speed of human processing and how best to augment it computationally was a focal point of development.
At some point, these developments shifted from augmenting individual humans to augmenting human society. The computer no longer augments human consciousness or human processing speed. Instead, it has begun to mimic the substance from which society itself is formed. Devices and applications are now so integrated with everyday life that the possibilities of computation have become difficult to comprehend at scale. This is evidenced by the increasingly regular deployment of automated misinformation, purposeful data perversion, and the spread of alternative truths to users untrained to recognize that such techniques are possible. It is here that the 1–1–1 model becomes more problematic.
The complexity of the substance we call computation is buried under easy-to-use, easy-to-understand, single-use software. In most cases, these applications are not meant to communicate with one another in any way beyond what an operating system requires. The study of computer use has changed to meet the tenuous relationship between humans and their machines by focusing on each application in its own context, or as a context. As computers have become smaller, faster, and cheaper, the discipline of Human-Computer Interaction (HCI) has expanded to include more and more human-centered research spaces.
As a field born and developed in tandem with the personal computer, HCI has been uniquely situated to aid the computer’s insertion into first the workplace, then the home, and ultimately society itself. Two moments within the history and founding of HCI form the foundation of this research. First, in the early 1980s a researcher at Xerox’s PARC watched video footage of people trying to use a new printer. The researcher noticed that many users approached the machine with nothing but a goal; for each user, learning took place “on the fly.”
Second, in 1987 two researchers proposed that users paradoxically “asymptote at relative mediocrity.” They went on to argue that rather than being solved, the paradox was something to be designed around. From these two pieces of research, it is easier to understand why complexity was redistributed through single-use applications. Designing products this way solved an immediate problem; however, as smartphones have created unthinkable, unknowable use scenarios, HCI has begun to re-approach many of these early issues.
Through new approaches like Humanistic HCI, Actor-Network Theory, Object-Oriented Ontology, Re-modernization, and different deployments of the tenets of play, it is becoming possible to construct a new ontology to serve as the basis of design. To begin that construction, the user and the context of use must be deconstructed; in essence, the user and its contexts must become parts again. This research presents a method, Association Mapping, as an attempt to disassemble humans and the many objects required to become and stay human. In doing so, the hope is that instead of designing a single product meant to perform a single task, designers will gain insight into the hybridity and agency of computationally mediated objects in everyday use.
