Human-centered Software Development
Build stronger software by designing for human interaction
Author: Edward Park, Senior Software Engineer
Creating software is a human-centered endeavor. The software that we build is used directly by human users. Whether we’re building a mobile application, service, or tool, the cumulative moments of delight (or abhorrence) that our human users experience determine the efficacy of the software we build.
For this reason, software organizations invest in user experience design, in addition to the nuts-and-bolts of the technology under the hood. Product designers leverage their expertise in interaction psychology to create user flows that are useful, familiar, and discoverable. Designers understand that an application interface is only as effective as its users find it useful. Therefore, the best user experiences incorporate visual components that facilitate task completion while working with the psychological heuristics that humans intuitively employ. Humans seek sensory cues and constraints to navigate a deluge of stimuli and process large amounts of information, whether we are interacting with computing interfaces or the real world.
In essence, software organizations are constantly trying to optimize how humans interact with computers. Today, the practice of using design to hone user experience is predominantly applied to visual interfaces. These are commonly the front ends of applications, where humans interact with the visually rendered software (in a browser or mobile app). What about the act of software development itself?
OUR FELLOW DEVELOPERS ARE USERS, TOO
Most observers would think that software development strives to translate human instructions into machine-understandable directives in a reductive, one-way communication flow. We are “programming” computers to do our bidding, and we do so with cold, logical artifacts composed of code modules. While the software itself is mostly evaluated against human experiences, we compartmentalize the act of developing as one devoid of human factors.
Here at Headspace, the process of building and maintaining our software is actually centered around human users — the developers. This runs contrary to the common belief that the act of writing software is quite literally the opposite. From our experience, an organization's ability to effectively build and scale software depends on developing with our engineers' human factors in mind. Our engineers write code that is meant to be read by other engineers. We build software with the assumption that other engineers will need to maintain it. We compose features with the understanding that they will be evaluated through pull requests and comments. We write modules knowing that other engineers will inevitably reuse them. We design systems and transcribe these designs into documented specifications in order to maximize our fellow engineers' understanding. We foster healthy technical debates with one another in order to strengthen the quality of our software. If you've written software with at least one other person, your experience was likely similarly filled with human factors.
By crossing over the imaginary divide between end-user software interaction and software development interaction, we can see that both practices involve a human-centered interface. Through this lens, we’ve identified three high-potential ways to apply interaction psychology to your organization’s software development philosophy.
DESIGN FOR A TASK, NOT FUNCTIONALITY
In the world of interaction psychology, effective user interfaces are designed around a central task. This task is the highest-level abstraction of what the user wants to accomplish by interacting with an interface. This abstracted “task” is composed of all the individual interactions and functionality that the interface supports. And yet, both designers and engineers alike tend to focus disproportionately on adding individual functionality during interface design.
Take a home thermostat, for example. When you use one, you may be trained to toggle specific pieces of functionality, but your ultimate goal is to regulate your home's internal temperature. That goal represents the overall task.
As the designer of the common home thermostat, one might be tempted to maximize functionality by cramming as many options onto the interface as possible. This should lead to maximum options for the user, more features for each use case, and ultimately a happier user, right?
Instead, what you get is a cluttered interface whose functionality requires the user to understand the interface's idiosyncrasies. In this case, the thermostat requires the user to have specialized knowledge (or training) about how each button affects the outcome of the actions they have taken. Suddenly, a significant chunk of domain knowledge about operating the interface itself is required in order to accomplish the straightforward task of temperature control.
Alternatively, interfaces such as a Nest thermostat are centered around the overall task itself, not the knobs and levers. By abstracting most of the available functionality behind a small, task-centered interface, the user can focus on the abstracted needs around temperature control and automated regulation, significantly shortening the divide between the user and the task.
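The contrast carries over directly to module design. As a sketch (in Python, with entirely hypothetical class names), a functionality-centered API exposes every knob, while a task-centered API exposes the task itself:

```python
# Hypothetical sketch: two thermostat APIs, both illustrative only.

class KnobThermostat:
    """Functionality-centered: the caller must understand each subsystem."""
    def __init__(self):
        self.fan_mode = "auto"
        self.heat_setpoint = 68
        self.cool_setpoint = 74
        self.hold = False

    def set_fan_mode(self, mode): self.fan_mode = mode
    def set_heat_setpoint(self, deg): self.heat_setpoint = deg
    def set_cool_setpoint(self, deg): self.cool_setpoint = deg
    def set_hold(self, on): self.hold = on


class TaskThermostat:
    """Task-centered: one call expresses the user's actual goal."""
    def __init__(self):
        self._target = 70

    def set_temperature(self, degrees: int) -> None:
        # Fan mode, setpoints, and hold behavior are decided internally.
        self._target = degrees

    @property
    def temperature(self) -> int:
        return self._target


t = TaskThermostat()
t.set_temperature(72)
```

The caller of `TaskThermostat` never needs the domain knowledge that `KnobThermostat` demands; the interface absorbs it.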
Task-centered user interfaces lead to more intuitive user interactions, reducing cognitive overhead for novice users and improving overall effectiveness. Similarly, developers writing modules will benefit from taking this approach. By centering interface design on another developer's anticipated task needs, not on individual functionality, software teams will shift module design toward cleaner, less complex, and more reusable forms.
We see this principle echoed in the canons of software design philosophy. For example, John Ousterhout devotes a whole chapter of A Philosophy of Software Design to encouraging the design of “deeper” modules in order to reduce maintenance and reuse complexity. “The best modules are those that provide powerful functionality, yet have simple interfaces,” says Ousterhout.
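A hypothetical illustration of a "deep" module in this sense: the interface is two methods, while the bookkeeping that callers would otherwise manage themselves stays internal. The cache below is an illustrative sketch, not an example from the book:

```python
# Illustrative "deep module": a small cache with a two-method interface
# hiding its eviction policy. All names are hypothetical.

from collections import OrderedDict

class LRUCache:
    """Simple interface (get/put); the LRU bookkeeping stays internal."""
    def __init__(self, capacity: int = 128):
        self._capacity = capacity
        self._items: OrderedDict = OrderedDict()

    def get(self, key, default=None):
        if key not in self._items:
            return default
        self._items.move_to_end(key)   # mark as recently used
        return self._items[key]

    def put(self, key, value) -> None:
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self._capacity:
            self._items.popitem(last=False)  # evict least recently used

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)   # "a" is evicted, invisibly to the caller
```

Callers only ever see `get` and `put`; the eviction policy could change entirely without any caller noticing.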
DESIGN IS CONTINUOUS
Effective interface design hinges on a user’s task — in other words, the design process must collect information about the user’s needs in that task. Needfinding then leads to ideating on design alternatives, followed by prototyping, and finally evaluating the prototypes. This is the design lifecycle.
What happens after evaluation? The process does not end here, as further needs may be surfaced during prototype evaluation. Therefore, the design lifecycle is a continuous process.
In software development, we take this approach with user interfaces through iterative experimentation. But internally, software development processes often eschew this continuous loop in pursuit of code completion. Once we’ve devised an approach for an internal tool or code module, as long as this solution satisfies the original requirements, we (the authors) are largely satisfied. However, a software organization’s priorities and challenges can evolve, just as priorities can evolve for a single end user.
User needs are not singular or static. Similarly, a developer's needs evolve over time. Even after code is approved, passes tests, and is committed, we must apply the approach of continuous design to our software in order to remain human-centered in our development approach. We must continuously critique our code from the perspective of how it supports other developers' needs; to do so effectively, we must continuously assess what our fellow developers actually need.
HUMAN ERROR IS ACTUALLY BAD DESIGN
We’ve all been there — a teammate consumes a package we built or invokes an SDK in a manner that was not intended, leading to questionable results. After some back and forth, the details of the misuse become clear, and we can chalk this up to human error. We blame them, especially if the error is a mental lapse. We then continue to do things as we did; after all, to err is human.
But interaction psychology tells us that in order to design effective interfaces, we must not only expect human error — we must anticipate it. Humans perceive and operate through pattern matching and heuristics. This allows us to process a significant amount of information in reasonable time, relying on cues and existing knowledge to navigate the world instead of consciously evaluating every circumstance.
As a side effect, humans are naturally error prone. These errors can take the form of mental slips or knowledge-based errors. And despite increases in automation across every domain, we still rely heavily on human intervention for the most complex tasks. Once we start seeing the act of software development as a human-centered endeavor, our approach for interface design will shift.
In the context of software development collaboration, human error (e.g. misusing a module, mistakenly deploying breaking code) is essentially a human action rooted in a misunderstanding of the technology being used. Therefore, as designers of software interfaces, we should look for ways for interfaces to assist in translating human goals into the appropriate computational expressions.
This “assistance” can take the form of characteristics of the interface itself. For example, knowing that developer users might take erroneous action when implementing an external module, the designers of this module might implement constraints in the module logic that prevent the user from entering that path altogether.
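As a sketch of one such constraint (the `Region` enum and `create_client` function are hypothetical), a module can make the erroneous path unrepresentable by accepting only a closed set of values instead of free-form strings:

```python
# Hypothetical sketch: an Enum constraint makes the misuse path
# impossible to express, rather than documented as "don't do this".

from enum import Enum

class Region(Enum):
    US = "us-east"
    EU = "eu-west"

def create_client(region: Region) -> dict:
    """Accepts only Region members; free-form strings are rejected early."""
    if not isinstance(region, Region):
        raise TypeError(f"region must be a Region, got {region!r}")
    return {"endpoint": f"https://api.{region.value}.example.com"}

client = create_client(Region.EU)          # valid by construction
# create_client("eu-west") would fail fast with a TypeError
```

The typo-prone string comparison is gone; an invalid region fails loudly at the call site instead of surfacing later as questionable results.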
The module designer might also incorporate affordances, which are analogous to the push plates on a door. Affordances tell the user how an interface is meant to be used in an obvious way — this might take shape as self-documenting code or selectively exposed class variables that guide the developer user toward appropriate actions.
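One possible shape of such affordances in Python (all names hypothetical): the public surface exposes a single obvious entry point plus the valid choices, while a leading underscore signals internals callers should not reach for:

```python
# Hypothetical sketch of affordances in a module's surface: naming and
# underscore conventions signal the intended entry points, the way a
# push plate on a door signals "push".

import json

class ReportExporter:
    """Public surface affords one obvious action: export()."""

    SUPPORTED_FORMATS = ("csv", "json")   # exposed: guides valid choices

    def export(self, rows: list, fmt: str = "csv") -> str:
        if fmt not in self.SUPPORTED_FORMATS:
            raise ValueError(f"fmt must be one of {self.SUPPORTED_FORMATS}")
        return self._render(rows, fmt)

    def _render(self, rows, fmt):   # leading underscore: internal detail
        if fmt == "csv":
            return "\n".join(",".join(map(str, r)) for r in rows)
        return json.dumps(rows)

out = ReportExporter().export([[1, 2], [3, 4]])
```

A reader scanning the class sees exactly one intended action and the set of valid formats, without needing external documentation.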
Finally, effective software interface designers might employ techniques around error resiliency, such as reversibility (e.g. undo) or timely feedback messaging. This increases the fault tolerance of the software being used and brings undesirable user actions to the forefront for the developer.
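A minimal sketch of reversibility, assuming a hypothetical `ConfigStore`: each change snapshots the prior state, so a developer's slip can be undone, and an impossible undo fails with timely, explicit feedback:

```python
# Hypothetical sketch of error resiliency: a config store with undo,
# so a developer's slip is reversible instead of fatal.

class ConfigStore:
    def __init__(self):
        self._values = {}
        self._history = []   # snapshots for undo

    def set(self, key, value) -> None:
        self._history.append(dict(self._values))  # save state before change
        self._values[key] = value

    def undo(self) -> None:
        """Revert the most recent change; fail loudly if there is none."""
        if not self._history:
            raise RuntimeError("nothing to undo")
        self._values = self._history.pop()

    def get(self, key, default=None):
        return self._values.get(key, default)

store = ConfigStore()
store.set("timeout", 30)
store.set("timeout", 999)   # a mental slip
store.undo()                # reversible: back to 30
```

The slip costs one call to `undo` instead of a debugging session, and the `RuntimeError` surfaces the edge case immediately rather than silently doing nothing.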
Software development is inherently a human-centered endeavor. We write code for applications that serve human users. We also write code that will be used and maintained by human developers.
We employ interaction psychology to optimize the end-user experience, using stylistic and iconographic levers to facilitate task completion and reduce cognitive load. In the same way, software organizations must take a human-centered approach to writing and maintaining source code and systems. In doing so, we will optimize the way that other developers use internal code, continue to evolve with a shifting problem landscape, and endure inevitable human errors at scale.