Optimizing Code for Human Performance

Mychilo Cline
Published in Capital One Tech
Jun 23, 2016


When we look at code from the perspective of cognitive science and interface design, we see that there is often a disconnect between how code is written and how humans process information. From this perspective, the programmer is the end user — one with limited processing capabilities — and it changes the way we view and write code.

How do we create an intuitive experience for the software engineer, helping to increase throughput and reduce errors?

Programming languages are, to a large extent, not just for computers; they are also for the people who use them. After all, if we wrote code for computers we would just communicate in ones and zeroes. Code helps us organize our thoughts, express them clearly, and share them with our compiler and with others.

Cognitive science tells us that a large percentage of coding errors are the result of high cognitive load, poor organization, and ineffective communication. Furthermore, it is clear that the programmer is usually the source of these coding errors. In software development, we can optimize for computational speed, flexibility and reuse, or program stability; yet little emphasis has been placed on optimizing human performance. By placing a greater emphasis on this element, it may be possible to achieve significant gains in ease of use, speed of implementation, code reuse, and an overall reduction in bugs.

[The original post includes an optical-illusion image here.] The bizarre effect in the image is caused by top-down processing. Top-down processing is based on expectation, whereas “bottom-up processing” is based on incoming sensory input.

Engineering Psychology — Big O for the Human Brain

When a rocket or spaceship is about to collide with another object, a loud siren is activated, alerting the crew of the spaceship to their impending death. At least, that is what we learn from watching movies. But in truth, NASA researchers explain, a loud noise is likely to result in catastrophe and death. A loud noise produces a startle response, which may result in a three-second delay before the brain is able to process the information. In contrast, a soft, soothing sound alerts the pilot and crew that the ship is about to crash, thus giving them time to react. — Cognitive Scientist, Dr. Robert Cooper

Scientists have come to know a lot about how the brain works and how to optimize human performance. At NASA Ames, scientists in the field of Engineering Psychology explore the capabilities and limitations of the human brain. Why? Because it sucks when you crash a multi-billion dollar spacecraft.

In engineering psychology (a branch of cognitive science), we start with the assumption that the brain is a computer. When we look at a programmer from this perspective, it is not unlike looking at a computer with limited processing capability and RAM. The human brain is made up of different processing units, and we see multi-tasking, multi-threading, and memory switching, as well as user inputs, network calls, nested call-backs, and concurrency/syncing problems.

As programmers, we understand how complex and error-prone these systems can be, and that data input must be in a format the system can work with and understand. Thus, we may ask, “If the human brain is a computer, then how do we optimize this computer for reliability, program stability, or performance? How do we maximize throughput and minimize error?”

By way of example, when the human brain does not have enough resources to perform a task, it is like trying to surf the web on the original iPhone. That is, tasks with high cognitive load may overtax limited system resources, resulting in processing or memory errors. An example of a task with high cognitive load is adding up your grocery bill in your head. Adding two numbers is easy, but adding fifty grocery items is not a lot of fun.

In computer science, we regularly measure how long it takes a computer to perform a computational process, using a mathematical notation called “Big O.” It seems like a natural next step to ask how long it takes the human brain to perform a task and how much stress it puts on system resources — the “cognitive load.” I like to call this “Big O” for the human brain.

To illustrate this in layman’s terms, let’s look at the following two methods. How long does it take to scan the first method and find the first line of code? How long does it take to identify the difference between our two methods?
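The original post illustrated this with screenshots of code that are not reproduced here. As a stand-in, consider the hypothetical Swift sketch below: the first two methods are crammed onto single lines and differ by only one character, while the third states the same logic in a form the eye can scan.

```swift
// Two nearly identical methods with no formatting.
// How long does it take to find the single character that differs?
func total1(_ values: [Int]) -> Int { var t = 0; for x in values { if x % 2 == 0 { t += x * x } else { t += x } }; return t }
func total2(_ values: [Int]) -> Int { var t = 0; for x in values { if x % 2 == 0 { t += x * x } else { t -= x } }; return t }

// The same logic, formatted for scanning: the "-=" now stands out immediately.
func total3(_ values: [Int]) -> Int {
    var total = 0
    for value in values {
        if value % 2 == 0 {
            total += value * value   // square the even values
        } else {
            total -= value           // subtract the odd values
        }
    }
    return total
}
```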

As a rule of thumb, the more the user is expected to process, remember, or know, the greater the cognitive load, the lower the throughput, and the greater the probability of error.

Cognitive scientists have performed numerous studies examining the correlation between things like “text formatting and position” and “target acquisition time,” looking at everything from visual sampling and signal detection to cognitive processing strategies. By way of example, if we are searching for a letter in a field of other letters, the brain uses a linear search, checking one letter after another until the correct one is found.

For a computer programmer, this analysis looks surprisingly familiar: it is identical to a basic search through an array of unsorted elements. In this way, we see how cognitive scientists are able to analyze cognitive performance just as programmers analyze software performance.
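As a rough Swift illustration (the letter-search task here is hypothetical), the brain’s visual search and the programmer’s unsorted-array search share the same linear, one-element-at-a-time structure:

```swift
// Scan a field of letters one at a time until the target is found.
// This is the same O(n) linear search we would run on an unsorted array.
func position(of target: Character, in letters: [Character]) -> Int? {
    for (index, letter) in letters.enumerated() {
        if letter == target {
            return index        // target found; stop scanning
        }
    }
    return nil                  // target not present in the field
}

let field: [Character] = ["F", "E", "Q", "R", "E", "P", "F", "E", "O"]
if let index = position(of: "O", in: field) {
    print("Found the target after checking \(index + 1) letters")
}
```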

There is a lot we know about the way the brain processes information. This knowledge has been put to good use in the design of air traffic control systems, cockpit displays, and heads-up displays in fighter jets. We may thus ask, “If we approached coding from a cognitive perspective, how would it change the way we write code, and how different would our code be?”

Modern Programming Challenges Meet Ancient Brain Processes

A Code Challenge: What started as a simple process any child could explain was transformed into a confusing program with multiple entry points, complex state logic, no clearly defined sequence of events, and nested block and delegate calls. That is, as the program grew in size and complexity, keeping track of who is doing what and what happens next became very challenging. It overloaded the student’s “working memory.” Humans have limited cognitive capacity. - an observation from my time working at Mobile Makers (an iOS boot camp)

The lack of clear organization and logic within many software apps suggests that there is a disconnect between how code is written and how humans process information. This isn’t just an issue of poor communication. Modern programming languages replaced goto commands with loops, and object-oriented programming traded speed for organization, enabling software engineers to build programs of shocking scale and complexity. Yet modern trends in mobile development, including asynchronous calls and the user-driven experience, have undermined much of the organization and control flow of modern structured languages. In particular, we may note that:

· User input initiates various sequences, giving us a spider web of states and events, in which the overarching logic must be inferred.

· Simple linear processes (adding notifications to your app, for instance) are broken down into a complex series of disjointed delegate calls, as we wait for different events to occur.

· Code is often broken up into objects and blocks which are not tightly coupled, but maintain a network of logical dependencies, making them difficult to understand and use.

· Nested block calls, often seen in network calls or animations, are hard to read; they disregard proper scoping and violate the single responsibility principle (which applies to methods as well as classes). Every method should have a single responsibility, and that responsibility should be clearly stated in the method’s name (see the sketch after this list).
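To make the last point concrete, here is a hedged Swift sketch. The fetchUser and fetchOrders helpers are invented for illustration, not taken from any real SDK; the point is how a pyramid of nested completion blocks can be flattened into named methods, each with one clearly stated responsibility.

```swift
// Invented asynchronous helpers, standing in for real network calls.
func fetchUser(completion: @escaping (String) -> Void) { completion("user-42") }
func fetchOrders(for user: String, completion: @escaping ([Int]) -> Void) { completion([1, 2, 3]) }
func render(_ orders: [Int]) { print("Rendering \(orders.count) orders") }

// Nested version: the reader must hold the whole pyramid in working memory
// to reconstruct the sequence of events.
func loadScreenNested() {
    fetchUser { user in
        fetchOrders(for: user) { orders in
            render(orders)
        }
    }
}

// Flattened version: each step is a named method with a single responsibility,
// so the sequence reads top to bottom.
func loadScreen() {
    fetchUser(completion: handleUser)
}

func handleUser(_ user: String) {
    fetchOrders(for: user, completion: handleOrders)
}

func handleOrders(_ orders: [Int]) {
    render(orders)
}
```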

In fact, if we were to ask an English professor, an expert in written communication, to grade an iOS project, they would likely comment, “Please rewrite. Missing summary and conclusion. Difficult to follow. Missing logic and transitions. Non-standard English usage. Needs proper paragraphing. No clear organization.” This raises a number of interesting questions about design patterns, the basic logical structure of an app, and the design of programming languages.

I once wrote a multi-player dice game as a run loop with a wait command, in order to illustrate how simply and explicitly program logic can be stated.
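The original game is not reproduced in the post; the following is a minimal Swift sketch of the same idea. The entire game is one explicit loop, and the “wait” is simply a blocking read, so the sequence of events can be followed from top to bottom.

```swift
struct Player {
    let name: String
    var score = 0
}

var players = [Player(name: "Alice"), Player(name: "Bob")]
let winningScore = 20

gameLoop: while true {
    for index in players.indices {
        print("\(players[index].name), press return to roll the dice")
        _ = readLine()                      // the explicit "wait" step
        let roll = Int.random(in: 1...6)
        players[index].score += roll
        print("  rolled \(roll); total is now \(players[index].score)")
        if players[index].score >= winningScore {
            print("\(players[index].name) wins!")
            break gameLoop
        }
    }
}
```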

In contrast, when I was a teacher and assigned this code challenge to students, their code was often difficult to follow or completely unintelligible. The heart of the problem seemed to be asynchronicity: we need to wait for something, whether that is a network call or a button press, before continuing. This can play havoc with well-structured code.
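Compare that with a sketch of the same game once it is driven by user input (reusing the Player type above; the rollTapped handler name is invented to stand in for a button action). The loop disappears: the wait becomes an event handler, the turn order becomes stored state, and the overarching logic must be inferred.

```swift
final class DiceGame {
    private var players = [Player(name: "Alice"), Player(name: "Bob")]
    private var currentPlayer = 0
    private let winningScore = 20

    // Called from a button tap; each press is one turn. The "what happens
    // next" logic now lives in mutable state rather than in a visible loop.
    func rollTapped() {
        let roll = Int.random(in: 1...6)
        players[currentPlayer].score += roll
        if players[currentPlayer].score >= winningScore {
            print("\(players[currentPlayer].name) wins!")
        } else {
            currentPlayer = (currentPlayer + 1) % players.count
        }
    }
}
```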

The questions I wish to pose are:

· “Is what we see here a necessary adaptation to the demands of computer networking and a user-driven experience, or is it just bad code?”

· “Is this merely an issue of industry-wide poor documentation and communication standards?”

At the very least, it is important to include basic artifacts explaining the logical organization. Documentation, even just a quick explanation, photo, or sketch, should be dragged and dropped into the project itself, not put in a location where no one will ever see it. That is what Agile development is all about.

In the long run it may be necessary to structure and organize our code in new ways. This is part of a natural evolutionary process. By way of example, object-oriented programming traded computer performance for usability, and Ruby traded type-safety for a more intuitive interface.

Putting Things Into Practice

In a study conducted at NASA, eight air crews simulated flying into Los Angeles International Airport with their flight instrument readings projected directly onto the windscreen in front of them. In one trial, they were suddenly confronted with an aircraft sitting on the runway, a situation that calls for a go-around. Only two pilots noticed the plane and flew the go-around, while six would have collided with the plane. “They’re tunneling their attention at the head-up display at the expense of looking out the window,” Jordan says. “These are pretty significant issues.” - “Funding the Hunt for the Human Factor: Professor Kevin Jordan Wins $73 Million NASA Research Grant,” SJSU ScholarWorks, Together, No. 2, 2012

In my previous career as a math and philosophy teacher in both Russia and China, I learned a lot about throughput and error. How do you maximize throughput? How do you minimize error? How do you manage cognitive load through effective communication, organization, sequence, and pacing so even struggling students can master calculus?

When I look at a piece of software code, I ask myself, “How many of my former students would understand this code on first read? How many would have questions? How many would be confused about how to implement a class or method correctly?”

One strategy for reducing cognitive load is to break things up into smaller components. As a rule of thumb, if you have to work really hard to understand a complex piece of code, you should ask whether there is a way to reduce the complexity. A nested for loop, for instance, could be rewritten as a single for loop that calls a helper method containing the second loop.
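A minimal Swift sketch of that refactoring (the totals example is hypothetical):

```swift
// Nested version: both loops and their bookkeeping must be held in mind at once.
func groupTotalsNested(_ groups: [[Int]]) -> [Int] {
    var totals: [Int] = []
    for group in groups {
        var total = 0
        for value in group {
            total += value
        }
        totals.append(total)
    }
    return totals
}

// Refactored version: the inner loop moves into a named helper, so the outer
// loop reads as a single idea and each method has one job.
func groupTotals(_ groups: [[Int]]) -> [Int] {
    var totals: [Int] = []
    for group in groups {
        totals.append(total(of: group))
    }
    return totals
}

func total(of values: [Int]) -> Int {
    var total = 0
    for value in values {
        total += value
    }
    return total
}
```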

Another strategy for reducing cognitive load is to use clear, unambiguous method and variable names that follow accepted English grammar rules. When things are stated clearly and simply, they are easy to remember and to follow.

Writing good, self-documenting code is a lot like writing a college essay, and it starts with the basics. “Each method is a paragraph. Each method name is a topic sentence. Use proper word usage, not slang. Break out lengthy completion blocks or nested code into methods of their own; that is, use paragraphing and spacing to break ideas into pieces and to emphasize salient points. Make sure that the underlying logical structure is clear. Don’t forget to add comments that provide enough context for the reader to understand exactly what the code does, what it does not do, what it is for, and how to use it.” Code is about communication; it is about remembering who our audience is.
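As a hypothetical Swift illustration of the “method as paragraph” idea, each method name below works like a topic sentence, and the top-level method reads like an outline of the work:

```swift
struct Invoice {
    let lineItemPrices: [Double]
    let taxRate: Double
}

/// Returns the amount the customer owes, including tax.
/// Does not apply discounts and does not handle refunds.
func amountDue(for invoice: Invoice) -> Double {
    let subtotal = subtotalOfLineItems(in: invoice)
    let tax = taxOwed(on: subtotal, at: invoice.taxRate)
    return subtotal + tax
}

func subtotalOfLineItems(in invoice: Invoice) -> Double {
    return invoice.lineItemPrices.reduce(0, +)
}

func taxOwed(on subtotal: Double, at rate: Double) -> Double {
    return subtotal * rate
}

let invoice = Invoice(lineItemPrices: [19.99, 5.00], taxRate: 0.08)
print(amountDue(for: invoice))   // approximately 26.99
```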

Analogously, we may view design patterns as organizational structures that promote human usage and consumption. In software development, we use design patterns (OO, MVC, delegation, factories, the SOLID design principles, and so on) to manage complexity and to promote flexibility and reuse. But managing complexity is often just another way of saying reducing cognitive load. We want to promote flexibility and keep code DRY because we may want to make revisions someday. Nevertheless, we should not seek to eliminate cognitive load altogether, because many tasks require sophistication. Instead, we should seek to avoid unnecessary complexity that may result in error, and to use design patterns to build intuitive interfaces that the brain can easily work with and understand.

Building Rocket Ships

There is a lot we can learn about how to optimize human performance, but perhaps we should start by trying to write beautiful code, or at least good, self-documenting code. In its coding standards, NASA’s Jet Propulsion Lab explains that:

Simpler control flow translates into stronger capabilities for both human and tool-based analysis and often results in improved code clarity. Mission critical code should not just be arguably, but trivially correct. …code should be written to be readily understandable by any competent developer, without requiring significant effort to reconstruct the thought processes and assumptions of the original developer. — JPL Institutional Coding Standard for the C Programming Language

While we are not building rocket ships, Engineering Psychology is a worthy topic of study, giving us important insights into how to optimize human performance. When we evaluate code in terms of user experience and cognition, it begins to change how we write code. We begin to ask, “How many mistakes would you expect to find if a hundred users worked with this code, and what would the cost be in terms of man-hours?” It is through effective organization, reduction of cognitive load, and clear communication of data and organizational structures that we may maximize throughput and minimize error, promoting ease of use, speed of implementation, and code reuse.

Mychilo Cline is an iOS developer at Capital One and holds a graduate degree in Human-Computer Interaction / Technology and Society, focusing on human factors, research methodologies, human interaction patterns and end-user requirements, social engineering in online communities, and anthropological and historical patterns in the adoption of new technologies.

For more on APIs, open source, community events, and developer culture at Capital One, visit DevExchange, our one-stop developer portal. https://developer.capitalone.com/
