One of the first things I learned as an HCI researcher was the design principle of learnability: people should be able to quickly, easily, and independently learn how to use software. People should be able to launch a new program, figure out how it works, and never have to consult a help web site or contact customer support. Whatever learning had to occur, we wanted it to be quick, seamless, and situated in the context of use. We invented HCI methods to measure how learnable an interface was. We talked about the notion of “breakdowns” as an indicator that we had failed to make a design learnable. This was a vision of the world in which every person sat alone in front of a screen clicking buttons and reading labels until they had an accurate mental model of how it worked.
Of course, most software in the world still fails to achieve the “quickly” and “easily” parts. But it definitely fails the “independently” part of this principle. We just have to think about our own personal experiences to see that nearly everyone learns how to use all but the simplest software socially, not in isolation. Our friends and family introduce us to new software and teach us how to use it. Our parents call us and ask for help troubleshooting software behavior they don’t understand. Our children teach us about new apps. Millions wait hours at the Apple Genius Bar to have someone teach a critical concept about an app that they couldn’t figure out independently. And billions search the web every day for answers, trying to figure out how to use software to accomplish their goals. When they don’t find them, they contact customer support, costing companies billions of dollars in time. Learning software is therefore a mostly social thing.
Some might argue that if designs were better, learning wouldn’t need to be social. If only our designs were exceptionally intuitive, everyone would be able to infer how to use them, or at least tinker with them enough to discover how they work. Good designs should be learnable without anyone’s help, right? This perspective essentially argues that good software should be able to teach its own use, or even more strongly, that good software should not have to be taught at all.
I believe this position is not only too optimistic, but actually impossible to achieve for every potential software user. One reason is that, fundamentally, people vary in their willingness and desire to learn independently. Some people want to be taught how to use things. Whenever I encourage my Dad to tinker with something new, he refuses; he’s worried he might break something (and this fear is justified, because he has!). Others want to tinker independently and learn for themselves, and refuse help (that was me as a child). Still others want to discover how to use something with other people, to share the joy of discovery, or at least share in the frustration. Therefore, even if software could be designed to be learnable without assistance, many people will not learn it that way, and so we must design software under the assumption that it will be taught and learned in a social context.
But I have an even more radical position. I think that all software must be taught somehow. After all, there is nothing natural about software. Software is composed of entirely invented abstractions. We might design it to closely approximate ideas that people are familiar with from the world, but even still, these approximations aren’t perfect, which requires people to reason about the edge cases. And while some people are perfectly capable of and willing to guess how software might work, and correct their theories when they are wrong, this process of discovering how to operate software is essentially a form of self-teaching.
Aiming for independent learning of software is therefore highly biased toward people with strong computer self-efficacy, a fearless desire to tinker with things, and the ability to regulate their own learning process, even in the face of confusion and failure. We know that these are not common traits, and that they also tend to be gendered.
Unfortunately, even if we reject independent learning as a goal, most software is not designed to be taught in social contexts. What would it mean to design for teachability rather than learnability? It might mean supporting the creation of not just one tutorial, but a myriad of tutorials, each supporting learners with different prior knowledge and interests (much like one can find scattered across YouTube for popular applications). It might mean software companies having their app’s splash screens start with the question, “How do you want to learn this app?” rather than dropping users onto a home screen and giving them a few tooltips. It might mean designing software to have teacher modes, where someone could go through and annotate key parts of the interface for the person they are teaching (e.g., “Dad, remember to always click this box before you submit!”). I could even imagine companies offering live tutorials, like the one portrayed in the “Safe and Sound” episode of the Philip K. Dick Electric Dreams series, in which a representative from an AR company remotely walks the protagonist through key features of the device and encourages her when she struggles to get some of the gestures right.
Of course, none of this means that software should be hard to learn. It just means that in our efforts to make software easy to learn, we should assume that the context of learning will be a social one, not a solitary one. I’d love to see what kinds of help systems emerge from this perspective, aside from mostly anti-social crowdsourced systems like the kind I’ve invented with students in my lab. I’d love to hear your ideas!