Users develop craft — incredibly complicated folk-ontologies, rituals, procedures, and heuristics — for working with whatever tools they regularly use, & this craft is itself a tool. A user who identifies as ‘non-technical’ is in fact extremely technical: their desire to stay roughly within the bounds of the craft they have already developed leads them to invent incredible workarounds.
For instance, my father was once required to take a short course on Microsoft Publisher 2003. He still does everything in Microsoft Publisher 2003, including an enormous website I help him run. His familiarity with Microsoft Publisher 2003 is deeper than many professional Java programmers’ familiarity with Java, although there are large sets of features he’s totally unaware of. Almost none of that familiarity translates to Publisher 2007.
When it comes to this kind of craft, there’s a mutual legibility problem. Programmers tend to enjoy extending themselves into totally new environments, so the justification for making an ugly hack in Excel rather than learning R is sort of alien (though we have all encountered situations where the obvious right way to do something seems like it will require so much research that we would prefer a hack). And, on the other side, because the folk ontology of users is based on empirical experience with user interfaces that go out of their way to make internal behavior inaccessible (to ‘hide’ rather than ‘tuck away’), the concepts and terminology used by programmers are meaningfully different from those used by users (and users largely cannot communicate among themselves about this either).
Nevertheless, craft is highly effective. That’s its primary attribute. End users get things done by inventing a superstructure that keeps their existing knowledge valuable.
When I call for more flexible UIs with more programming-language-like expressive power, I am not suggesting that all end users should go learn C or something. I’m suggesting, instead, that we should design interfaces so that when users try to apply their existing familiarity with a system to problems on the fringes of the application’s use case, they find themselves working with something more generalizable.
With applications that are too opinionated, using them outside the happy path requires identifying glitches. In more flexible applications, the outer fringe of possibility exposes the underlying language. (Think of speedrunning versus modding.) In the latter case, there is a smooth path from user to developer — and users can turn around as soon as they’ve gotten the features they need.
If we go all-in on this flexibility (as I am trying to do with my experimental environment Kamui), what do we lose? Scalability (because applications quickly become customized by users to meet their specific needs and habits), top-down legibility (because those customizations are done by people without formal training, who lack the terminology and idioms of computer science), and proprietariness (since there is no core that will not eventually be touched).
I make the big computing vs. small computing distinction primarily based on whether, for some particular function, scale & legibility are more important than flexibility and extensibility. If flexibility is more important, we are in a small computing context. I firmly believe that big computing contexts are rare & can be minimized even further, with care and foresight.