How to Design a Better Internet: Calming Technology, Humanizing the Algorithm, and a New Xerox PARC
by Amber Case
This is my second year as a fellow at Harvard's Berkman Klein Center for Internet and Society and the Civic Media Group at the MIT Media Lab. I've been researching the link between media consumption and increased depression and isolation. My next big project will explore how media and technology both enhance and detract from being human, and how we can use technology as a tool rather than letting it use us.
With so much of my time spent in academia, I’m launching this Medium to share the topics that are most on my mind lately, and hopefully inspire a larger conversation around them:
The End of the Algorithm — And the Beginning of Human-Assisted AI
Since joining the Berkman Center in 2016, I've been able to watch, alongside my colleagues, the startling technological shifts that we're still dealing with as a society:
The a priori algorithm has reached the limits of its usefulness: In the wake of US election tampering, Facebook and Twitter are still struggling with the blowback from algorithmic systems that were architected to optimize ad revenue but are ripe for exploitation, and that also maximize the chance of user depression. The fundamental problem with algorithms is that you can only correct them up to a point; beyond that, they resemble an ouroboros eating its own tail.
I'm reminded of the pet store chain that noticed sales of its line of dog treats were falling and was preparing to discontinue them, until a marketing ethnographer who actually visited the stores pointed out the problem the data wasn't showing:
“Did you know,” she said, “your dog treats are being stocked on a high shelf, where kids can’t reach them?”
No algorithm can supplant human experience. And now that our confidence in algorithms has been shaken, we have an opportunity to revisit them with a critical eye. MIT researcher Joy Buolamwini is doing outstanding work exposing the racial bias embedded in everyday objects and machine learning systems.
Automated AI is also reaching the end of its usefulness. This, again, is the ouroboros effect of a snake eating its own tail. My late grandfather worked on early artificial intelligence, as did my late father after him, so it's a topic I've talked about for nearly my entire life. One point my dad made about AI still resonates with me today, now that we're in the latest cyclical burst of enthusiasm for AI and machine learning:
“If you have a human body,” he told me, “you feel blades of grass, you have a culture. If one hundred thousand people put their perspectives into an AI, you would get a reflection of those people and their biases. But you still wouldn’t get real time data, and you still wouldn’t be able to get the AI to think. The AI couldn’t understand what it means to sit on a lawn or feel those blades of grass.”
We went through a period of inflated expectations for AI in the '80s and '90s, and I see a similar thing happening today. The CEO of Google announces that AI will soon be "more profound than electricity or fire," but we need to separate the super-advanced AI we've encountered in sci-fi films from the bots and processes that are largely an extension of the Industrial Revolution.
Participatory AI is Joy Buolamwini's elegant term for what we should be striving for instead: not the aspirational artificial intelligence that exists only in movie depictions, but AI that's fully symbiotic with human intelligence. An AI that performs the heavy lifting of data processing, which the human corrects, adjusts, and redirects. A far more accurate model for how AI actually works (at least in the near to medium term), Participatory AI also undermines fanciful, apocalyptic visions of Skynet or mass unemployment caused by automation, and instead helps us imagine a better future.
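To make that division of labor concrete, here is a minimal, purely illustrative sketch of a human-in-the-loop triage step: the model processes everything, and only its low-confidence guesses are routed to a person for correction. Everything here (the toy classifier, the `CONFIDENCE_THRESHOLD`, the function names) is a hypothetical stand-in, not any real system described in this post.

```python
# Illustrative human-in-the-loop triage: the machine does the bulk
# processing; a human corrects only what the machine is unsure about.
# All names and thresholds here are hypothetical.

CONFIDENCE_THRESHOLD = 0.85  # below this, a person reviews the item

def model_predict(item):
    # Stand-in for a real classifier: returns (label, confidence).
    # A toy keyword heuristic keeps the example self-contained.
    if "treat" in item:
        return "dog_treat", 0.95
    return "other", 0.60

def triage(items):
    accepted, review_queue = [], []
    for item in items:
        label, confidence = model_predict(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            accepted.append((item, label))      # machine handles it alone
        else:
            review_queue.append((item, label))  # human corrects, adjusts
    return accepted, review_queue

accepted, review_queue = triage(["bacon treats", "mystery to-go box"])
```

The design point is the routing itself: rather than trusting every prediction, the system makes the machine's uncertainty visible and hands ambiguous cases to a person, whose corrections can then feed back into the model.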
Which takes me to another recurring theme of mine:
A Better Internet is a Calmer Internet
My 2016 book on calm design was a response to the stress and distance that devices often cause us. All too often, the technologist's instinct is to add more noise to our technology.
For instance, take the recent move to make refrigerators "smart," giving them internal cameras to detect what items should be added to the owner's shopping list. (Generated, of course, in a smartphone-linked app.) Consider how much more complexity this brings to the shopping experience. Humans already have trouble identifying what's in their fridge at any given moment. (When did I buy this milk, and is it still good? What's in this to-go box, and why is it even here?) Delegating that task to an IoT device is likely to add another time sink when simpler solutions exist. Compare that with a calm tech alternative: instead of inserting an object-detecting camera into the fridge, give it a translucent door.
But we should pull the camera back even further and ask why we should consider appifying the grocery experience at all. Will it really save so much time that it's worth sacrificing the mundane pleasures of shopping? (And save time for what? To browse Facebook more?) What's missing in this approach is any thought about how the solution fits into customers' daily lives.
In South Korea, by contrast, people are able to shop while waiting for the subway — by pointing their phones at large, encoded pictures of groceries displayed at the station, and selecting the items that they want. These groceries are automatically ordered and delivered to the customer’s home — and the boredom of subway travel is turned into a personally meaningful activity.
In the last few months, I've been editing and adding to Designing with Sound with co-author and experience designer Aaron Day. An unofficial follow-up to my last book, Designing with Sound is about improving everyday sounds, from voice user interfaces and alarm clocks to open offices and conference rooms. Most of the sounds we hear are not well designed, or designed at all, and as the number of alerts increases, we find ourselves in a noisier and noisier world. Sound may be one of the most overlooked aspects of design, but it is increasingly important. It is our hope that we can give user experience and sound designers a language, a process, and a set of principles for making great sonic experiences.
We Need a Xerox PARC for the Post-Algorithm Age
I'm now at work on several writing projects and talks that will touch on all the themes I've highlighted here. What stands out to me is how much there is still to learn. What's needed now is a new Xerox PARC-style think tank that can be as influential for this century as the original was for the first age of consumer computing: an open consortium of generalists and artists working alongside technologists. Dynamicland in Oakland is a promising candidate. We need broader and more diverse perspectives, deeper research, and less blind acceptance of the new.
Failing that, we're destined to make the same mistakes over and over again, with unintended consequences of far greater impact. That was true when I gave my first cyborg anthropology talk nearly ten years ago, and it's far more urgent now.
Much more on these themes soon. Until then, I hope we can connect on Twitter.