This is a phrase I’ve been thinking a lot about recently. It’s relatively self-explanatory, but I think it’s still instructive to consider what the opposite–inhumane technology–implies.
There are at least three categories of inhumane technology. There’s technology that’s designed to be inhumane, like this:
You can probably think of many more examples.
Then there’s technology that’s accidentally inhumane, out of ignorance or apathy, like this:
This is a password reset dialog from an old version of Lotus Notes.
Then there’s a third category: technology that wasn’t designed to be inhumane, but was applied in an inhumane way.
As you may know, machine readable punchcards were developed for the 1890 U.S. Census.
As you’ve probably noticed by the markings on this one, this is not from the 1890 U.S. Census. This system was designed by IBM for some, uh, data scientists in Germany for the 1933 German census.
It’s categories like this, and historical examples like these, that make me a little uneasy when I see quotes like this:
I’m honestly not sure what Marc Andreessen was thinking when he said this. Did he think it was good? Bad? Just a statement of fact?
In any case, now the only thing it makes me think of is this painting by Goya:
I guess I hope that as software eats the world, it doesn’t also eat us.
The good news is that “humane tech” is a thing now.
Which is to say: people are thinking and writing about the philosophical and ethical issues.
Amber Case has been speaking and writing about Calm Technology. She’s thinking about how technology can be designed to quietly and usefully fit into our lives instead of being overly demanding and brittle.
And Anil Dash has been writing about Humane Tech, “about the functional, pragmatic things we can do to make sure our technologies, and the community that creates those technologies, become far more humane.”
This is all great! It’s encouraging to see these issues being talked about.
And like most good ideas, they’re not altogether new, either.
As I was looking around to find writing on this topic, I came across an earlier essay, from 1969.
Paul Goodman wrote an article–“Can Technology Be Humane?”–for the New York Review of Books, perhaps the Medium of its time.
I found this fascinating, because while 1969 was a very different moment in history, culturally and technologically, he said a number of things that absolutely resonate today.
This is a pretty good encapsulation of the failure of our technological era.
In 1973, Fritz Schumacher wrote Small is Beautiful: Economics as if People Mattered.
He spoke about intermediate, or appropriate, technology–the idea that it’s not about what you can build, but what’s appropriate to build given the context.
Ursula Franklin is also an important thinker and writer in this arena. There’s much to admire about her, including the wonderful 1989 Massey Lectures, published as The Real World of Technology.
Ursula Franklin had many deep insights about the nature of technology, and its ability to diminish or assist our humanity. Her thinking around holistic and prescriptive technology provides a useful framework.
And while I was excited to discover these writings, they bring up a rather disturbing thought: with these incredibly insightful people writing so clearly about these topics in the ’60s, ’70s, and ’80s, why don’t our current products and services reflect this thinking?
Rather than rail against the abstract idea that today’s software is bad, let’s look at a few specific categories and cases.
One issue is when products don’t address major shortcomings that seem to be obvious to a large number of people.
I think this is summed up quite well by this tweet:
We also have the problem of incredibly invasive tracking of our web browsing, and we get essentially nothing in return.
The main result of this is seeing ads on every web page we visit for the thing we already bought.
Then we have the problem of things we thought we owned disappearing without warning.
This includes cases like Amazon pulling 1984 from all Kindles without warning.
Weirdly, we also get the opposite case, where things we don’t want appear without warning.
Think of automatic, multi-gigabyte system updates, or U2 albums.
And, of course, the services you do enjoy using can disappear completely.
At first glance, this may not seem like anything new. After all, it’s normal for companies to go out of business, and for products to be retired.
But this is different now. When you bought a piece of boxed software, you still had it even after the company disappeared. It wouldn’t be supported, sure, but you had a chance to manage your own transition.
Now, if a service is discontinued, they flip that switch and it’s gone, instantly and for everyone.
So what’s going on here? To me, these repeated patterns feel not like a series of mistakes or intentional harmful decisions, but like a system that is working as expected, repeatedly and predictably.
If that’s true, we should be able to consider some of the forces at work. I see two main themes.
One is that technology acts as an amplifier. You’ve probably heard this before, and I think it’s a useful way to think about it. Technology isn’t magically and inherently good or bad. It just amplifies what we feed it.
It takes our clever ideas, as well as our blind spots and biases, and brings them out to the world at larger scale.
This is the superpower and the curse of technology.
To me, though, the main problem is that it enables impact not just beyond human scale, but without human correction.
Virtually all technology encodes rules or protocols in some way. As humans, we can enforce rules in a humane way. We use our judgement to make exceptions, to adjust based on circumstances.
Technology doesn’t do this at all. It just blindly applies the rules.
The other problem with large scale systems is that as the audience increases, more and more issues fall into the bucket of “edge cases.”
You’d better hope you’re close enough to the majority persona or archetype, or else these systems are quite literally not made for you.
The main factor is this:
Businesses have incentives. They often involve money, but not always.
There’s a simple key idea that’s easy to forget: the interests of the corporations are not the same as your interests. They are working towards their own goals, which may or may not align well with your happiness or satisfaction.
With these critiques in mind, how do we make something different?
We can’t just try to be better, or aim to be more empathetic in our designs. We need new conditions and goals to create different outcomes. That’s how systems work.
What do we need to change in order to build technology as if all people mattered?
We have these two problems: amplification without correction, and misaligned incentives.
To me, the idea of human scale is critical. It’s easy to fall into the trap of thinking that every idea must scale. That thinking is distracting, closes us off from great opportunities, and invites unnecessary complexity.
Turn down the amplifier a little bit. Stay small. Allow for human correction and adjustment. Build for your community, not the whole world.
At this scale, everybody counts. Plus, we get a few other benefits.
Small is simpler. This is good from a pure engineering and design perspective. We strive for simplicity in the structures we build.
Even better, though, small things are more accessible.
You don’t need a full team of fancy Google engineers to build something small. You can be new to programming, or a hobbyist. You don’t have to be born in the right place at the right time to the right parents.
Simpler systems are easier to create, deploy, and maintain.
More people can be the creators and tinkerers, and not just the users.
If you make it small, it’s also cheap to run. You can build a service that supports thousands of people on a $5/month server, or a Raspberry Pi.
So cheap, most likely, that you don’t have to charge anybody for it. With the right architecture, you can run community-size services for less than $10/month, total.
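To make that concrete, here’s a rough sketch of how small a community-scale service can actually be: a complete message board in one file, using only Python’s standard library, runnable on a Raspberry Pi or a cheap VPS. The port number and the in-memory storage are arbitrary illustrative choices, not a recommendation for production.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory message store. A real community service might persist
# messages to a small JSON file on disk instead. (Illustrative only.)
MESSAGES = []

class BoardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Return every posted message as a JSON array.
        body = json.dumps(MESSAGES).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Treat the request body as a new message and store it.
        length = int(self.headers.get("Content-Length", 0))
        MESSAGES.append(self.rfile.read(length).decode())
        self.send_response(201)
        self.end_headers()

def run(port=8000):
    # Blocks forever, serving the board on the given port.
    HTTPServer(("", port), BoardHandler).serve_forever()
```

Calling `run()` serves the board to anyone on your network; no framework, no database, no cloud account. That’s the whole point: at this scale, one person can understand, host, and fix the entire system.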
And if this works, we can tackle the issue of incentives.
Not to get all Ben Franklin on you, but if you don’t spend money, you don’t have to make money.
If complexity drops, and cost drops, the community can now build its own systems. Incentives align.
So, it really comes down to this:
Do it yourself. Strip it down. Keep control. Make it for your community. Don’t do it for the money.
And this is where I start to understand what my friend Rebecca Gates means when she says that technologists and designers have a lot to learn from punk and indie rock.
Leave the expensive, large scale, commercial arena rock to Facebook, Google, and Twitter.
We can be The Ramones.
And Bad Brains.
We can press our own records, and run our own labels.
We can make our own spaces based on our own values.
And remember that computing used to be pretty punk rock.
This is the first public computerized bulletin board system, which was set up in a record store in Berkeley in 1973.
In 1974, the year the Ramones formed, Ted Nelson wrote the first book about the personal computer.
It contained perhaps my favorite opening line of any piece of literature: “Any nitwit can understand computers, and many do.”
It was basically a giant zine.
We can reclaim autonomy and agency with the incredible tools we have at hand–we just need to approach it differently.
So, less of this:
And more of this:
Thanks to the organizers of Eyeo Festival–Dave, Wes, Caitlin, and Jer–for giving me the opportunity to work through some of these ideas in front of a friendly audience.
And thanks to the wonderful attendees of Eyeo 2016 for playing along.