Human scale technology

Jesse Kriss
Jun 11, 2016

This is an approximate transcript of a talk I gave at Eyeo Festival on June 8, 2016. You can watch the video if you’d rather.

It’s an expansion on some ideas I first wrote about here on Medium.

This is a phrase I’ve been thinking a lot about recently. It’s relatively self-explanatory, but I think it’s still instructive to consider what the opposite–inhumane technology–implies.

There are at least three categories of inhumane technology. There’s technology that’s designed to be inhumane, like this:

Mushroom cloud from a hydrogen bomb

You can probably think of many more examples.

Then there’s technology that’s accidentally inhumane, out of ignorance or apathy, like this:

Inexplicably complex password reset form

This is a password reset dialog from an old version of Lotus Notes.

Then there’s a third category: technology that wasn’t designed to be inhumane, but was applied in an inhumane way.

Punchcard used by the Nazis for the 1933 census

As you may know, machine readable punchcards were developed for the 1890 U.S. Census.

As you’ve probably noticed by the markings on this one, this is not from the 1890 U.S. Census. This system was designed by IBM for some, uh, data scientists in Germany for the 1933 German census.

It’s categories like this, and historical examples like these, that make me a little uneasy when I see quotes like this:

“Software is eating the world”

I’m honestly not sure what Marc Andreessen was thinking when he said this. Did he think it was good? Bad? Just a statement of fact?

In any case, now the only thing it makes me think of is this painting by Goya:

Saturn Devouring His Son, Francisco Goya

I guess I hope that as software eats the world, it doesn’t also eat us.

The good news is that “humane tech” is a thing now.

Which is to say: people are thinking and writing about the philosophical and ethical issues.

Design for Real Life

Eric Meyer and Sara Wachter-Boettcher recently published a book called Design for Real Life. They realize that with today’s systems, one person’s edge case can be another person’s nightmare. In an earlier blog post, Eric Meyer described this as “inadvertent algorithmic cruelty.”

Calm Technology

Amber Case has been speaking and writing about Calm Technology. She’s thinking about how technology can be designed to quietly and usefully fit into our lives instead of being overly demanding and brittle.

Toward Humane Tech

And Anil Dash has been writing about Humane Tech, “about the functional, pragmatic things we can do to make sure our technologies, and the community that creates those technologies, become far more humane.”

Hooray!

This is all great! It’s encouraging to see these issues being talked about.

And like most good ideas, they’re not altogether new, either.

As I was looking around to find writing on this topic, I came across an earlier essay, from 1969.

Can Technology Be Humane?

Paul Goodman wrote an article–“Can Technology Be Humane?”–for The New York Review of Books, perhaps the Medium of its time.

I found this fascinating, because while 1969 was a very different moment in history, culturally and technologically, he said a number of things that absolutely resonate today.

“There are ingenious devices for unimportant functions, stressful mazes for essential functions, and drastic dislocation when anything goes wrong, which happens with increasing frequency.” — Paul Goodman

This is a pretty good encapsulation of the failure of our technological era.

In 1973, Fritz Schumacher wrote Small is Beautiful: Economics as if People Mattered.

Small is Beautiful

He spoke about intermediate, or appropriate, technology–the idea that it’s not about what you can build, but what’s appropriate to build given the context.

Ursula Franklin is also an important thinker and writer in this arena. There’s much to admire about her, including the wonderful 1989 Massey Lectures, published as The Real World of Technology.

The Real World of Technology

Ursula Franklin had many deep insights about the nature of technology, and its ability to diminish or assist our humanity. Her thinking around holistic and prescriptive technology provides a useful framework.

And while I was excited to discover these writings, they bring up a rather disturbing thought: if these incredibly insightful people were writing so clearly about these topics in the ’60s, ’70s, and ’80s, why don’t our current products and services reflect this thinking?

What happened?

Rather than rail against the abstract idea that today’s software is bad, let’s look at a few specific categories and cases.

Current problems
Products don’t address major shortcomings

One issue is when products don’t address major shortcomings that seem to be obvious to a large number of people.

I think this is summed up quite well by this tweet:

We also have the problem of incredibly invasive tracking of our web browsing, and we get essentially nothing in return.

Invasive tracking

The main result of this is seeing ads on every web page we visit for the thing we already bought.

Then we have the problem of things we thought we owned disappearing without warning.

Things you thought were yours disappear without warning

This includes cases like Amazon pulling 1984 from all Kindles without warning.

Weirdly, we also get the opposite case, where things we don’t want appear without warning.

Things that you don’t want appear without warning

Think of automatic, multi-gigabyte system updates, or U2 albums.

Google search: “how do i get u2”…off my phone

And, of course, the services you do enjoy using can disappear completely.

Services disappear completely

At first glance, this may not seem like anything new. After all, it’s normal for companies to go out of business, and for products to be retired.

But this is different now. When you bought a piece of boxed software, you still had it even after the company disappeared. It wouldn’t be supported, sure, but you had a chance to manage your own transition.

Now, if a service is discontinued, they flip that switch and it’s gone, instantly and for everyone.

What’s going on here?

So what’s going on here? To me, these repeated patterns feel not like a series of mistakes or intentionally harmful decisions, but like a system that is working as expected, repeatedly and predictably.

If that’s true, we should be able to consider some of the forces at work. I see two main themes.

Technology is an amplifier

One is that technology acts as an amplifier. You’ve probably heard this before, and I think it’s a useful way to think about it. Technology isn’t magically and inherently good or bad. It just amplifies what we feed it.

Impact beyond human scale

It takes our clever ideas, as well as our blind spots and biases, and brings them out to the world at larger scale.

This is the superpower and the curse of technology.

To me, though, the main problem is that it enables impact not just beyond human scale, but without human correction.

Impact without human correction

Virtually all technology encodes rules or protocols in some way. As humans, we can enforce rules in a humane way. We use our judgment to make exceptions, to adjust based on circumstances.

Technology doesn’t do this at all. It just blindly applies the rules.

“Edge cases”

The other problem with large scale systems is that as the audience increases, more and more issues fall into the bucket of “edge cases.”

You’d better hope you’re close enough to the majority persona or archetype, or else these systems are quite literally not made for you.
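To make that concrete, here’s a hypothetical example of my own (not drawn from any particular product): a “real name” check like the ones many signup forms encode. A human at a desk would waive this rule instantly; the code never will.

```python
import re

# Hypothetical signup-form rule: a "name" is exactly two runs of A-Z letters.
# This is the kind of encoded rule that gets applied blindly, at scale.
def is_valid_name(name):
    return re.fullmatch(r"[A-Za-z]+ [A-Za-z]+", name) is not None

print(is_valid_name("Ada Lovelace"))  # True
print(is_valid_name("Beyoncé"))       # False: accented letters are an "edge case"
print(is_valid_name("Cher"))          # False: so are mononyms
```

Nobody set out to exclude anyone here. The rule just runs, millions of times, with no human in the loop to notice who it’s excluding.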

The other main factor is this:

INCENTIVE$

Businesses have incentives. They often involve money, but not always.

There’s a simple key idea that’s easy to forget: the interests of the corporations are not the same as your interests. They are working towards their own goals, which may or may not align well with your happiness or satisfaction.

How do we make something different?

With these critiques in mind, how do we make something different?

We can’t just try to be better, or aim to be more empathetic in our designs. We need new conditions and goals to create different outcomes. That’s how systems work.

What do we need to change in order to build technology as if all people mattered?

(Technology as if all people mattered)

We have these two problems: amplification without correction, and misaligned incentives.

Human scale

To me, the idea of human scale is critical. It’s easy to fall into the trap of thinking that every idea must scale. That thinking is distracting, closes us off from great opportunities, and invites unnecessary complexity.

Turn down the amplifier a little bit. Stay small. Allow for human correction and adjustment. Build for your community, not the whole world.

At this scale, everybody counts. Plus, we get a few other benefits.

Small is simpler

Small is simpler. This is good from a pure engineering and design perspective. We strive for simplicity in the structures we build.

Even better, though, small things are more accessible.

Small is accessible

You don’t need a full team of fancy Google engineers to build something small. You can be new to programming, or a hobbyist. You don’t have to be born in the right place at the right time to the right parents.

Simpler systems are easier to create, deploy, and maintain.

More people can be the creators and tinkerers, and not just the users.

Small is cheap

If you make it small, it’s also cheap to run. You can build a service that supports thousands of people on a $5/month server, or a Raspberry Pi.

So cheap, most likely, that you don’t have to charge anybody for it. With the right architecture, you can run community-size services for less than $10/month, total.
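As a rough sketch of what I mean (an illustration, not a recipe), here’s a complete community bulletin board in nothing but the Python standard library: no accounts, no tracking, no dependencies, and small enough to serve a few thousand people from a Raspberry Pi.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Kept in memory for the sketch; a real one might append to a flat file.
MESSAGES = []

class BoardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET: return every message as JSON.
        body = json.dumps(MESSAGES).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # POST: treat the request body as one plain-text message.
        length = int(self.headers.get("Content-Length", 0))
        MESSAGES.append(self.rfile.read(length).decode("utf-8"))
        self.send_response(201)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), BoardHandler).serve_forever()
```

The point isn’t this particular toy. The point is that at community scale, the whole system fits in one person’s head, and anyone in the community can read it, run it, and change it.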

And if this works, we can tackle the issue of incentives.

INCENTIVE$ -> $

Not to get all Ben Franklin on you, but if you don’t spend money, you don’t have to make money.

If complexity drops, and cost drops, the community can now build its own systems. Incentives align.

So, it really comes down to this:

DIY

Do it yourself. Strip it down. Keep control. Make it for your community. Don’t do it for the money.

This is a chord. This is another. This is a third. Now form a band.

And this is where I start to understand what my friend Rebecca Gates means when she says that technologists and designers have a lot to learn from punk and indie rock.

Leave the expensive, large scale, commercial arena rock to Facebook, Google, and Twitter.

We can be The Ramones.

The Ramones

And Bad Brains.

Bad Brains

We can press our own records, and run our own labels.

The Teen Idles, Dischord Records no. 1

We can make our own spaces based on our own values.

Crowd at a DC hardcore show, 1979
“Punk feminism rules okay” — Bikini Kill

And remember that computing used to be pretty punk rock.

Community Memory, the first public computerized bulletin board system

This is the first public computerized bulletin board system, which was set up in a record store in Berkeley in 1973.

In 1974, the year the Ramones formed, Ted Nelson wrote the first book about the personal computer.

Computer Lib, Ted Nelson

It contained perhaps my favorite opening line of any piece of literature: “Any nitwit can understand computers, and many do.”

It was basically a giant zine.

Computer Lib, detail

We can reclaim autonomy and agency with the incredible tools we have at hand–we just need to approach it differently.

“Home taping is killing record industry profits! We left this side blank so you can help.” — Dead Kennedys

So, less of this:

Mark Zuckerberg with a roomful of people wearing VR headsets

And more of this:

Found on the circuit board of a guitar pedal: “May the music passing through this device somehow help to bring just a little more peace to this troubled world.”

Many, many thanks to Brady Kriss, Deb Chachra, Sara Hendren, Jesse von Doom, Maggie Vail, and Rebecca Gates, who have had enormous impact on me, and have expanded my thinking on many of these topics.

Thanks to the organizers of Eyeo Festival–Dave, Wes, Caitlin, and Jer–for giving me the opportunity to work through some of these ideas in front of a friendly audience.

And thanks to the wonderful attendees of Eyeo 2016 for playing along.

