Who made this mess, and how will we clean it up?

As in my previous posts, where my friend Neilly asked questions about the poem Confessions from My Feed, I asked my friend Kei, who is working with me on an independent study focused on the development of ethical software applications, whether he had any questions about the work. Whatever useful thoughts I have on the topic of ethical software are probably rooted in some conversation Kei and I have had over the course of this semester. (Note to reader: I fixed some grammatical and spelling errors present in Kei’s questions.)

Do you think social network sites like Facebook are in some way ‘responsible’ for this disparity between your actual and portrayed self? Do they simply bring out inherent tendencies, or do they cause them?

Probably both.

People have always had disparities between their actual and portrayed selves. Even if someone is trying to be as genuine as possible, they are still limited by the fact that language is an imperfect tool for communication. And people generally aren’t attempting to fully portray themselves — someone might feel bad about themselves but not want to show it, for example — so the gulf between our actual and portrayed selves widens even beyond what our limited tools already impose.

In offline interactions, we also have visual self-expression, body language, and prosody at our disposal, so we’re able to portray more aspects of the self. Online, our thoughts are disembodied text, and though that is partially ameliorated by the presence of, say, profile photos or photos associated with a post, it remains vastly less expressive than an in-person interaction.

So although people will always be stuck with a gulf between their actual and portrayed selves, independent of participation on social media platforms, the fact that social media offers such a limited toolset does exacerbate the issue.

Maybe ascribing blame to an application isn’t the correct approach here — are the engineers who wrote the product to blame, or should we blame the PMs who made the design decisions? To what extent?

The problems with these applications don’t lie in how they’re executed on a codebase level. The problems are on a design level. While engineers technically do have the capacity to simply not implement software they see as morally disagreeable, I think it’s unfair to place blame on them, when oftentimes their job is simply to write the code their manager tells them to.

(As a preemptive rebuttal to the obvious counterargument: whenever someone defends an action as simply “following orders,” it is easy to reach for the comparison to Nazi soldiers who were “following orders,” but it is ridiculous to compare code that sends you too many notifications to the Holocaust.)

So the exact problems of execution are probably more due to decisions made by PMs, but they make their choices based on what their company values. If a company values ultimately shallow statistics — minutes spent on the application, say, or ads clicked — then PMs and designers will design their application in a way that maximizes those shallow statistics.

What might be some concrete changes that we could see on Facebook to ameliorate some of the issues that you get at in your post?

I think making the speech bubbles on Messenger accommodate more text would be a great first step. You see that to a certain extent on their messenger.com interface — there’s more screen real estate, so it’d be pretty surprising if they didn’t use at least some of it — but I would like to see that same thought process extended to mobile applications.

Allowing users to turn off read receipts would ameliorate some of the anxieties that come from relentlessly wondering what the person we’re interacting with is doing and thinking. The narrator of the poem cares that people know she’s liked specific political pages; she obviously also cares when a message is “marked as read” but has received no reply.

Highlighting notes on people’s news feeds to a greater extent would go a long way towards encouraging thoughtful discourse as opposed to mindless article shares. If notes actually mattered, perhaps the narrator of the poem could have posted a note with her favorite quotes from the PDF story.

Changing default settings from “send me every notification anyone could possibly receive about anything” to something more limited would probably get the narrator of the poem to stop letting Facebook absorb so much of her thought processes. Most people never change the default settings, either because they don’t know how or because hundreds of notifications a day have come to feel normal. Changing the default to something sane, or perhaps prompting the user to pick for themselves when they sign in on a new device for the first time, would be kinder to and healthier for users.
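To make that last suggestion a little more concrete, here is a minimal sketch of what “sane defaults plus a first-sign-in prompt” might look like. The category names are hypothetical, not Facebook’s actual settings; the point is only that everything beyond direct communication starts off, and the user is asked rather than silently opted in.

```python
from dataclasses import dataclass

# Hypothetical notification categories; only direct communication is on by default.
@dataclass
class NotificationPrefs:
    direct_messages: bool = True     # someone wrote to you specifically
    mentions: bool = True            # someone named you in a post or comment
    friend_activity: bool = False    # likes, shares, "X commented on Y"
    suggested_content: bool = False  # trending posts, "pages you might like"
    product_surveys: bool = False    # announcements, feedback requests

def prompt_on_first_sign_in(prefs: NotificationPrefs) -> NotificationPrefs:
    """Ask the user what they want instead of silently opting them into everything."""
    for name in vars(prefs):
        answer = input(f"Get notifications for {name.replace('_', ' ')}? [y/N] ")
        setattr(prefs, name, answer.strip().lower() == "y")
    return prefs
```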

Are there any existing applications you would point to as exemplars of avoiding these performative issues extant on Facebook?

One easy example is TurboTax. You use it once a year and only hear from it when you actually need to use it again.

A more fun example might be the New York Times app. I downloaded it a long time ago, so I don’t remember the exact onboarding process, but I do recall indicating at some point that yes, I do want breaking news notifications. They use that power wisely, which I admire. I actually uninstalled CNN because it abused breaking news notifications by sending me endless inane trivia, whereas the Times has yet to send me things I didn’t find important or fascinating.

How might Facebook have avoided running into the issues it currently has before those issues arose? Does computing and application building need a code of ethics or moral guideline akin to the Hippocratic Oath?

I think that as we learn more about the impact many design practices have on people’s health over time (especially on eyes, backs, and attention spans), and as devices become ever more ubiquitous, more people will come to realize that yes, there must be guidelines. The Hippocratic Oath feels like an odd comparison, though, since doctors face life-or-death decisions far more regularly than application developers do. And while abuses of people’s time and attention are truly egregious offenses, something as somber as the Hippocratic Oath seems like a poor personality fit for the future-minded developer so many aspire to be.

There has to be a middle ground. We have to develop a system that reminds developers that their users have but one precious life — so should we really pressure them into taking a three-hour survey? Should we encourage them to spend days of it scrolling, or feeling insecure about themselves? I truly believe that people are good, and that once developers have the tools to understand what it means for something to be designed ethically, and how to implement that, they’ll choose good over evil.

I don’t know what kind of training Facebook expects of or provides to their designers regarding these issues, but I do think the solution is at the training level — in addition to the evaluation level, where they decide which statistics to use to judge whether a change was successful.

You could also try to institute regulations, but those would probably be impossible to enforce for the same reasons Prohibition was impossible to enforce. Any random Joe can make alcohol in their backyard; any 12-year-old with access to the internet can learn to program an Android game.
