What happens when someone is in an emergency, and your service is something they rely upon? Think about accessing online banking over a spotty internet connection in a foreign country after your bank card gets stolen. Think about pulling up health insurance information on a mobile phone while heading to an emergency room. Think about letting someone know you’re okay after an attack or disaster happens nearby. Think about setting up two-factor authentication after an online account gets compromised.
Whether you tweet, stream on Twitch, or create Snapchat stories, social media demands an exhausting level of transparency. It asks you to share nearly every part of your life. Almost anything is fair game — what you wear, what you eat, what you do, what you think — and all of it should be available to the public for free. After all, if you’re popular enough, it might pay off.
That work will be hard, and possibly not rewarding, but it is indisputably ours to do. And it’s possible: the Coral Project’s Talk system is a thoughtful, promising way to make online comments better, as is Civil. Facebook has promised to start fact-checking the News Feed, although it has a long way to go (and some editors to rehire). Twitter has also promised to take abuse more seriously. These changes came not from regulation, but from users complaining — and employees questioning. It’s essential that we, as engineers, designers, and managers, not accept the illusion of the impartial algorithm as an excuse to stop interrogating the effects of our work. Only by critically examining the world we’ve built will we be able to answer: does this work ennoble us?