Timed Blocks, User Preservation, and Public Norms

hey! hoodie here. you may have been following my series of articles that attempts, through a very specific perspective and framing, to dissect what makes social media tick the way it does, at least in my personal experience, and to use that to inform the ethical design of social media. the last post was here: https://medium.com/@novemberninerniner/user-profiles-user-metadata-and-updating-them-eaf4c7cfc182

if you’ve ever dabbled in researching mobile app development, you’ll know there’s a very particular vocabulary that surrounds it, one we can use to understand the way users interact with their phones

it’s also worrying language, and i want to get that out of the way first, so that people who might need to practice self care can take a step back and away, if they need to

so what is this language? why is it worrying? and what does it have to do with social media?

we’re going to address those, in order. first, here are some gibberish words that get used a lot: notification click-through rate, user behavioral expectations, monetization strategies, engagement, user acquisition rate, and so many more

we’ll start with the last one, because it’s the one i really want to compare and contrast against. User Acquisition Rate is how fast you ‘acquire’ a user, and how quickly you find out whether or not they will stay.

the window is generally somewhere from a week to a month, and the reason it matters so much is that it defines how the app should behave during that first week to month, so that the user gets hooked, stays hooked, and doesn’t leave.

that… sounds worrying now, doesn’t it? part of this cultural phenomenon has to do with capitalism, and market share, and Big Business; the habitual devaluation of personality and humanity in favor of big money, numbers over people, is probably the easiest way to explain it

but let’s compare this to how social media works, because i think it betrays something very, very important, that a lot of social media utterly fails to acknowledge

mobile apps tend to make money by converting users, acquiring them. they justify their existence, under this model, through the new, the fresh, the churning in and out of new and old. it doesn’t matter how long a user lasts, as long as they make their money

it’s dark. but social media is a very different beast

if we were to state that, on average, apps gain value through acquisition, then social media gains value through the conservation of existing users

why is that? well, it’s a bit obvious when you look at it through the lens of capitalistic social media. facebook, twitter, and all the other social media giants have one thing in common: you cannot talk to users on a different service

so the value in a social media service is the net value of all the users that exist upon that service. so why doesn’t social media, on average, put the user’s care and comfort first and foremost?

well, it’s a tough ‘problem’ to attempt to ‘solve’, and the fact of the matter is that it will never be completely solved, if you treat it as a problem

social media is also, inherently, a safe haven of sorts for users who are in a bad position in the real, meat-space world, because it can, to some extent, replace and supplement a human being’s social needs

alright, let’s take a step back now. i’ve gotten rather theoretical, and we’re all of a sudden talking in some really vague, unapproachable terms

how do we, as designers, care for the comfort and emotions of the people using the software?

and the answer is that we can’t. bit scary, huh? we cannot, through computer technology, at least at this point in time, determine the emotional state of a user and react accordingly. it’s just beyond our technology, for now

so instead, we need to offer the user tools that allow them to care for themself, and encourage and support them in doing so.

now, some of you may be intuitive, and know me well enough to know that i’m one of the main reasons the content warning was invented on the mastodon software, a feature that was a big influence on that social space and shaped how we think about caring for those around us

and that’s the other half of the proverbial equation, the ‘solution’ to our ‘problem’. users can care for each other, and that is what we want, more than anything

so, social media should, through design and technology, encourage users to care for each other, and to exercise practices that preserve the existing user base and help it blossom

content warnings are a powerful tool in that arsenal, but we can go a few steps further beyond.

let’s explore the purpose of the content warning, to gain further insight into this design space. a content warning is subject metadata, a proverbial answer to the proverbial question ‘what is this post about’

it’s user-supplied, of course, not automatically detected. sadly we don’t have the tech, yet, to automatically generate this metadata for every post
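to make that concrete, here’s a tiny sketch in typescript of what that metadata might look like. the shape and names here are mine, for illustration, not mastodon’s actual schema:

```typescript
// a post carrying user-supplied subject metadata.
// field names are hypothetical, not mastodon's real schema.
interface Post {
  author: string;
  body: string;
  // the content warning: the author's answer to
  // "what is this post about", absent if none was given
  contentWarning?: string;
}

// a client can fold the body away behind the warning
function renderPreview(post: Post): string {
  return post.contentWarning !== undefined
    ? `[CW: ${post.contentWarning}] (tap to show)`
    : post.body;
}

const example: Post = {
  author: "userB",
  body: "long musings about subject A…",
  contentWarning: "subject A",
};
console.log(renderPreview(example)); // [CW: subject A] (tap to show)
```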

but there’s something important here, an inkling of the actual design choice we want to make around the user’s care and comfort

the content being interacted with can be hostile, or at the very least dangerous, to the user

so how do we make room for users to protect themselves when the very purpose of social media, communication, turns hostile to them?

the answer, of course, is to allow the user to add their own layer of protection to posts, through the technology. so we want dynamically additive content warnings, or, rewording for clarity, we want to empower the user with total control over whether or not the posts they receive carry a content warning
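as a sketch of what i mean, assuming the same hypothetical Post shape as before: the receiving user keeps their own keyword-to-warning rules, and any matching post gets a warning folded over it before display. none of these names are from a real implementation:

```typescript
// reader-side, "dynamically additive" content warnings:
// the receiver's own rules add a warning to matching posts.
interface Post {
  author: string;
  body: string;
  contentWarning?: string;
}

// the reader's rules, entirely under their own control
const myWarningRules: { keyword: string; warning: string }[] = [
  { keyword: "subject A", warning: "subject A" },
];

function applyMyWarnings(post: Post): Post {
  // respect a warning the author already supplied
  if (post.contentWarning !== undefined) return post;
  const rule = myWarningRules.find((r) =>
    post.body.toLowerCase().includes(r.keyword.toLowerCase())
  );
  return rule ? { ...post, contentWarning: rule.warning } : post;
}
```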

ah, you, most intuitive, observant reader, have already realized where i’m going with this. a few articles ago, in https://medium.com/@novemberninerniner/social-media-two-necessary-features-and-how-to-make-them-inobtrusive-c8325a62a153, we went into the concept of ‘rules’ as user controls for managing the content they receive.

so we want a Rule. now we know what our goal is, how to achieve it, and where to put it

what kind of rule did we not go over yet, in our last attempt? a timed one is the answer, and it’s where things get a bit less vague, i think, for you, the reader.

let’s say you’re on social media, and some heavy, distressing news becomes common knowledge within the time period you are Logged On and Paying Attention

now, no matter where you look, someone is likely to be talking about the topic in question. this is, on the surface level, one hundred percent acceptable. users are communicating! engagement is being had! what a wonderful userbase we must have

except, now let’s reframe it, and say that the topic in question is very harmful to a user, one we may be able to conserve and retain if we help them shield themselves from the hurt.

well, this topic in question is temporary, at least in practice, even if it is part of a bigger phenomenon

so we want the user to be able to manually, temporarily, mute other users, so that they might avoid a whole lot of hurt.

let’s go back to the sentence structure i raved about in that post i linked, where we created the concept of rules.

posts containing __Keyword__ by __User__ are removed from the feed

this is the rule for muting via keyword

so a timed mute will look something like this

posts by __User__ are removed from the feed for the next __Amount of Time__.
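to show how little machinery this needs, here’s a small typescript sketch of both rules as data, evaluated over an incoming feed. the shapes are hypothetical; the timed mute simply carries an expiry timestamp and stops matching once it passes:

```typescript
// the two mute rules as data, applied to a feed.
interface Post {
  author: string;
  body: string;
}

type Rule =
  | { kind: "keywordMute"; user: string; keyword: string }
  | { kind: "timedMute"; user: string; expiresAt: number }; // unix ms

function isRemoved(post: Post, rule: Rule, now: number): boolean {
  if (rule.kind === "keywordMute") {
    // posts containing __Keyword__ by __User__ are removed from the feed
    return post.author === rule.user && post.body.includes(rule.keyword);
  }
  // posts by __User__ are removed from the feed for the next __Amount of Time__
  return post.author === rule.user && now < rule.expiresAt;
}

function filterFeed(feed: Post[], rules: Rule[], now = Date.now()): Post[] {
  return feed.filter((post) => !rules.some((r) => isRemoved(post, r, now)));
}

// e.g. mute userB for the next 24 hours:
const rules: Rule[] = [
  { kind: "timedMute", user: "userB", expiresAt: Date.now() + 24 * 60 * 60 * 1000 },
];
```

note that the expiry lives in the rule itself, so when the time passes the rule simply stops matching; nothing needs to run in the background to un-mute anyone.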

“hoodie” someone is probably saying to themself “that took an awful lot of talking to propose a single rule you might have been able to cram into the article about that feature in the first place”

and well, you’re kind of right. i could have just tacked it on there, and acted like it wasn’t a big deal, like this feature isn’t, in some sense, extremely politically charged and hard to handle correctly

but i want to propose to you a situation, and ask you how you view the situation.

this is dangerous because, to put it simply, i am talking about something that actually happened, and it had a set of effects that are also very worth noting

User A is having a bad day. they’re not really at their best, and honestly, who could blame them? things are hard right now.
User B makes a post. this post contains subject A. subject A isn’t very healthy for User A.
User A politely asks User B if they would be so kind as to tag subject A in the future, for the sake of User A’s well-being
User C doesn’t like this, and says so, publicly, sparking conversation about the action of asking for a content warning

yeah, that got rather weird, rather quick, didn’t it?

User C wasn’t even involved, so why did they feel so strongly that they had to speak up?

well… it’s… complicated. i don’t honestly know all the answers. i cannot know all the answers, because i am not simultaneously everyone involved in this set of interactions

but what i can tell you, and hopefully have convinced you of, is the importance of allowing your users to ask for content warnings, and the reasoning behind why it can be so important.

it doesn’t matter if User B doesn’t even want to put a content warning on their posts! that is a valid response to User A’s question

but we have to let User A ask the question. we have to let User B say no, and we have to let User A have the tools to still care for themselves regardless of how the situation plays out.

in the end, we don’t want to act like User C did. even if no direct harm comes of the action, User C speaking out publicly about it can directly harm User A, and User B

heck, it can even hurt users D, and Z, and Y, and E.

This is not a condemnation.

i want to be very, very clear on that. everyone makes mistakes, myself included, and all we can do is ask that you learn from those mistakes.

so do me a favor, and let’s make it more normal for people to communicate healthily about this stuff. ask some questions yourself, or ask on behalf of others, if you happen to know someone who finds something very difficult to see. let’s change the norm, for the better.