Can We Design Safety Into Games From The Beginning?

Justin Davis
Spectrum Labs
Mar 4, 2020 · 4 min read

And still keep them fun? Why, yes. Yes, we can.

What would it be like to feel completely safe in a multiplayer game, not worrying about hate speech or trolls or grooming or fraud?

I wish I could tell you. While those games certainly exist, the threat of ugliness is almost always present, adding a layer of complexity and vigilance to what should just be fun: users enjoying themselves, interacting and competing with their communities.

It’s my company’s job to think proactively, to find the patterns and signs that someone’s about to do something sh*tty. We work in the part of the industry known as “Trust & Safety”: basically, keeping ugliness, predators, trolls, and frauds off online platforms so that users can trust that what they see is real, and that they are safe. (You see what I did there?) When it goes wrong (and it does go wrong), gaming platforms are forced into reactive mode.

So it’s worth asking: Is it possible to proactively design trust & safety into games from the beginning, and still keep gameplay fun and engaging?

What would that even look like?

Asked, and answered:

Assign a C-level owner for Trust & Safety

While our customers are proactive, we sometimes get called in when companies are forced into reactive mode after something bad happens on their platform, and it’s pretty common that no one at the executive level actually heads Trust & Safety efforts. That leads to confusion: Who makes the decisions? Who owns the policies and their enforcement? Should it be Product? User Experience? Marketing? PR?

Every company needs a leader who knows what’s happening on their platform at any given time; who owns the policies, thinks about preventing toxicity, and has a sense of what will keep users happy and coming back.

Policies aren’t one size fits all; they differ from platform to platform. You wouldn’t ban all sexual talk and flirtation from a dating app, but you’d want to be ready to spot and block sex trafficking, underage users, and harassment. For games, you wouldn’t ban all competitive talk or even trash talk, but your policies need to reflect the outcomes you want from the get-go, so you can be proactive instead of reactive.
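
To make this concrete: here’s a minimal sketch, in Python, of what per-platform policy definitions might look like once they’re written down. The category names are ones I’m inventing for illustration, not a real taxonomy.

```python
# Hypothetical per-platform policy definitions (category names are
# illustrative only).
POLICIES = {
    "dating_app": {
        "allowed": {"flirtation", "adult_sexual_talk"},
        "blocked": {"sex_trafficking", "underage_user", "harassment"},
    },
    "competitive_game": {
        "allowed": {"competitive_talk", "trash_talk"},
        "blocked": {"hate_speech", "grooming", "fraud"},
    },
}

def violates_policy(platform: str, category: str) -> bool:
    """True if a detected content category is blocked on this platform."""
    return category in POLICIES[platform]["blocked"]

# Trash talk is fine in a game; the same behavior could be actionable
# on a dating app if your policy says so.
print(violates_policy("competitive_game", "trash_talk"))  # False
print(violates_policy("dating_app", "harassment"))        # True
```

The code isn’t the point; the point is that someone has to own these definitions before anyone can enforce them.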

So what about creating a Chief Trust & Safety Officer?

I know, I know. The tech industry is rife with creative titles. But there’s no point in adding a title unless it has some actual power, to do things like…

Bring Trust & Safety Into the Storyboarding Process

It should surprise no one that, if you gamify the murder of women (to single out just one popular game), you’re normalizing something you’d hate to see in real life, while also inviting weirdly surreal misogyny from players.

I’d argue that it’s possible to build a storyline that is still cool and addictive, with lots of levels and challenges, while designing in the outcomes and behaviors that reflect good policies and head off the stupid stuff.

I’m not suggesting PC Police or funsucking here. By bringing in a Trust & Safety owner from the start, you’re including someone whose actual responsibility is to anticipate the kinds of player actions and speech that drive users off platforms, end up in really unflattering headlines, and hurt a company’s bottom line. That’s going to mean being creative. And, pro tip: by being creative, you might just make your game more unique and engaging, not less.

Develop Guidelines with Diversity in Mind, and Label Your Data Accordingly

Speaking as a data guy for a moment: when you’re developing your policies, you will need to label your data. That means you’re going to need not just guidelines (“we don’t tolerate hate speech”) but definitions of what hate speech actually looks like on your platform. Those definitions vary, not just from game to game or platform to platform, but also by idiom, industry, and nationality or language.
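
As a toy illustration (these messages and labels are made up, not drawn from any real guideline), the same text can deserve a different label under different platforms’ definitions:

```python
# Made-up labeled examples. The same text gets a different label
# depending on how each platform defines a violation.
LABELED_DATA = [
    {"text": "get wrecked, scrub", "platform": "competitive_game", "label": "ok"},
    {"text": "get wrecked, scrub", "platform": "dating_app", "label": "harassment"},
    {"text": "hey, you single?", "platform": "dating_app", "label": "ok"},
    {"text": "hey, you single?", "platform": "competitive_game", "label": "unwanted_advance"},
]
```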

Why? Because data can’t be labeled without definitions, and without labeled data, whatever content moderation you have in place won’t work and won’t help you.
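
Here’s a minimal sketch of why, using scikit-learn and made-up data: any supervised moderation model is fit on (text, label) pairs, so if your definitions never produce those labels, there’s literally nothing to train on.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training pairs; in practice these come from annotators
# applying YOUR platform's definitions.
texts = [
    "gg, well played",
    "you people don't belong here",
    "nice shot!",
    "go back where you came from",
]
labels = ["ok", "hate_speech", "ok", "hate_speech"]

# Delete the labels list and .fit() has nothing to learn from.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["well played, everyone"]))  # likely ["ok"]
```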

But this wonky, data-labeling thing actually presents a very cool opportunity to think about inclusivity and diversity, to make your game more welcoming and future-proof your platform against bad actors and worse headlines.

Think about the values you want to reinforce for your company, your game, and your users: What behaviors, phrases, and actual words can be kept off your platform to make clear that racism, homo- and transphobia, illegal sales, misogyny, or grooming for white nationalism (just to name a few) will never be allowed? It’s all in the labeling. Just as not having labeled data lets bad actors in, improving your labeled data will help keep them out. So let’s get leadership, better labeled data, and better ethics into our games.
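
One concrete way to improve your labeled data: have two annotators label the same messages against your written definitions, then measure how much they agree beyond chance. A quick sketch using scikit-learn’s Cohen’s kappa (the annotations here are hypothetical):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators on the same eight messages.
annotator_a = ["ok", "hate_speech", "ok", "ok", "hate_speech", "ok", "ok", "hate_speech"]
annotator_b = ["ok", "hate_speech", "ok", "hate_speech", "hate_speech", "ok", "ok", "ok"]

# Kappa near 1.0 means your definitions are clear; near 0 means
# annotators are effectively guessing and the definitions need work.
print(f"agreement (kappa): {cohen_kappa_score(annotator_a, annotator_b):.2f}")
```

If agreement is low, tighten the definitions before training anything on those labels.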

To wrap it up, I’m confident that we can build safety by design into games, without sacrificing fun or user experience — and there are several gaming companies leading the charge in this area. I’m willing to bet the games that are designed with safety in mind will be even better than those we’re used to playing.
