A good start — but still so much to do: Glitch’s response to tech companies’ announcements on ending online abuse
Yesterday, technology companies announced a collective approach to ending online abuse and making their platforms safer for women, and were widely celebrated for it. Here Seyi Akiwowo, Founder and Executive Director at Glitch, explains that while this is a reasonable start, there’s still a long way to go in ending the growing problem of online abuse.
When I started Glitch four years ago, I envisioned a time when tech companies such as Facebook would take full responsibility for their platforms’ role in online abuse; when they would take appropriate, urgent action on this growing issue. Since then, and especially over the last 15 months of lockdowns and global crisis, the problem of online abuse has grown exponentially. Whilst we of course welcome a high-profile discussion around ending the abuse of women on online platforms, and whilst we welcome parts of the announcements made yesterday, we also believe there is far more that tech companies can and must do.
Online abuse: what’s the actual problem?
Online abuse not only violates an individual’s right to live free from violence and to participate online, but also undermines democratic freedoms. In a survey Glitch conducted last year, almost 1 in 2 women and non-binary people (46%) reported experiencing online abuse. Around 1 in 3 (29%) of those who had experienced online abuse before the start of the COVID-19 pandemic reported that it had worsened during the pandemic. The incidence of online abuse was even higher for Black and minoritised respondents*.
Online gender-based violence can result in a range of harms to women: psychological, physical, sexual and economic. We have worked with women who experience anxiety, stress, fear and panic attacks when using the internet and social media platforms; and with public figures and journalists who have been forced out of online spaces, meaning we lose their contribution to sports, politics, the economy and society. The impact is disproportionately greater on women who are racialised and marginalised, and the problem is growing — especially in the context of a global pandemic.
This isn’t, though, just about being able to have an online presence: it is about being able to flourish online. It’s about being able to express opinions and take part in debates without the fear of violence. It’s about being able to stand for political office without receiving death threats. It’s about enabling everybody to feel safe in our online spaces. Just as we collectively campaign for our streets, parks, offices and public spaces to be safe, we need to ensure that our online spaces are safe for everybody.
What the tech companies said — and didn’t say
In our Ripple Effect report of September 2020 (though we’ve called for these changes for far longer), Glitch called for an effective, transparent and timely response from tech companies to address online gender-based violence. Furthermore, this response needs to recognise the intersectional nature of the impact of abuse on women of colour and women from marginalised communities. So, after almost two decades of building tech platforms without prioritising the safety of women, how do these announcements stack up? In short, whilst some parts of the tech companies’ plans tick boxes, others fall short. There is still much to do.
There are some parts of yesterday’s announcements to be welcomed. For example, we believe that policies and terms of service must address the various forms of online abuse that can manifest on their platforms, and specifically consider how online abuse can be used to target women and other marginalised communities to suppress their voices online. This appears to be somewhat addressed by the announcements, though it’s unclear as to what extent, particularly in relation to women with intersecting protected characteristics. The World Wide Web Foundation will be monitoring the commitments that platforms have made on an annual basis, but we call on platforms to undertake their own regular reviews of their policies, and commit to making regular updates to address new trends, patterns and manifestations of online abuse generally, and gender-based violence against women in particular, through an intersectional lens. Moreover, such policies should be clearly visible and accessible to all users, beyond the simplification of the language used within them.
We have also repeatedly called for users to be provided with greater controls and filters over their online experiences, to ensure that everyone has greater agency and decision-making power over the type of content they see and which users can communicate with them. Companies have now committed to offering more granular settings (e.g. who can see, share, comment on or reply to posts). We have also called for users across platforms to have clearer information on what type of content is allowed on platforms; on how they can use platforms safely; and on how to report online abuse — whether as a person who has experienced online abuse directly, or as an active online bystander who has witnessed it. So we welcome these commitments to companies using simpler, more accessible language throughout the user experience, and to providing easy navigation and access to safety tools.
However, there are several vital areas that are not included at all in the announced raft of measures. These are essential to creating meaningful change, and we call on tech companies to include the following in their plans:
- Content moderation practices must be transparent. This includes providing disaggregated data on the number of abuse-related reports platforms receive, the type of action taken, and the time taken to review reported content, along with increased transparency around appeals processes. It also means setting targets for response times and measuring user satisfaction with reporting decisions through satisfaction surveys. Whilst yesterday’s announcement includes a commitment to offer users the ability to track and manage their reports of abuse, there’s still lots more to do here.
- Human content moderators need to be provided with comprehensive training about different tactics of online abuse, and on how online abuse specifically targets women and marginalised communities. This training should be in line with human rights standards. Content moderators need to be well resourced, distributed globally to understand local contexts, and — given the often graphic nature of the content they are required to review — also provided with free and easily accessible mental health support.
- The use of artificial intelligence in content moderation needs to be transparent — and not the sole means for removing abusive content. Because human content moderators are better able to understand context, they can help mitigate against any risks of automated wrongful take-downs that can suppress freedom of expression online.
- Technology companies must respect human rights on their platforms, and take concrete steps to avoid causing or contributing to abuses of human rights — including the right to live free from gender-based violence, the right to freedom of expression, the right to non-discrimination and the right to privacy. There should be an ongoing assessment of how policies and practices can negatively impact a user’s human rights, with swift action taken to address abuses.
- All platforms should ensure greater diversity and representation in all job roles and at the highest levels. Diversity of opinions and backgrounds at technology companies will greatly benefit product and policy decisions, particularly around online safety.
So much more still to do
So: these announcements are encouraging, but there’s still more to be done, and we strongly call on technology companies not to stop here. Headlines are one thing; actual change is another. And that is the real question here: will these changes actually result in meaningful improvements to the lives of women by decreasing levels of abuse? Until the answer is a comprehensive yes, we must keep working.
For Glitch, that means we continue to hold tech companies accountable. We will continue to engage with technology companies, and to campaign for legislation and policy changes. The latter includes campaigning for strong provisions in the Digital Services Act at European level and in the Online Safety Bill in the UK, to create online spaces where all women can not only live free from online violence and harassment, but also flourish.
Until now, there hasn’t been strong recognition from platforms that online violence against women is even an issue. This long-awaited acknowledgement is a huge milestone, and comes after years of evidence gathering and research showing that this really is an issue — and moreover that it is a life-threatening one for women. So whilst we can celebrate that tech companies have listened, we ask that they don’t take so long in the future; that they keep listening; and that they act on all fronts to ensure our collective online safety.
*All stats: The Ripple Effect: Covid-19 and the Epidemic of Online Abuse. Glitch and EVAW (End Violence Against Women coalition), September 2020.
Glitch was founded in 2017 by then-local politician Seyi Akiwowo, after she received a flood of abuse when a video of her speech at the European Parliament went viral. Through training, research, workshops and programmes, we’re building an online world that is safer for all. We focus our effort on three key areas: Awareness, Advocacy and Action.