All things in Moderation?

Richard Graham
Published in TalkLife Blog
Jul 11, 2019 · 5 min read

The amazing contribution of Machine Learning to Peer Support

One of the most significant outcomes of the social media revolution has been the greater recognition of peer support and its valuable role in health and well-being. The reduction of stigma and isolation, even when just listening to the thoughts and feelings of those similar to you, is very powerful. Add in opportunities for social support, and the sense of belonging that follows participation in conversation, and we see the benefits of social prescribing and patient activation, but in online spaces.

If only it were that simple. Anyone who has tried to set up an online space or group for those in need of support will know that more is needed than, say, the functionality of a Facebook Group for its members to thrive.

Firstly, there is a need to establish boundaries around and within the community. This may be through making the community private, and its content not searchable; that way, you can only be invited in. This may be the safest option, but many will never discover what support is available because they simply cannot find it. There is also a need within a community for private spaces, where more difficult issues can be discussed without a fear of exposure or worries about upsetting others. Private spaces in an online community really take the pressure off, and sometimes allow for more intensive support.

Secondly, even the most private of groups and communities still need to protect their members from aggressive or distressing content and communications, and from exploitation and persuasion. There is a certain irony that if you establish a community of support for vulnerable people, you will also attract trolls and others whose aim may be to upset or exploit vulnerable people. The UK Government’s Consultation on ‘Online Harms’ spells out very clearly that vulnerable groups are harmed more than other groups online, causing distress and widening inequalities. Some vulnerable groups then no longer go online, and the digital divide gets wider. A community’s Terms of Use (or ‘House Rules’) can be a good place to set out what behaviours are expected of members, especially at the point of joining the community. If a community is young, it may even be possible to work with members to establish what those rules are. The goal is to create a space safe enough for people to talk about difficult issues, should they wish to, without anxiety or concern about the response they may get. How we create that space, and then strive to keep it safe, becomes the foundation of any support.

If you are providing support for vulnerable people online (in some ways, all of us?), moderation systems that can act swiftly to protect members of a community are essential. At TalkLife, sophisticated Machine Learning, co-developed with MIT, Harvard, Microsoft Research and clinical experts, can identify risks within seconds and remove content that could be distressing until it has been reviewed. Natural language processing can capture other risks, as can the ever-alert ability of members to report inappropriate content. A safe and trusted space can thus be established and protected by a complex, layered series of filters which capture the many potential risks. The technology is so fast and extensive that, in one sense, everything is in moderation: scrutinised in real time and, if a risk is identified, flagged for human review. The admin team at TalkLife, though, are not on autopilot, letting the technology undertake all of the work. Rather, the Machine Learning frees them, and the members of the community, to spend more time listening to and supporting others. The technology is there to improve the human experience, and most users would hardly notice it is there. But is safety enough? Can we do more?
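To make that layered pattern concrete, here is a minimal sketch in Python of how real-time scoring, removal pending review, and a human review queue could fit together. Every name, term list and threshold here is hypothetical, invented for illustration; it is not TalkLife’s actual system or model.

```python
# A minimal, illustrative sketch of a layered moderation pipeline.
# All names, keyword lists and thresholds are hypothetical stand-ins
# for the kind of system described above, not TalkLife's real code.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    hidden: bool = False                    # withheld from the feed pending review
    flags: list = field(default_factory=list)

def ml_risk_score(text: str) -> float:
    """Placeholder for a trained classifier; returns a risk score in [0, 1]."""
    risky_terms = ("hate you", "worthless", "hurt myself")
    return 1.0 if any(term in text.lower() for term in risky_terms) else 0.0

def moderate(post: Post, review_queue: list, threshold: float = 0.8) -> Post:
    """Layer 1: real-time ML scoring. Further layers (NLP filters,
    member reports) would feed the same review queue."""
    if ml_risk_score(post.text) >= threshold:
        post.hidden = True                  # removed until a human has reviewed it
        post.flags.append("ml_risk")
        review_queue.append(post)           # human reviewers make the final call
    return post

queue = []
moderate(Post("anon", "I hate you and everyone here"), queue)
print(len(queue))  # 1 — flagged in real time, hidden pending human review
```

The key design point the paragraph makes is preserved here: the technology flags and holds content quickly, but a person, not the model, decides what happens next.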

An emphasis on online harms, or ‘safety by design’ moderation, can all too easily focus on the ‘antisocial’ behaviours of users without any consideration of their back story. I would argue that some moderation processes can then be experienced by the user as cold and brutal (‘computer says no’), and with so little explanation, the user can end up feeling angry and misunderstood. Over time, feelings of exclusion grow, and the journey towards trolling has started. Opportunities to help a person feel included and encouraged to participate need to start early in the user journey, especially if mistakes are made; getting tips on how to engage others, how to support them and allowing them to support you is a start. At TalkLife, behavioural science is informing how users who may not get it right first time are helped to find other ways of expressing themselves, so that a potential troll can become one of TalkLife’s valued members. Technology may come with artificial intelligence, but seldom with sympathy and compassion; two words that must be embraced by everyone who works in the tech sector, and also by those who use the technology. There is currently insufficient interest in why someone presents as they do, and the cheap intervention of exclusion, of blocking access to platforms, is not without cost.

The important point is to harness technology, not just to identify harms and exclude, but to recognise needs, and help someone get the support they need, in real time. Healthy communities care for all of their members, and especially those most vulnerable. The TalkLife Machine Learning doesn’t just flag those who are angry or hateful, but also those who are struggling and desperate for support. Users in need of support then find themselves in an enriched part of the service, where there is additional support, for the time they need it. Machine Learning enables this process to be frictionless; nothing more is demanded of those in crisis, but additional support is there, then, waiting.
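As a companion sketch, and again with invented labels and rules rather than anything drawn from TalkLife’s models, the same kind of classifier can route a struggling user towards additional support instead of towards exclusion:

```python
# Illustrative only: a classifier output used to surface support,
# not just to flag harm. Labels and phrases are invented examples.

def classify_need(text: str) -> str:
    """Hypothetical multi-class output: 'hostile', 'struggling', or 'ok'."""
    lowered = text.lower()
    if "can't cope" in lowered or "no point" in lowered:
        return "struggling"
    if "hate you" in lowered:
        return "hostile"
    return "ok"

def route(author: str, text: str) -> str:
    label = classify_need(text)
    if label == "struggling":
        # Nothing more is demanded of the user; extra support is simply made available.
        return f"enrich_support_for:{author}"
    if label == "hostile":
        return f"queue_for_review:{author}"
    return "no_action"

print(route("anon", "I can't cope any more"))  # enrich_support_for:anon
```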

Surely the point of technology is to make life better, or easier? If you are at your lowest, but upset and angry, being signposted elsewhere, or worse, banned, is neither right nor an effective solution. You want to be helped to find greater support and people who are there to really listen, without too much struggle. Used well, as it is at TalkLife, Machine Learning could make us more humane, and both faster and smarter about how we support others. And the brilliant thing is that we can do that now, in the blink of an eye.

For more information, visit:

Download TalkLife Free for iOS Here
Download TalkLife Free for Android Here


Dr Richard Graham is a Consultant Psychiatrist and Clinical Lead at Good Thinking, London’s Digital Mental Well-being Service (https://www.good-thinking.uk/)