Published in Gradient Institute

Caution: metaverse ahead

By combining technologies that are each themselves transformative, the metaverse has the potential to have a profound impact on the world. We would be wise to plan conservatively and ensure that this technological convergence helps, rather than hurts, humanity and the planet.

Facebook’s parent company Meta has recently supercharged decades-long efforts to combine some of the most powerful technologies that exist today. Meta’s goal is to create immersive digital worlds collectively known as the metaverse. Virtual reality (VR), digital commerce, social networks and artificial intelligence (AI) are being harnessed by Meta and others to push towards the vision of a fully-fledged alternative digital reality where billions of people “work, play, relax, transact and socialise”.

A digital alternative reality sounds like science fiction. And it is: Neal Stephenson coined the term “metaverse” in his novel Snow Crash, building on similar concepts from William Gibson and others. In the 30 years since these works, however, the enabling technologies for the metaverse have continued to advance rapidly. Meta, at least, believes they have reached the point where the vision can be realised, and has invested as much as $10 billion to do so. Other tech companies, including Microsoft, Google and Nvidia, have followed suit.

Lessons from social media

With the metaverse’s mass adoption perhaps years rather than decades away, we must not repeat the collective mistake we made with social media: failing to predict and prevent widespread harm and unintended consequences. Social media fulfilled its promise to connect the world, and billions of people now benefit from this technology every day. Along with the good, however, the combination of AI and highly interconnected communities that underpins social media platforms has also exploited the brain’s attention and reward mechanisms to sell products and services: creating addictive, harmful experiences, increasing community polarisation, and ultimately disrupting national elections and vaccination programs with viral misinformation.

Curtailing these sorts of damaging impacts from social media through regulation has been hampered by the industry’s domination by a few powerful technology companies such as Meta, Google and Twitter. These tech giants are also the biggest investors in the metaverse, creating the risk of a similar concentration of power and the ensuing regulatory challenges.

Risks from AI

Taking the AI techniques used in social media and embedding them in immersive VR worlds dramatically increases the access AI has to our perceptions and reward systems, and could lead to new and greater harms from these technologies. VR technology is marching towards its explicit goal of being indistinguishable to the mind and body from real life. In the metaverse, AI systems could control every aspect of a person’s world, including everything they see, hear and touch, magnifying their ability to monopolise attention and influence the thoughts, opinions and fears of users.

Metaverse worlds may also rely on AI to generate the very buildings, neighbourhoods, landscapes and people that make up the worlds themselves, creating yet another avenue for serious harm. Meta and others have indicated that it will be impossible or impractical for a human workforce to create enough 3D assets to populate massive virtual worlds at an immersive level of detail. They intend, therefore, to rely on generative AI systems to build content from a more limited set of assets drawn from the real world or from user-generated content. This approach, however, introduces the risk of those AI systems perpetuating the historical discrimination and disadvantage present in their training data, or reproducing abusive content generated by users. Examples of these risks being realised by AI systems in production abound, in domains including image search, recruitment and chatbots.

Online worlds

Even before combining them with AI and VR, large-scale online worlds such as multiplayer games have demonstrated the difficulty of controlling player experience. Various forms of verbal abuse, including bullying, griefing and sexual harassment, are rife, and so far gaming companies have largely failed to implement effective moderation and policing. Online games can also be highly addictive, with some countries going as far as opening treatment centres to address the problem.

Early anecdotes suggest that increasing the immersiveness of virtual worlds may increase the impact of abuse. In February 2022, Nina Jane Patel, a British woman beta testing Meta’s Horizon Worlds virtual reality platform, described being “verbally and sexually harassed” by three or four male avatars. Patel highlighted VR’s contribution to this “nightmare” experience, commenting: “In some capacity, my physiological and psychological response was as though it happened in reality.”

Unknown unknowns

It’s not just the known harms from social media and online games that we should be worried about carrying into the metaverse. Combinations of new technologies could easily create new types of risk that we haven’t yet thought of. One example is the unknowns arising from basing metaverse commerce and property on cryptocurrencies and non-fungible tokens, something many metaverse designers are advocating. The consequences of this heady mix are virtually impossible to predict, especially if crypto regulation is slow to arrive. However, the current profusion of scams, Ponzi schemes, stolen art and climate-destroying CO2 emissions that characterises the cryptocurrency world indicates that metaverse designers should approach adding yet another new and powerful technology to the metaverse melting pot with extreme caution.

Conclusion

Stephenson’s original vision of the metaverse in Snow Crash was dystopian: a dangerous and potentially addictive place controlled by one or two corporations. The metaverse that we create can be better, if we design it carefully. We must identify and control the risks that come from combining so many powerful technologies in novel ways, before those risks are realised as large-scale harm. We can’t afford to repeat the mistake we made with social media: deploying it across the globe, integrating it into so many aspects of our lives, and only then trying to clean up the mess.


Gradient Institute is an independent, nonprofit research institute that works to build ethics, accountability and transparency into AI systems: developing new algorithms, training organisations operating AI systems and providing technical guidance for AI policy development.

Lachlan McCalman

Chief Practitioner, Gradient Institute
