Future #59: Metaverse, multiverse, or more of the same?

Alexis Lloyd
Ethical Futures Lab
8 min read · Nov 15, 2021


Buckle in, friends, because today we try super hard to get excited about Web3. (We manage to find begrudging acceptance, but stop well short of excitement.) We also touch on Facebook’s surprising announcement on privacy and why the company may be destined for the ash heap of history. Stay tuned for a video about a possible metaverse future that’s both fun and deeply unsettling.

— Alexis & Matt

1: #realtalk about Web3

We’ve been having lots of intense conversations lately at EFL, wrestling with conflicting feelings about Web3, crypto, and the metaverse. On the one hand, there are a multitude of reasons for skepticism about the hype, from foundational motivations to environmental impact to the enormous potential for scams, abuse, and general dystopia. On the other hand, we generally like to approach new ideas as optimistic contrarians (to borrow Julian Bleecker’s term). While we maintain healthy skepticism and critique, we really like to get excited about what’s next, and we see new technologies’ potential for creating compelling and beautiful things. So we’ve been trying to get excited about the new spaces that are emerging, but having a harder time with it than usual.

In the midst of that struggle, it’s been challenging to find nuanced writing about Web3 that isn’t totally skewed toward hype or naysaying. This piece by Sam Lessin is one of the more balanced analyses we’ve seen. It’s a good primer on what this new era actually is and why we’re seeing these shifts happen now. Lessin outlines some of the possibilities, but also discusses the potential pitfalls and open questions. We liked the distinction between metaverse and multiverses, the realistic analysis of the creator economy in this context (TL;DR most individual creators will still be in the long tail of modest earnings), and the question of where traditional corporations sit in this space. We did cringe a little at the end, where Lessin talks about assigning new employees at Facebook to read Snow Crash and telling them that their job was to “decide when the time was right to execute the ideas these philosophers and storytellers had already thought about”. It clearly bears repeating: the metaverse was a cautionary tale, not an instruction manual!

The metaverse is coming. What happens next? | The Information

2: Buying & selling the concept of buying & selling

While Robin Sloan’s “Notes on Web3” is definitely on the “deep skepticism” end of the spectrum, it is thoughtful and nuanced, and elegantly articulates our crypto unease. He begins by acknowledging some of the emotional motivations underlying the hype — an exhaustion with the status quo of Web 2.0 and a deep desire for something genuinely new and full of possibility. But then he digs deep into the lack of substance in the Web3 conversation.

In effect, Web3 is only about itself; the more conceptual ideas around it seem to almost be tacked on as a post hoc justification: “Even at comparable stages in their development, the World Wide Web and Web 2.0 were not quite so … self-referential? They were about other things — science and coffee pots, links and camera lenses — while Web3 is, to a first approximation, about Web3.” As @jessfromonline describes in this thread, it is largely about buying and selling the concept of buying and selling — a strange ouroboros of capitalism eating itself when it has consumed everything else.

Sloan also critiques crypto technologies for their shortcomings when it comes to the promise of governance-via-ledger, for the way ledgers’ immutability removes the concepts of deletion and ephemerality from the web, and for the fundamental inefficiency of the technology (“like running your website on a TRS-80 with a coin slot”). The immutability point is easy to make concrete, as the sketch below shows.
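
Why does deletion disappear on a ledger? Because each entry is chained to everything before it by a hash, the only way to keep the chain valid is to keep every entry forever. Here is a minimal sketch of our own (not code from Sloan’s essay; real blockchains add consensus, signatures, and much more on top):

```python
import hashlib

def entry_hash(prev_hash: str, payload: str) -> str:
    """Chain each entry to its predecessor by hashing both together."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

# Build a tiny append-only ledger: a list of (payload, recorded_hash) pairs.
ledger = []
prev = "genesis"
for payload in ["alice pays bob 5", "bob pays carol 2", "carol pays dan 1"]:
    prev = entry_hash(prev, payload)
    ledger.append((payload, prev))

# "Editing" or deleting an entry invalidates every hash after it, so the
# tampering is immediately detectable; ephemerality is off the table.
ledger[1] = ("bob pays carol 200", ledger[1][1])  # tampered payload

prev = "genesis"
for payload, recorded in ledger:
    prev = entry_hash(prev, payload)
    status = "ok" if prev == recorded else "BROKEN"
    print(f"{status}: {payload}")
```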

This piece is full of complex analysis and worth reading in full. And, in the vein of finding some optimism in every critique, we appreciated Sloan’s closing point:

“Ethereum should inspire anyone interested in the future(s) of the internet, because it proves, powerfully, that new protocols are still possible. I do not think Web3 is a desirable or even tolerable path forward for this web right here, but I take its lesson well. ‘Code wins arguments’, and so do clubs, and cults; time remains to build all three.”

Notes on Web3 | Robin Sloan

3: A framework for safer systems

Artificial intelligence technologies are often viewed with skepticism bordering on fear, and understandably so: “black box” algorithms are making decisions every day about whether we get a home loan, whether a person looks sufficiently like themselves to unlock their phone, or whether the car we drive steers itself away from a lane line or a perceived obstacle. These systems are almost universally opaque, both to the people who built them and to the people at their mercy.

Marianne Bellotti lays out a two-part framework for making a system safer: first, place constraints on the system so that harmful outcomes are less likely to occur, and second, require a certain level of expertise from its operators. This framing reminds us of one of our favorite concepts in urbanism, Vision Zero. The basic idea is that it is not sufficient to train drivers to behave and punish those who violate the rules; a safer system also makes harm difficult to cause, through safeguards, physical obstacles, and clear, consistent interfaces. Bellotti makes much the same case for AI: it isn’t sufficient to insist that practitioners be trained in both statistics and ethics. The systems themselves need to be more transparent and built with safeguards.
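
To make the “constrain the system” half of the framework concrete, here is a small sketch of what hard constraints around a model might look like. The loan-scoring scenario and every threshold are hypothetical, invented for illustration rather than taken from Bellotti’s piece:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str  # "approve" or "human_review"
    reason: str

def constrained_decision(score: float, confidence: float,
                         in_distribution: bool) -> Decision:
    """Wrap a hypothetical loan-scoring model with hard safety constraints.

    The model can recommend, but this wrapper decides what it is allowed
    to do on its own: the software analogue of Vision Zero's bollards,
    speed bumps, and clear road markings.
    """
    # Constraint 1: never act on inputs unlike the training data.
    if not in_distribution:
        return Decision("human_review", "input outside training distribution")
    # Constraint 2: never auto-decline; adverse outcomes always get a human.
    if score < 0.5:
        return Decision("human_review", "model recommends decline")
    # Constraint 3: only act autonomously when the model is confident.
    if confidence < 0.9:
        return Decision("human_review", "confidence below threshold")
    return Decision("approve", f"score {score:.2f}, confidence {confidence:.2f}")

print(constrained_decision(score=0.82, confidence=0.95, in_distribution=True))
print(constrained_decision(score=0.30, confidence=0.99, in_distribution=True))
```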

In the case of AI, Bellotti adds a critical question: who is the “operator” of an AI system? Is it the engineer who built the learning model, or the person using the device or system that relies on that model? While it seems proper that developing a learning model would require some particular skills, what training or background information should someone have before using an AI-enhanced product?

Are we ready for the script kiddies of AI? | Towards Data Science

4: Braaaaains

Brain-computer interfaces are one of those technologies that pop up in conversation every now and then, but have mostly seemed to sit farther out on the horizon. Advances in the tech continue apace, however, and have gotten an influx of funding in the past several years, spurred in part by Elon Musk founding Neuralink in 2016. This article gives a great, in-depth look at the current state of BCI, including a robust conversation about the tensions and ethical considerations in the space.

While much of the work around BCI has approached it as assistive technology to support people who are paralyzed or unable to communicate, folks like Musk want to build consumer interfaces with this tech and make it the future of how we interact in digital spaces (hence the funding 🙄). While many assistive technologies eventually make the transition to consumer experiences, there is an attempt here to leapfrog the assistive use case entirely, which creates ethical tensions for many researchers in the space. (There are also significant open questions as to whether the risk/reward tradeoff is acceptable for consumers in the way that it is for those who are disabled or injured.)

In addition, the piece discusses some of the ethical questions stemming from the more experimental work being done in this space. Most of the BCI we’re familiar with is of the “control computer input with your brain” variety, which is kind of magical and relatively unproblematic. But researchers are also exploring some more questionable applications. In one case, scientists hijacked the visual cortices of mice not only to see what the mice were seeing, but also to make them see things that weren’t there. In another study, one “master” monkey’s brain was wired to control the arm of another “avatar” monkey. Work like this raises obvious concerns about breaching human privacy and autonomy at the most fundamental levels.

Brain implants could be the next computer mouse | MIT Technology Review

5: Facebook is ending facial recognition… sort of

It’s been a challenging few weeks for Facebook, in light of Frances Haugen’s testimony before the US Congress and the British Parliament, and her interview on 60 Minutes. The company seems to be pointing at whatever shiny object they can to distract — “We’re canceling Instagram for Kids”, “We’re called Meta now”, etc. — but one announcement last week may point to actual change. Maybe.

A group of engineers trained in ethical considerations of AI lobbied to have Facebook’s facial recognition products dismantled, and worked with colleagues across divisions on a 50-page paper listing the pros and cons. As a result, Facebook has announced that they will disable facial recognition features and delete over a billion images and “templates” — biometric patterns describing the faces — from their servers.
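
For readers wondering what a “template” actually is: in most facial recognition systems it is an embedding, a fixed-length vector that a neural network computes from a face image, and matching means comparing vectors rather than pixels. A toy illustration (our sketch, not Facebook’s actual pipeline, with random vectors standing in for real embeddings):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Compare two face templates by the angle between them."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                          # the stored "template"
same_face = enrolled + rng.normal(scale=0.1, size=128)   # new photo, same person
stranger = rng.normal(size=128)                          # an unrelated face

THRESHOLD = 0.8
for label, probe in [("same person", same_face), ("stranger", stranger)]:
    sim = cosine_similarity(enrolled, probe)
    print(f"{label}: similarity={sim:.2f} -> {'match' if sim > THRESHOLD else 'no match'}")
```

Deleting the templates means deleting those stored vectors, which is what makes the move meaningful: without them, the matching step has nothing to compare against.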

It’s important to note that Facebook hasn’t closed the door on the technology; their announcement mentions that they “will continue to explore” other approaches to facial recognition that preserve privacy and provide users with more information about how their data is used. To put this in context, they also described their plans for a virtual reality “metaverse” as having safety and privacy built in “from day one”, but by their own employees’ accounts, have not yet built sufficient moderation controls into their Oculus platforms to prevent hate speech or harassment.

Inside Facebook’s decision to eliminate facial recognition — for now | The Washington Post

6: Deplatforming a platform

While the facial recognition ban is a small sign of promise, “self-regulation” is clearly not sufficient, in that Facebook seems unwilling to tackle the more egregious problems that the platform has created. In this New York Times opinion column, Farhad Manjoo steps back and asks, “OK, but what should we actually do about Facebook?” He outlines a number of approaches: breaking up the company, creating rules for content, regulating the way it can use personal data, forcing it to release internal data, improving content literacy on the part of users, and doing nothing.

Each of these approaches has problems; chief among them, no single approach is sufficient on its own, and real change will likely require a combination of techniques. For example, improving content literacy and technological understanding seems like a prerequisite to smart regulation. As Manjoo himself outlined a few weeks ago, the latest attempts at legislating against misinformation would do more harm than good if passed, which reveals a lack of understanding on the legislators’ part.

Any real solution to the issues Facebook has raised requires several different answers. As discussed in the piece, Microsoft emerged as a solid and relatively trustworthy technology company after a series of antitrust actions in the late 1990s and early 2000s. A combination of wider tech literacy and a breakup of Facebook, WhatsApp, and Instagram could shift company culture toward smarter self-regulation. Privacy and transparency legislation could help, but there’s little evidence that regulators can craft legislation that actually leads to improved user experiences — just reflect on your experience of the cookie warnings required by GDPR. However, as Manjoo posits, there isn’t political will for many of these interventions, so “doing nothing” may be the most likely outcome.

OK, but what should we actually do about Facebook? | The New York Times

One hyperreality check

As the world potentially tilts its way towards the metaverse, we keep returning to this portentous vision of that future by Keiichi Matsuda. Matsuda’s film takes many of today’s trends — hyper-capitalism, gig work, gamification, and advertising — and extends them to their logical manifestation in an AR-saturated world. Let’s hope this is truly a cautionary tale, and not a glimpse into the near future.

To get Six Signals in your inbox every two weeks, sign up at Ethical Futures Lab
