This issue of Six Signals comes to you just two days before the autumnal equinox, so if you live in the Northern Hemisphere, your days have been and will continue to be getting shorter. It seems the news is following suit: we had a hard time finding a balance of light and dark this week, but between using an ancient ritual to confuse a self-driving car and the discovery of proto-sex in a Minecraft-like game for kids, we hope we got close. Read on for lots on Facebook and a compelling deep dive into delivery work.
— Alexis & Matt
1: Privacy isn’t just for users
Earlier this month Ray-Ban shipped a new line of glasses with embedded tech that lets wearers take pictures, shoot video, and listen to audio. The glasses were built in partnership with Facebook, hence the name: Ray-Ban Stories.
For the last decade there have been all kinds of wearable devices that can help us quantify our exercise, our health, and our moods. In looking at how those devices succeed or fail, Clive Thompson notices that cameras are often the worst thing that can be added, since they immediately turn a personal device into a possible surveillance vector. Earlier devices like Google Glass at least attempted to provide personal utilities like note-taking or search and retrieval of simple facts, but ultimately the camera became the focus (no pun intended). Thompson takes these signals and comes to an obvious conclusion:
Big tech firms have no interest in building technology that actually helps you think, which was the original vision of wearable computers. They just want you to wear something that feeds content to their ad-supported social networks.
→ Wearable computers should never have cameras | Clive Thompson
2: The Facebook Files
A pair of glasses with cameras in them was probably Facebook’s least concern this week, however. The Wall Street Journal unspooled a series of investigative stories detailing how Facebook’s policy of increasing engagement at all costs has led to real-world consequences. What’s worse, the Journal’s reporting finds that senior executives — including Mark Zuckerberg and Sheryl Sandberg — were aware of these consequences and consciously chose to ignore them. These consequences include:
- Allowing high-profile VIPs to post disinformation, bullying, or other banned content and face no consequences.
- Making one third of girls with body image issues feel worse about themselves after using Instagram.
- Accelerating the viral distribution of hateful, vitriolic content because it showed the highest levels of user engagement.
- Failing to act when criminal enterprises, including drug cartels and human traffickers, used Facebook’s tools to conduct their illicit businesses.
At the heart of our work at EFL is a recognition that the pursuit of corporate goals can easily cause unintended harm, and that consideration of harm is a key step in designing and refining any good product. So while we’re not quick to judge when a company unintentionally causes a negative impact, we do look very carefully at what it does once it becomes aware of that impact: how it seeks to remedy harm and prevent similar outcomes in the future. What surprises us most in these stories is that Facebook took overt steps to ignore the harms that were identified and reported. In each of the cases above, Facebook leadership was made aware of the effects its tools and algorithms were having, and explicitly decided that the company’s bottom line was more important than the health, mental well-being, and in some cases, even the lives of the people Facebook affected.
→ The Facebook Files | The Wall Street Journal
3: UN urges ban on harmful AI
Michelle Bachelet, the U.N. High Commissioner for Human Rights, called for a moratorium last week on AI applications that pose a risk to human rights. She explicitly called out facial recognition technology and urged that “governments should halt the scanning of people’s features in real time until they can show the technology is accurate, won’t discriminate and meets certain privacy and data protection standards.” Her statement accompanied a new U.N. report that looks at how governments and businesses have rushed headlong into applications of AI that put citizens’ basic human rights at risk, without putting appropriate regulations or safeguards in place.
While Bachelet calls out facial recognition technology, we can list dozens of AI applications that would fall under this umbrella, including these robots that Singapore is deploying in public spaces to report on “undesirable social activities” which range from public smoking to gatherings of more than 5 people (contrary to Covid-19 regulations).
For a couple of decades now, we’ve seen business interests jump on emerging technologies, while governments that should play a regulatory role lag dangerously behind their commercial counterparts. Just this week, Adam Mosseri of Instagram compared social media to cars — i.e., some people are going to get hurt by our product, but we think the value to society is worth it. In response, pretty much everyone replied: YES BUT CARS ARE INTENSIVELY REGULATED TO MINIMIZE HARM (and they may still be melting the planet). How might governments and civic agencies become better equipped to draft appropriate regulations in a more timely manner, since “move fast and break things” seems to be breaking far too many things?
→ UN urges moratorium on use of AI that imperils human rights | The Washington Post
4: Abundant, carbon-free power?
Fusion power has been a far-off goal of scientific research since the 1960s, and the field has seen its fair share of hoaxes and setbacks. It remains a focus of experimentation, however, because of the overwhelming benefits it would bring if realized: namely, abundant, waste-free electricity.
A brief lesson: fusion power is generated when hydrogen atoms are smashed together to form helium, as happens in our sun. The resultant hydrogen-helium plasma reaches temperatures of millions of degrees; no material can contain anything that hot, so a series of high-powered magnets must hold the plasma in place and control the ongoing reactions. Until now, experimental fusion reactors have used supercooled superconducting magnets, and the additional energy needed to keep those magnets cold enough to operate made the power plant net negative — meaning it consumed more power to control the reaction than the reaction itself created.
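The net-negative-versus-net-positive question above boils down to a single ratio, often called the gain factor Q. A minimal sketch, using entirely made-up numbers for illustration (not figures from the MIT project):

```python
# Sketch of the "net positive" test for a fusion plant. A reactor only
# makes sense as a power source when the gain factor Q — power produced
# divided by power consumed — exceeds 1.

def fusion_gain(power_out_mw: float, power_in_mw: float) -> float:
    """Return the gain factor Q = output power / input power."""
    return power_out_mw / power_in_mw

def is_net_positive(power_out_mw: float, power_in_mw: float) -> bool:
    """True when the plant creates more energy than it consumes."""
    return fusion_gain(power_out_mw, power_in_mw) > 1.0

# Hypothetical numbers: a plant producing 100 MW while spending 140 MW
# (largely on supercooled magnets) is net negative; cut that overhead
# to 40 MW and the same plant becomes net positive.
print(is_net_positive(100, 140))  # False: consumes more than it creates
print(is_net_positive(100, 40))   # True: creates more than it consumes
```

The higher-temperature magnets matter precisely because they shrink the denominator in that ratio.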
Researchers at MIT have developed a new magnet that operates at far higher temperatures, using new experimental materials. With these magnets in production, it now seems possible to create a fusion reactor that would be net positive, creating more energy than it consumes and possibly leading to carbon-free electrical power. While full-scale reactors are a way off, researchers are excited that the largest challenge might have been solved. According to Brandon Sorbom, the chief scientist on the project, “basically the papers conclude that if we build the magnet, all of the physics will work in SPARC. So, this demonstration answers the question: Can they build the magnet?”
5: Essential workers are fed up
Since March of 2020, millions of people have relied on delivery workers for most of their households’ needs, particularly for food delivery. Most deliveries are dispatched via gig platforms like Grubhub, Uber Eats, Caviar and others that obscure the critical and often dangerous work done by delivery workers. Now those workers are fighting back, creating grassroots support collectives, labor organizations, and techniques to ensure their physical security.
This deep dive in The Verge is full of amazing details, and the sheer ingenuity and creativity on display is remarkable. That said, it really shouldn’t have been necessary. Workers previously had some level of support from restaurants, when those businesses managed their own delivery crews: they had a place to take breaks, use the restroom, grab a quick meal between runs, and fix up their bikes. With apps pushing such “inefficiencies” out of the system and cities doing nothing to support these workers (when they aren’t actively hindering them through specious fines or confusing regulations), the workers had no choice but to create their own communities of support.
Read the whole thing before you order Chinese takeout, and look out for “science fiction but completely real” details like this one: “[Cesar] maintains his bike with the help of a traveling mechanic known only as Su, who broadcasts his GPS location as he roams upper Manhattan.” It’s like something straight out of a William Gibson novel, and not in a good way.
→ Revolt of the delivery workers | The Verge
6: Every technology will eventually be used for sex
“A barely legible, pixelated, geometric image that seems to be a facsimile of a woman’s body, clad in white thigh-high boots, twerking its bare ass.” This is just one of the descriptions of strip clubs, “condo games”, and other NSFW sex games that are apparently thriving on Roblox, according to Rolling Stone reporting. If you’re not familiar, Roblox is a gaming platform that is known for being hugely popular with children — over half of its 160 million monthly users are under 16.
The article compares the awkwardly pixelated sex games on the platform to the chat-room cybersex that many bored teenagers experienced in the 80s and 90s. We appreciated that the piece didn’t immediately jump to alarmist conclusions, but engaged in a debate about whether the Roblox activities fall into the “harmless sexual exploration” category or the “fertile ground for grooming and online predation” category. Like most things, it’s almost certainly a bit of both. Once again, the concept of Roblox strip clubs proves that anything that can be used for sexual purposes will be, even a platform whose UX means that “sex” consists of “expression-less Lego creatures robotically rooting into each other’s BandAid-colored flesh.”
One confused Tesla
As Arthur C. Clarke famously put it: “Any sufficiently advanced technology is indistinguishable from magic.” With that in mind, maybe magic is what’s needed to help counterbalance the proliferation of technology?