This week’s issue runs the gamut, from deep thinking on climate change to NFT strippers and everything in between. On the one hand, we’ve got people doing thoughtful work to make our social and technological spaces better. On the other hand — well, the climate change and the NFT strippers. In the words of Ralph Wiggum, “I’m happy AND angry!”
— Alexis & Matt
1: A technology needs the right ecosystem to thrive
When we were working at the New York Times R&D Lab, someone once asked if there was anything we had notably gotten wrong — something we thought would take off that failed to catch on in the way we anticipated. Our answer: QR codes. As early as 2007, QR codes seemed to hold real promise for linking physical space to digital data. And while they did catch on in some places, most notably in China and South Korea, they were a total flop in the United States. See below, one of our favorite flowcharts:
However, as I’m sure you’ve all noticed, something has shifted in the past year or two. QR codes are finally gaining traction, over a decade after the initial hype around them. So what happened? In this insightful essay, Clive Thompson synthesizes four lessons we can take away from the “improbable rise” of QR codes.
The first is that seemingly silly technologies can become transformative, and it can be hard to discern which ones are likely to do so. Second is that sometimes a solution is waiting for the right problem — in the case of QR codes, Covid-19 and the risk of viral transmission was that problem. Thompson frames the third lesson as “iteration can make dumb tools become good ones”, but we would describe it as “a technology needs the right ecosystem to thrive”. For QR codes, that meant changes like QR code readers being built into your phone’s default OS, so scanning “just works”. Another example of the need for the right ecosystem can be seen with MP3 players, which didn’t really take off until Apple created a connected marketplace to buy and sync your music. And finally, Thompson’s last insight is that “open usually wins over closed”: open standards allow for widespread experimentation and adoption in a way that proprietary ones do not.
→ 4 lessons from the improbable rise of QR codes | OneZero
2: Diagnostic AI doesn’t work. Yet.
As the Covid-19 pandemic was spreading around the world last spring, doctors were eager for any tool that would help them quickly identify patients likely to have been infected by the coronavirus. Since China had a four-month head start on the rest of the world, it had data on patients and how they presented that could be used to train AI models — if successful, those models could greatly speed diagnosis.
A paper in the British Medical Journal reviewing the hundreds of models created found that none had any demonstrated ability to assist doctors in diagnosing Covid-19. While doctors might have made better progress if they’d collaborated on projects rather than hundreds of small teams going their own ways, the largest contributor to the failures was the quality of the data in the training sets. For example, models classified patients lying down as more likely to test positive, since patients photographed lying down tended to be in worse condition and therefore more likely to be sick. Some models also extracted clues from the fonts used at different hospitals, and assumed patients at hospitals with higher positivity rates were more likely to have Covid. In both cases the correlations were real, but they weren’t clinically relevant predictors. These shortcuts to answers reminded us of a survey of how AI-based agents would “cheat” at various tasks. In one instance a diagnosis agent assumed skin lesions photographed next to rulers were more likely to be cancerous; in another, a Tetris-playing bot simply paused the game to avoid losing.
Beyond the schadenfreude we get to feel in laughing at the AI models for making such silly conclusions, this also shows the importance of having machine learning models that can provide signals about how they arrive at their answers. Black box algorithms that take in data and spit out answers can’t be interrogated, and therefore can’t be corrected; if we know that an agent is using the patient’s resting position to tell how healthy or sick they are, we can adjust the weighting of the model or provide a more mixed training data set.
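To make the shortcut-learning failure concrete, here’s a toy sketch. It is emphatically not how the BMJ-reviewed models worked, and the feature names (`lying_down`, `ground_glass_opacity`) are hypothetical; it just shows how a learner that optimizes training accuracy will happily latch onto a spurious correlation that evaporates at deployment time — and how inspecting which feature the model relies on lets you catch it:

```python
def best_single_feature(rows, labels, features):
    """Pick the feature whose value best matches the label on the training set.
    A stand-in for 'training': it maximizes training accuracy, nothing more."""
    def accuracy(feature):
        return sum(1 for r, y in zip(rows, labels) if r[feature] == y) / len(rows)
    return max(features, key=accuracy)

# Training data: 'lying_down' happens to correlate perfectly with a positive
# test, because sicker patients were photographed lying down. The clinically
# meaningful feature (a lung finding) correlates less well here.
train_rows = [
    {"lying_down": 1, "ground_glass_opacity": 1},
    {"lying_down": 1, "ground_glass_opacity": 0},
    {"lying_down": 0, "ground_glass_opacity": 0},
    {"lying_down": 0, "ground_glass_opacity": 0},
]
train_labels = [1, 1, 0, 0]

# Because the model exposes *which* feature it uses, we can see the shortcut.
shortcut = best_single_feature(
    train_rows, train_labels, ["lying_down", "ground_glass_opacity"]
)

# Deployment: a sick patient photographed upright, a healthy one lying down.
test_rows = [
    {"lying_down": 0, "ground_glass_opacity": 1},  # actually positive
    {"lying_down": 1, "ground_glass_opacity": 0},  # actually negative
]
test_labels = [1, 0]
predictions = [r[shortcut] for r in test_rows]  # gets both cases wrong
```

A black-box model would produce the same wrong predictions while giving us no `shortcut` variable to inspect; being able to ask "which signal is this model using?" is exactly what makes the correction (reweighting, or a more mixed training set) possible.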
→ Hundreds of AI tools have been built to catch covid. None of them helped. | MIT Technology Review
3: Solving hard problems in public
Back in May, Twitter’s META (ML Ethics, Transparency, and Accountability) team, led by Rumman Chowdhury, shared their research into Twitter’s image-cropping algorithm. Many Twitter users had been complaining that the algorithm demonstrated bias, treating images of women and non-white people in less-than-ideal ways. The META team shared the results of their research and the steps they were taking to address biases they found (including, in one case, deciding to give people control over cropping rather than trying to optimize the algorithm).
This week, as part of the DEF CON hacker convention, the Twitter team is running an algorithmic bias bounty challenge, where they are opening the image-cropping code to everyone to find instances of bias and offering cash awards to the winners. By running this challenge, they intend to “take this work a step further by inviting and incentivizing the community to help identify potential harms of this algorithm beyond what we identified ourselves”.
We love the approach here. It invites people into the process of making things better, with the implicit understanding that the more you expand the circle of perspectives and expertise, the better your results will be. As Rich Harang noted on Twitter, “in one week they got 31 submissions from individuals with some solid results. All they had to do was make it easy, and ask.”
4: Automating politeness
In the shift to remote work during the pandemic, videoconferencing has become a primary tool for discussion and collaboration. But many of the problematic social dynamics of meetings only get amplified over video calls. Specifically, being spoken over or ignored is more common in video meetings, especially for those who are not the most aggressive people in the room. Cisco is trying to alleviate this issue by introducing features into their Webex software that are “aimed at promoting more equity and productivity in meetings”. Most notably, a feature called Round Table gives each speaker an allotted speaking time during the meeting. When they are speaking, everyone else is automatically muted, and if they exceed their time limit, they can no longer speak.
While the intent here is well-meaning, the heavy-handed execution leaves something to be desired. A forcibly automated system for politeness and equity is more likely to provoke resentment than consideration, and doesn’t leave much room for humans to override it in unexpected circumstances. Other software companies have also tried approaching this problem from a slightly different angle, like the Time to Talk app that tracks the gender balance of who’s speaking in a meeting, or Talk Time for Google Meet that just counts how long each participant is speaking. These tactics raise awareness and reveal patterns without being prescriptive about how people should use that information. The lesson here: sometimes a nudge can be more effective than a push, especially when dealing with the intricacies of social dynamics.
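The measure-and-surface approach that tools like Talk Time take can be sketched in a few lines. This is our own illustrative sketch, not code from any of the products mentioned: it simply accumulates speaking intervals per participant and reports each person’s share of the conversation, leaving what to do about the imbalance up to the humans in the room:

```python
from collections import defaultdict


class TalkTimeTracker:
    """Accumulate per-participant speaking time and report shares.

    Awareness only: nothing here mutes anyone or enforces a limit."""

    def __init__(self):
        self.totals = defaultdict(float)  # seconds spoken, per participant

    def record(self, speaker, start, end):
        """Log one speaking interval (start/end in seconds)."""
        self.totals[speaker] += end - start

    def shares(self):
        """Return each participant's fraction of total speaking time."""
        total = sum(self.totals.values())
        if total == 0:
            return {}
        return {speaker: t / total for speaker, t in self.totals.items()}


# A two-minute meeting where one participant dominates:
tracker = TalkTimeTracker()
tracker.record("Alice", 0, 60)
tracker.record("Bob", 60, 90)
tracker.record("Alice", 90, 120)
breakdown = tracker.shares()  # Alice: 0.75, Bob: 0.25
```

Surfacing `breakdown` at the end of a call is the nudge; Cisco’s Round Table is the push, where the equivalent of `record` would also trigger a mute once a quota is hit.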
→ How Cisco is trying to stop mansplaining in meetings | Protocol
5: Getting real about climate futures
The impacts of climate change are readily visible and increasingly scary. Wildfires in Greece forced hundreds of people to evacuate by boat; Greenland is experiencing a melt event far larger than any seen there before; California’s Dixie fire has become the state’s second-largest in history, and earlier this summer the air in New York City, Philadelphia and other eastern cities became unsafe to breathe because of smoke from western fires carried across the country in the jet stream.
These are among the first signals that climate change is out of our control. These essays by Alex Steffen and Christopher Butler lay out just how screwed humanity may be if radical, coordinated change doesn’t happen very soon. We took two new ideas away from these pieces, and add one of our own.
First, as Alex lays out, this crisis is not going to be characterized by one or two difficult outcomes, like having to migrate away from the coasts or specific extinction events. It will instead include those and a lot more effects, all happening simultaneously and in increasingly unpredictable ways. People seek continuity and normality, and treat crises as things we need to fix so that we can return to that sense of continuity. Steffen argues we will not see continuity again. He does hope, however, that we can make the future even better than (if quite different from) what we had expected: “If we succeed in accelerating change quickly enough, we won’t reverse the catastrophes the last five decades has saddled us with, but we may well snap forward into a global boom of sustainable prosperity and systems ruggedization that not only enables us to be largely successful within discontinuity, but leaves billions of people better off than they are now.”
Second, as Butler posits, achieving this change is far beyond the reach of individual decisions like going vegan or buying an electric car. Significant systems-level change, in areas like tax and economic policy, is required to make individual change more effective and more desirable. This will take a lot of political will to act in the face of those who profit from the current state. People are, generally speaking, far more willing to take radical action to mitigate an ongoing crisis than they are to prevent a possible one.
What we would add here is that, psychologically speaking, people who take action risk being blamed for the consequences of that action, whereas crises that arise out of inaction have no “owner”, no one to point blame toward, and are therefore just “the things that happen.” Taking big risks and bold actions invites scrutiny, while standing pat and taking the default path often keeps your name out of the paper.
Creating a future in which people can thrive and the planet can continue to sustain us will require bold action, creative thinking, and international collaboration and agreement. We must continue to advocate for leaders who know this and are willing to act. More than home composting or using LED bulbs, this one action — making the scope of the crisis apparent to leaders and holding them accountable for their inaction — is the most important thing any of us can do. If you have a platform to reach people, now is when to use it.
→ When **it gets real | The Snap Forward
→ We only have eight years left | Christopher Butler
6: Making NFTs even ickier, now with strippers!
If it were possible to type while holding one’s head in one’s hands, this would be the piece that got written that way.
We’ve read the details of this particular NFT scheme a few different times, and while the mechanics are convoluted the underlying grossness is readily apparent. 3,000 NFTs representing “ownership” of a hand-drawn picture of a stripper were issued, and some are now being traded, though as far as we can tell no art has been attached to the tokens just yet. (People are simply trading tokens called “Stripperville #1287” and so on.) Soon, NFTs representing ten “Stripclubs” will also be minted and then, well, things get murky. Here’s what their site says:
Simply put, you pay to work at your Stripclub and if you work hard and are the most popular Strip Club you can earn $STRIP in return. Each week the highest yielding Strip Club and all the Strippers who worked there that week will be rewarded with $STRIP Coin.
That’s hard to follow, but seems to imply paying into a system each week in the hopes of getting a larger payout at the end. So, gambling, but where the mechanics of who wins or loses are poorly explained at best, and with the underlying theme of overt female exploitation. (Yes, all of the strippers are women. It seems like alternate possibilities didn’t even occur to the creators.) Usually we would ask you to click the link to learn more, but in this case it’s probably best if you don’t.
One bonkers explanation
OK, we know, we promised we wouldn’t talk about the metaverse. But we didn’t promise that Jim Cramer would keep quiet on the subject. And thank goodness he didn’t, because this is the most hilariously unhinged explanation of the concept. In his retelling, it’s somewhere between talking to people about Shakespeare and getting your friends’ opinions on your shirt. No quote will do this justice, just watch the video.
→ Jim Cramer explains the metaverse | CNBC
To get Six Signals in your inbox every two weeks, sign up at Ethical Futures Lab