Future #56: Trust, creativity, & rockets

Alexis Lloyd
Ethical Futures Lab
8 min read · Sep 6, 2021


Is the present moment one of catastrophe or renewal? How do you imagine non-obvious futures? How might we think about privacy and trust in nuanced ways? This week, we dig into these questions, with links to some truly excellent reads. But we also know the world is a LOT right now, so we made sure to include a bunch of fun stuff too, like wacky corporate inventions, NFT rockets, and an amazing conversation between GPT-3 bots.

— Alexis & Matt

What do we mean when we talk about privacy?

As we’ve covered here, recent decisions by a handful of large tech platforms are transforming internet privacy, largely for the better. Apple’s new consent rules mean that advertisers can no longer easily track your activity across multiple applications on your device. Google is in the process of removing third-party cookie capabilities from its Chrome web browser, which will have similar effects on the web. While these are just two of the most prominent changes, many more are shifting how data privacy and behavioral tracking operate on the internet.

Though these changes are, on the whole, pointing in a positive direction, Benedict Evans points out that they are all predicated on different underlying assumptions about what we mean by privacy. Is tracking okay if it only happens on your device but isn’t shared? This has been Apple’s theory behind how it tracks and analyzes activity on your phone, but when the company applied this rubric to child sexual abuse imagery detection, it turned out that a phone scanning the contents of your photos didn’t feel very private to people at all. What about first-party vs. third-party? There is a common assumption that it’s okay (or better) for websites to track your activity on their own site, but not for adtech companies to track activity across other websites. Why? Does this hold up to scrutiny? And does it inadvertently hamper competition by giving the advantage to web platforms that are already dominant, since they have access to far more first-party data than anyone else? Even consent is tricky when put into practice: while it seems uncontroversial to say “well, tracking is okay as long as there’s disclosure and consent”, in practice that leads to opaque legal disclosures and popups that nobody reads. Not exactly a recipe for empowering users.

So what do we mean when we talk about privacy? There isn’t one answer to this question, but many. They depend on the context, on the relationships between people and between users and companies, and on the information being revealed or safeguarded. Rather than seeking a unified theory of privacy, we find it useful to sit in the discomfort of this ambiguity and do our best as practitioners to navigate it, designing the most thoughtful experience for any given context.

Ads, privacy, & confusion | Benedict Evans

Imagining truly new things

Our cultural visions of the future tend to take the present state and extend it in a straight line. What if cars but they fly? What if secretaries but they’re robots? What if movies but they’re streaming? But many of the innovations that reshape our world are not so obvious. As Nick Hilton puts it in this OneZero piece, they are “unimaginable technologies” versus “imaginable” ones.

We discussed this at length and struggled with the “unimaginable” framing; after all, someone had to have imagined something in order to invent it. Here’s where we landed: it’s about the possibility space, and how many steps removed it is from your current state. For example, let’s take the internet. The desire that shaped the internet was something like, “What if libraries, but everything, everywhere, at once?” (OK, we’re being a little simplistic for the sake of argument, but you get the idea.) So the internet was invented to fulfill an imaginable desire or set of possibilities. However, once it existed, the internet introduced a whole new set of capabilities. Those capabilities then generated a set of second-order possibilities that led to less easily imagined outcomes like Twitter, Grindr, Bitcoin, etc.

This is the role of innovators and futurists everywhere: to find ways to leapfrog the current state and imagine the second-order possibilities that might come to be. This is why we track signals and trends — not to follow them down a one-dimensional path to their logical conclusion, but to ask “If this, then what?” If both Thing A and Thing B come to fruition, what new capabilities get created at the intersection of the two? Frameworks like STEEP analyses and Impact Wheels are scaffolding for our imagination, giving us ways to work past the current state and into multiple futures, many of which might be otherwise “unimaginable” from our present moment.

Imaginable tech vs. unimaginable tech | OneZero

It is the best of times, it is the worst of times

It is probably widely accepted that the last 18 months have been one of the most difficult periods in recent history. Millions have died, and millions more likely will, from a sudden and unexpected outbreak of a never-before-seen disease. It’s easy, therefore, to assume we as a society are in a period of collapse.

It’s also possible to look at the last 18 months and see communities supporting each other, the triumph of science in its ability to manipulate the building blocks of life, the difficult social reckonings leading to pockets of real change, and interventions by the US government halving rates of child poverty.

In this essay from Angus Hervey, we are reminded that the stories we tell about events, rather than the events themselves, are what determine whether we’re in the Dark Ages or the Renaissance. The reality of what’s happening is more nuanced than the narrative we construct about it retrospectively. For those living through history, it’s important to realize that collapse and renewal are intertwined, feed on each other, and are ever-present. This simple shift in framing (it’s possible to be both losing and winning) is helpful for understanding our present and can be used to frame more realistic versions of our futures. Utopias are attractive and easy to imagine, but if we’re to imagine possible futures, we must engage with both their positive and negative aspects.

Collapse, renewal, and the rope of history | Future Crunch

Bendy SkyMall

We were both lucky enough to get started in futures thinking and foresight practices at a time when that mostly meant “bizarre new gadgetry”: the first Kindle, the iPad, Google Glass, and a whole slew of also-rans and vaporware that promised new and exciting possibilities. Lately, innovations have gotten somewhat narrower (a decentralized, electricity-guzzling data store; programs that spit out weird images from text prompts) and more extractive in their intent. It’s been a minute since we could look at a new product innovation and not immediately start interrogating who it serves or how it could be misused.

That’s why we’re so excited to show you the new, floppy-screened seatback device that Airbus is testing. Based on new flexible OLED screen technology, the device would replace in-flight magazines, SkyMall catalogs, and even the seatback video display. The cover would have safety instructions printed on the inside (a simple and clever design choice guaranteed to dramatically increase the number of flyers who read them), and the device itself would serve as an entertainment portal, e-reader, in-flight menu, and more. While coverage so far does mention its ability to accept payments for drinks and meals, we haven’t yet found out how Airbus plans to keep passengers from taking these devices home.

Naturally, this is a ridiculous invention that is solving no actual problems, but it’s a lovely change of pace to think about “bendy SkyMall” rather than, well, [gestures at everything].

Airbus Wants to Replace SkyMall With a Digital OLED Magazine That Plays Movies | Gizmodo

You kind of have to be a rocket scientist

As readers of Six Signals, you all know we are skeptical about cryptocurrencies and non-fungible tokens. But in the spirit of open-mindedness to all possible futures, we are always hunting for an application or a project that uses blockchain technology for creative purposes. This comes close.

Artist Tom Sachs has created a “rocket factory” that uses NFT technology. A participant begins by purchasing tokens that represent the three parts of their rocket: the nose cone, the fuselage, and the tail. When all three have been purchased, the owner can burn the tokens for those three parts and mint a unique NFT representing their new, completed rocket. For an added fee during the minting, the creator can purchase a “launch option”, wherein their creation will be built as defined in the NFT and launched. Metadata about the launch is then added to the NFT, as well as a link to a video of the launch. If the crew of the Rocket Factory are able to retrieve the rocket, it is shipped to the owner in a protective plastic case.
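If you’re curious how that burn-to-mint mechanic hangs together, here’s a minimal sketch in plain Python rather than an actual smart contract. To be clear, all the names and checks below are our own illustration of the flow as described, not the project’s real code:

```python
# Toy, in-memory sketch of a burn-to-mint flow: buy three part tokens,
# burn them, mint one completed-rocket token. Purely illustrative.

PARTS = {"nose cone", "fuselage", "tail"}

class RocketFactory:
    def __init__(self):
        self._next_id = 0
        self._tokens = {}  # token id -> {"owner": ..., "kind": ...}

    def _mint(self, owner, kind, **extra):
        token_id = self._next_id
        self._next_id += 1
        self._tokens[token_id] = {"owner": owner, "kind": kind, **extra}
        return token_id

    def buy_part(self, owner, part):
        """Sell a token representing a single rocket part."""
        if part not in PARTS:
            raise ValueError(f"unknown part: {part}")
        return self._mint(owner, part)

    def assemble(self, owner, part_ids):
        """Burn one of each part token and mint a completed-rocket NFT."""
        parts = [self._tokens[i] for i in part_ids]
        if any(p["owner"] != owner for p in parts):
            raise PermissionError("can only burn tokens you own")
        if {p["kind"] for p in parts} != PARTS:
            raise ValueError("need exactly one nose cone, fuselage, and tail")
        for i in part_ids:
            del self._tokens[i]  # "burn" the part tokens
        # Launch metadata (video link, recovery status) would be attached
        # to this token later if the owner buys the launch option.
        return self._mint(owner, "rocket", parts=sorted(PARTS))

factory = RocketFactory()
ids = [factory.buy_part("alexis", p) for p in ("nose cone", "fuselage", "tail")]
rocket = factory.assemble("alexis", ids)
```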

As an art project, it is both accessible and compelling enough to spur thoughtful debate. Like many applications of blockchain technology, however, we’re still left wondering whether this project could have been made (or would have been as compelling) if it used any other set of data storage methods or asset creation techniques.

Rocket Factory | Tom Sachs

Trust is not a user problem

At first glance, it seems like good news that the federal government has tasked the National Institute of Standards and Technology with developing a rubric for what trustworthy AI systems look like and how such systems might be evaluated for trust. However, as this essay by Os Keyes makes clear, the initial paper coming out of NIST seems to be approaching the idea of trust from a dangerously simplistic perspective.

The paper’s focus is primarily on how users perceive the trustworthiness of a system, rather than on evaluating whether the systems are worth trusting. Moreover, the paper assumes scenarios where a single user is actively electing to use a single AI system, neglecting to take into account larger contexts, non-users, or systems that are imposed on people without their consent. Furthermore, by positing perceived trustworthiness as the ultimate measure of success, “mistrust is not conceived as a matter of shortcomings or inconsistencies in algorithms’ broader designs or their impacts themselves but as a matter of user perceptions. To fix ‘trust issues,’ the users’ perceptions would need to change; changes to the algorithm are necessary only to alter those perceptions.”

Finally, Keyes points out that however much we might want a more thoughtful approach to this kind of endeavor, the outcome may have been constrained from the start:

“It’s not just that NIST gets trust wrong; it’s that it can’t possibly get it right if trust is treated as merely a technical problem. Trust isn’t not technical, but it isn’t just technical, either. It is ultimately political. Giving the task of defining trust to a technical standards organization is ultimately a way of eliding those political ramifications. Trust is not just about how reliably an algorithm does its work but what that work is, who that work is for, and what responsibilities come with its development and deployment. NIST is not equipped to answer these questions.”

Standard evasions | Real Life

Two chatty bots

For ages, Alexis has wanted to create an art installation consisting of dozens of interactive AI bots and assistants all talking to each other in a cacophony of weird computational language. Basically, like this video, but more.

Two GPT-3 AIs talking to each other | Reddit

To get Six Signals in your inbox every two weeks, sign up at Ethical Futures Lab
