Future #50: Computing as a funhouse mirror
Welcome back! We skipped the last issue for a long Memorial Day weekend, and we hope you’re all enjoying the beginning of summer. This week’s issue is all about how reality is represented in computational systems: how our identities are reflected or distorted, how data is collected (or synthetically created), and how the software we use is modeled on societal frameworks. Read on to the end for some transparent screens from the future.
1: Reimagining computing metaphors
How do the computing paradigms we use reflect our social reality? And what happens when societal change outpaces those paradigms, leaving us to wrestle with metaphors that no longer reflect the way we work or think? Those are the central questions of this essay by Ben Zotto, in which he calls into question the desktop computing metaphors that we all engage with daily: documents, files, folders, etc.
Zotto points out that these concepts worked well in the 1980s and 1990s, when computation was largely centered on productivity and was a natural extension of how people worked in physical space at the time — the desk, the paper documents, and the file drawers. But the internet radically changed what we use computers for. They are no longer just tools for work and productivity but extensions of every aspect of our lives and cognition, and this changes what kinds of materials we create:
“Instead of creating documents — a sort of heavyweight work product — a lot of what we work with on our computers now are fragments: URLs and meme gifs that we copy paste between windows or chats, a PDF that we download to print out or fill in and email.”
In response to those changes, software has evolved from explicit organization models toward opportunistic search and retrieval, but even those actions are grafted onto older concepts. So what would a computer that more accurately reflects our needs look like? Zotto proposes a UX where everything is a fragment, where recall strengthens or decays in a way that mirrors human memory, and where there is deeper integration between a local computer and everything in the cloud.
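Memory-like recall of this sort could be modeled, very loosely, with the kind of exponential forgetting curve used in spaced-repetition systems. Here is a minimal sketch of one way to score a fragment's "recall strength"; the function name, the half-life parameter, and the log-reinforcement term are all our own illustrative assumptions, not anything Zotto specifies:

```python
import math
import time

def recall_strength(last_accessed, access_count, now=None, half_life_days=30.0):
    """Score how 'present' a fragment should feel: frequent, recent use
    keeps it vivid; neglect lets it fade, roughly like human memory."""
    now = time.time() if now is None else now
    age_days = (now - last_accessed) / 86_400
    decay = 0.5 ** (age_days / half_life_days)  # exponential forgetting
    reinforcement = math.log1p(access_count)    # diminishing returns on repeats
    return reinforcement * decay

# A fragment touched yesterday outranks one untouched for a year,
# even with the same access count.
now = time.time()
fresh = recall_strength(now - 1 * 86_400, access_count=5, now=now)
stale = recall_strength(now - 365 * 86_400, access_count=5, now=now)
```

A real system would tune the half-life per fragment (as spaced-repetition schedulers do), but even this toy version captures the idea of retrieval that strengthens with use and fades without it.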
We wrote in a previous issue about “addressable ideas and the connective tissue of the web”, looking at the way we document, connect, and store ideas online. This essay extends similar ideas to the computer itself, and makes us think that we may be in the early stages of a renaissance where we can reimagine our fundamental models for knowledge management and sharing, both online and off.
2: Who regulates your privacy?
In this newsletter, we often critique the power that massive tech platforms have and the way that power is often used to exploit or surveil their users. But what happens when a big tech company institutes a massive change that purportedly benefits users? We’ve seen this in two recent moves to better protect data privacy.
The first is that Apple is now making apps request user permission to track their data across other apps (you’ve probably seen these prompts on your phone). Mobile analytics firm Flurry has estimated that 96% of users have been opting out of tracking, which is a huge blow to the entire ad tech industry, and has been causing an uproar from advertisers, and of course, Facebook and Google.
The second is that Google is deprecating third-party cookie tracking in Chrome by 2022. To which you might respond, “but doesn’t Google also rely on these for its massive advertising business?” Well, yes and no. Google still has a huge amount of first-party data through the use of its own services (search, YouTube, Gmail, etc.). So while this looks like privacy advocacy, it has been described by data broker Acxiom as “weaponizing privacy to justify business decisions that consolidate power to their business and disadvantage the broader marketplace.”
So on the one hand, it’s great that we’re getting more privacy protections, and that the norms around tracking may be shifting from opt-out to opt-in. But on the other hand, these changes have only come about because Apple and Google have decided the changes are aligned with their profit models. Issues like these were historically worked out through public discussion and the development of standard protocols, not in secret on conference calls and in board rooms. As this piece from The Reboot puts it: “The whims and spats of unaccountable tech companies affect huge swaths of economic activity and dictate in very concrete terms what people can and cannot do online. In such a system, privacy lies outside the purview of democracy, as do most of the important decisions about the structure and values of our communications infrastructures.”
→ Power play: Big tech’s feud over mobile app tracking | The Reboot
3: Who owns your simulation rights?
Typically when identity theft is discussed, it refers to the theft of a person’s “metadata”: an address, a mother’s maiden name, a first pet’s name, a Social Security number, or a password can be used to take money or property from a victim. One’s face, voice, and speech patterns — the things we typically associate with our actual identity — haven’t been nearly as easy to take (outside of Mission: Impossible movies).
We’ve recently seen several examples of our identities being altered, copied, and dissociated from us in ways that raise new questions. Most recently, TikTok users noticed that a “beauty filter” was being applied to all users on Android phones and couldn’t be disabled. Users reported softer, more uniform skin tones and, in some cases, jawlines becoming less angular. Complaints led to a “fix” but little in the way of explanation.
A softer jawline is one thing, but recreating a whole person algorithmically? That’s been happening too. Kiyan Prince, a young soccer player who was stabbed to death while breaking up a fight, was recently recreated inside the FIFA 21 video game. His father started a charity to combat knife violence and sees Kiyan’s digital recreation as both a tribute to him and a way of raising awareness of his actions. Microsoft, meanwhile, recently patented a method for creating chatbots from a person’s social media posts, emails, and other text, giving family members and friends the opportunity to “chat” with someone who has died.
These new possibilities raise all kinds of questions. There is undoubtedly a cathartic appeal to simulating a loved one who has passed, but doing so also calls into question a subject’s ability to own their own identity. Will we soon see Do Not Simulate directives to protect one’s identity after death? What happens when a chatbot says something out of character — is the reputation of the deceased tarnished by the actions of their bots? And what’s to stop a living person’s identity from being simulated from their online speech, videos, and other publicly available assets? What ownership do we have over our own personalities and how they’re portrayed?
→ TikTok changed the shape of some people’s faces without asking | MIT Tech Review
4: Targeting abuse requires humans + machines
Though there has been much discussion about online abuse in the past several years, most platforms still have woefully inadequate solutions to the problem. Susan McGregor of the Tow Center explains that this is the case because the issues can’t be effectively addressed by either humans or machines — humans can’t handle the scale, and computers can’t handle the nuance.
In her research, she has found that one part of the problem is the poor quality of the data sets that are used to train machine learning systems to detect abuse. That training data is typically created by scraping data from public accounts and then asking research assistants to tag abusive posts. Instead, the Tow Center and Brown Institute are running a project where they are “recruiting and paying women journalists to share and label their own Twitter conversations. With enough participants, we are confident that we can build a tool to better recognize the linguistic characteristics of the online abuse that targets women journalists, but may also help clarify the broader mechanisms of harassment used in these spaces.”
→ “Automatically” detecting online abuse requires an editorial eye | The Tow Center
5: Using synthetic data to understand reality
As we’ve discussed before, the current state of the art in machine learning requires a large dataset of well-tagged inputs to “train” a model on what it should be looking for. These data sets can be flawed in terms of the data they include; the best-known example is that facial recognition algorithms tend to perform worse on non-white faces because of a lack of diversity in the pictures used to train them, which is often a reflection of the lack of diversity among the developers of these systems.
One company is now generating “synthetic” faces to help build more diverse data sets and, hopefully, lead to AI that works better for all people. The faces generated are, by definition, well tagged: they are created from specific inputs that describe the face, rather than needing to be tagged retroactively (e.g., if you ask for a Black woman in her late forties with greying hair and a septum piercing, you know for sure that the picture created fits that description).
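The "labels by construction" idea is worth making concrete. In this toy sketch (the attribute names, values, and function are entirely our own hypothetical stand-ins, with a string placeholder where a real image generator would go), the generation spec itself becomes the label, so no human tagging pass is needed:

```python
import random

# Hypothetical attribute schema; a real system would use many more dimensions.
ATTRIBUTES = {
    "age_range": ["20s", "30s", "40s", "50s", "60s"],
    "skin_tone": ["I", "II", "III", "IV", "V", "VI"],  # Fitzpatrick scale
    "hair": ["black", "brown", "blonde", "grey"],
}

def synth_record(spec=None, rng=random):
    """Return a synthetic sample whose labels ARE the generation spec."""
    spec = spec or {k: rng.choice(v) for k, v in ATTRIBUTES.items()}
    image = f"<render of {spec}>"  # stand-in for the actual face generator
    return {"image": image, "labels": dict(spec)}

rec = synth_record({"age_range": "40s", "skin_tone": "V", "hair": "grey"})
# rec["labels"] matches the request exactly; no retroactive tagging step.
```

The design point is that the label lives upstream of the image: error-prone annotation after the fact is replaced by a spec that is true by definition.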
While the effort here is commendable, we can’t help but see one flaw: these faces are, themselves, generated by machine learning systems. Those systems still need to be initially trained on real-world data in order to produce synthetic facsimiles — which means that there can still be biases or gaps in the original training data that then get reproduced ad infinitum. While this approach is intended to diversify data sets, there’s a risk that it could actually end up reifying and further obscuring our existing blind spots.
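The "reproduced ad infinitum" worry can be seen in a toy simulation (entirely our own illustration, with a trivial resampling "generator" standing in for a real generative model): a generator fitted to a skewed sample can only hand that skew back, however much synthetic data you draw from it.

```python
import random
from collections import Counter

random.seed(0)

# A skewed "real" training set: 90% group A, 10% group B.
real = ["A"] * 900 + ["B"] * 100

def generator(data, n, rng=random):
    """Trivial 'generative model': sample from the empirical distribution."""
    return [rng.choice(data) for _ in range(n)]

synthetic = generator(real, 10_000)
share_b = Counter(synthetic)["B"] / len(synthetic)
# share_b stays near 0.10: the synthetic set inherits the imbalance
# rather than correcting it.
```

A real generative model is far more sophisticated than empirical resampling, but the structural point holds: diversity absent from the original training data cannot be conjured by sampling from a model of that data.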
→ These creepy fake humans herald a new age in AI | MIT Tech Review
6: Let the robots do the thinking
A short film from the Financial Times (yeah, we didn’t know they did that either) starts as a fairly typical exploration of the risks of data collection and public/private data-sharing partnerships, adds in the fear and anxiety of living through a global pandemic, and, most interestingly, ends in a place that is truly unexpected. It also stars Arthur Darvill, whom we’ve loved since his days alongside the Eleventh Doctor on Doctor Who.
What’s most compelling about this short film is the complexity it introduces to the typical questions of individual agency and uniqueness versus algorithmic summaries and data-driven recommendations. In the face of a true crisis that affects everyone, how do we square individual rights with the safety of the greater community? Can someone act morally in defiance of an algorithm, when that algorithm has vastly more information to justify its position?
→ We know what you did during lockdown | Financial Times
One transparent interface
One of our favorite forms of design fiction is the speculative interfaces that are designed for sci-fi films and shows. We’re also big fans of The Expanse, so we were very excited to see this gallery of UI examples from the show, covering everything from navigational heads-up displays to the ubiquitous transparent handheld devices.
→ The Expanse UI design | HUDs and GUIs
To get Six Signals in your inbox every two weeks, sign up at Ethical Futures Lab