Technology is Political and Design is Biased

Anya Kandel
Jan 30, 2017 · 7 min read


VR won’t save the world but maybe we can save it

We must look past the latent assumption that technology is socially and politically neutral. VR is open to influence and a venue to explore inherent cultural biases built into emerging technologies.

VR is a potential future holodeck for creativity

I never played video games growing up. I wasn’t among my friends who spent hours in chat rooms. I didn’t participate or contribute to creative, virtual worlds like Second Life. And I’ve never felt fully engaged when I use a screen to access information. Maybe because of this, I have also been deeply curious about the communities that develop and revolve around Internet technology (IT) and virtual worlds.

But about a year ago, in my friend’s warehouse apartment, with a Virtual Reality headset on, I felt like I “got it” in a way I hadn’t before. Our experiences (and the programs) were interactive: we treated virtual environments like improv sets, painted in 3D and solved mysteries together. Beyond 3D video, the responsive nature of the experience was viscerally familiar and made the fidelity of the art irrelevant — my imagination filled in the blanks. And I was surprised by how collaborative and interactive the experience was with my friends in the room. VR that day was improv. It was real-time collaboration. It was a play space and design studio. VR was a rich, nascent holodeck for creativity.

I was converted — less by the experience of VR itself than by the glimpse into the undetermined future of the behavioral norms and “culture” of VR (and AR), and of how we will use them to communicate, collaborate and create together.

How we experience the exchange of information, and the culture of engagement, through Virtual Reality and Augmented Reality is being determined right now by those investing in and building the technology. Companies have been using these technologies very practically for years: directing factory operations, assisting surgery, training pilots and police, displaying information in cars, rendering full-scale models. As venues for artists and programmers move from screens to space, investors and companies are looking to capture the market in creative software development, entertainment and design. The film and entertainment industries, for example, are exploring new venues for storytelling, primarily with 3D video, while design and software companies are nudging toward the reality of portable design studios in the shape of VR and AR.

But as the market turns toward emerging technologies like these, I have been paying attention to the creative individuals and communities working at the edges of the medium, exploring the systems, frameworks and philosophies around how we engage with the technology. To name a few…

Luke Iannini is imagining whole new linguistic and programmatic dimensions for creating things in VR. One of his many experiments is Rumpus, which explores how users might program in three-dimensional space. This might be obvious to some, but conceptually it blows my mind! When I asked him how it worked, he reminded me that the way programmers visualize their work is personally unique. The standard 80-character line limit, he said, is a holdover from the punch card, but that doesn’t mean the punch card is the metaphor that frames how he thinks when he programs. Instead, he has his own metaphors for visualizing his work. So basically, I said, one day, if I want to code in VR, I can step into your brain and design from within it? “Pretty much.”

On the other end of the spectrum, Jasper Patterson and Dustin Freeman are experimenting with applications of the technology. They started a performance group called Raktor to use VR as a space that deepens connections between people rather than isolating them. They have been bridging communities and people, and layering stories through improv. Drawing a parallel between their experiments and the beginnings of cinema, Jasper reminded me that moving pictures started by filming actors on a stage: the notion that our imaginations would fill in the blanks between places, story lines and people had yet to be discovered. Now, they see themselves at the same precipice.

Three-dimensional language and a system of communication our brains might not yet know how to process — this is the potential future of VR, but only if we care about the ethical and cultural biases that influence the development of the technology.

The history of IT biases our behavior and feeds our assumptions

It’s easy to forget that, in the not-too-distant past, the meaning and application of computers (and connected computers) were open and undefined. The history of the emergence of computers and IT is also an important reminder that the collective choices we make about emerging technologies, shaped by the social context and creative influences of the developers and thinkers of the time, inform the social frameworks for how we expect those technologies to behave.

Fred Turner argues that the IT industry maintains an ethos of communalism rooted in what he calls the “New Communalist” groups of the 1960s, who embraced the counterculture movement along with the advent of computer technology. These communities not only valued communal, non-hierarchical models of living, but were also influenced by new theoretical notions of networked communication and collaboration, informed by developments in networked computing and cognitive science.

The Whole Earth Catalog (founded by Stewart Brand) became a shared resource for communities not only to buy products, but to exchange information. The Whole Earth ’Lectronic Link (WELL), which emerged out of the Whole Earth Catalog, arguably helped to shape the social architecture of the Internet today as a digital space where affinity groups were formed. This New Communalist culture provided a fundamental ethos for startups in Silicon Valley and for the emergence of collaborative Internet cultures through “network forums,” in which people come together to share ideas, information and commerce. [See From Counterculture to Cyberculture]

Thanks to this vibrant history of computers and the development of the Internet, we did indeed create technologies that provide a venue for the free and open exchange of information. But the shared online behaviors initiated by communities like the WELL influenced not only the way we communicate online; they also shaped our perception of what the Internet is supposed to be.

We have carried with us the latent assumption that Internet technology promotes a fair and open society, while computer technology in general is inherently apolitical. This belief system is apparent in Big Tech companies today, which champion the values of both individual freedom and communalism, in the workplace and in the technology itself. We joke that every new app advertises itself as the thing that will “save the world,” but that ethos has a history, and it was born and nurtured here in the Bay Area.

Perhaps this history is one of many reasons why current politics feel so shocking to so many people in the US, especially in the Bay Area. Our assumptions about the inherent fairness built into the platforms we use every day to communicate have been disrupted, leaving people feeling deeply betrayed. While packet switching and IP give an equal nod to every piece of data they carry, the Internet is not inherently nonhierarchical, and “equal access” to information has proved to be a deceptive assumption.

Let’s challenge our assumptions and bias equity in our designs

“Biased computer systems are instruments of injustice,” Helen Nissenbaum says, and freedom from bias should be a criterion by which we judge the quality of systems. Design is, invariably, a process of making decisions, so everything we create is biased. What matters is our awareness of what we choose to bias and why.

Nissenbaum classifies “bias in computer systems” into three parts: “Preexisting bias is rooted in social institutions. Technical bias arises from technical constraints or considerations. Emergent bias arises in the context of use.” Though time moves forward, this process is far more cyclical than linear: preexisting bias is rooted in historical biases that frame our presumptions today.

How then might we, in a context very different from that of the ’60s and ’70s, influence the culture and politics of emerging technologies like VR and AR so that socially and ethically aware frameworks are built into them?

Phoebe Sengers says that we can’t just observe and be aware of our assumptions; instead, we must engage in constant, critical reflection, always seeking to understand the forces we are normally unaware of (politics, race, economics and so on), which inevitably influence design practices.

Especially while these technologies are still nascent, it behooves us to look beyond the pull of commercial opportunity and the weight of market and political agendas, and to imagine the future behaviors, frameworks, systems and approaches we intend to promote with them.

To start, we can learn from communities and creators like Luke, Dustin and Jasper, who are already questioning the framing of the technology and building designs deliberately biased toward ethical and social considerations. We can look to groups who are creating space to celebrate women and stand up for the global majority. We can learn from the biases we have already built into our designs and work past them. And, indefinitely, we can stay diligently aware of the preexisting biases we hold about what (Internet) technology is supposed to represent to us.

In Sum

While the ethos of communalism in Internet technology may persist, we can’t afford to live with the latent assumption that Internet technology is both socially equitable and politically neutral. And if recent events have shaken us enough to challenge this assumption, we cannot presume that the technology will fix itself.
