Can we stop technology from amplifying society’s inequalities? Perhaps, if we act now.

Nani Jansen Reventlow
Berkman Klein Center Collection
4 min read · May 11, 2018

Society’s existing power structures are reproduced through the technology we create. This means that systemic inequality and human bias are being amplified at a scale we are unable to fully comprehend. Examples ranging from “design flaws”, such as cameras labelling Asian people as blinking, to the use of algorithms in law enforcement that discriminate against the poor illustrate this on an almost daily basis. With the rapid development of self-learning technology, the negative impact will soon be apparent on a massive scale and difficult to roll back.

Aldous Huxley, the source of many of the images we have of technological dystopia, said that “technological progress has merely provided us with more efficient means for going backwards”. If “backwards” means reinforcing much of what is wrong with the world now, we are indeed well on track.

It is no secret that our societies are built on a myriad of power structures, holding certain groups in positions of power while marginalising and suppressing others. Some of these structures manifest themselves outwardly, while others are less visible, but no less nefarious. A common and incorrect assumption is that technology is neutral. The apps, algorithms and services we design ingrain the choices made by their creators. They replicate their creators’ preferences, their perceptions of what the user is like, and what the user wants or should want to do with the technology. These choices are based on the designers’ world view and therefore also mirror it. When those designers are predominantly male, privileged and white, this may pose serious problems for the rest of us.

Sometimes referred to as technology’s “white guy problem”, society’s advances in technology lay bare who wields the most power when it comes to designing new features for our brave new world. In a compelling op-ed published in The New York Times, Kate Crawford, co-founder of the AI Now Institute, listed a number of examples of technology design betraying its roots and the existing bias in its designer base. The pervasiveness of “glitches” such as Amazon not offering same-day delivery in predominantly black zip codes and women being less likely than men to see ads for high-paying jobs should raise suspicion as to whether these are indeed isolated incidents. The work of, amongst others, Safiya Noble, who in her recent publication chronicled Google’s history of racism against black women, points to a more systemic problem.

What we are seeing are not design hiccups that require a technological fix, but a reproduction of the existing power structures in our societies. And it is a reproduction that will reinforce itself at a speed and on a scale we do not yet seem to have fully grasped. Unless we make fundamental changes in how we look at the technology we produce, and especially how it gets produced and by whom, our digital society will be one that not only replicates the shortcomings of our offline one, but takes its dysfunctions to the next level.

As more and more of our lives unfold in the digital sphere and technology permeates society’s daily processes, these are crucial issues to address, and we need to do so in a manner that goes beyond tokenism. This means that conversations about diversity and inclusion in the digital rights space should have an intersectional analytical framework, identifying how interlocking systems of power and access impact individuals at the intersection of historically excluded and under-represented groups. Diversity and inclusion in the digital sphere encompass not only gender identity and ethnicity, but all aspects of the human condition, including sexual orientation, class, religion, ability and others. A first step is giving those people a seat at the table, including at the designing and engineering table, so that we can collectively come up with authentic, self-informed remedies and solutions. That way, we can start creating a more enabling and inclusive digital rights space.

We need to act, and we need to do it now. This age of technological creation and innovation gives us an opportunity to do better, and we need to seize it.

Nani Jansen Reventlow is the Director of the Digital Freedom Fund, which supports strategic litigation to advance digital rights in Europe. She is also an Affiliate at the Berkman Klein Center for Internet & Society at Harvard University and a human rights lawyer at Doughty Street Chambers in London. Nani tweets @InterwebzNani.
