Time to redefine what it means to be in tech

Roel Dobbe
9 min read · Dec 4, 2017


Launching a cross-disciplinary graduate group to reflect on issues of society & technology

Cartoon by © Arend van Dam

In the last few months, a series of alarm bells has rung surrounding the various ways in which technological advances and the tech sector are helping to undermine our social welfare and democratic institutions. The critique extends from the ways in which machine learning algorithms used in decision-making promote or reinforce discriminatory biases, to accounts of how smartphone apps are destroying the development of younger generations, to more and more governments struggling to tame the growing power that a handful of tech giants hold over our vital digital infrastructures, as well as over the political agendas that could change these for the better. Another significant blow for the tech world came in a recent congressional hearing, where the tech giants appeared on Capitol Hill and publicly acknowledged their role in Russia’s influence on the presidential campaign.

The skepticism is not confined to the public. There is also growing awareness among people in tech that some things have to change. Some insiders who have left tech in disillusionment warn of the detrimental effects of pervasive digital technologies on our attention spans and, ultimately, on democracy at large: “If we only care about profit maximisation, we will go rapidly into dystopia.” Some of them have united and are “dedicated to creating a humane future where technology is in harmony with our well-being, our social values, and our democratic principles.” Even Mark Zuckerberg himself offered a mea culpa, which, though not very specific, hints at a willingness to reflect on how his business might do a better job of serving society.

It is fair to conclude that the days are behind us in which technology was broadly believed to be a value-neutral means to bring about sustainable economic growth for all and to solve the world’s most challenging problems without side effects. There are clear political consequences and moral stakes to the use of tools that change how we think, how we make decisions, and how we connect to other people. Worryingly, the incessant stream of critique and warning signs is feeding a new cleavage in society, driving tech practitioners and the public into the extreme camps of dystopian thinkers versus defensive techno-optimists.

Historically, such rhetorical extremes have rarely been productive in overcoming existential challenges to society. On the one hand, sketching dystopian futures in times of widening economic inequality breeds broad public fear and distrust of technology as a whole, taking away people’s agency to demand better alternatives. It shapes a new dominant paradigm, which says that tech is broken and inherently evil, and perhaps worth putting aside altogether. On the other hand, the techno-optimist might try to convince us that these problems can be solved with a simple software update. Though often unintentionally, engineers tend to exhibit a fix-tech-with-tech reflex and a natural faith in technology’s ability to solve problems. This cultural phenomenon can prevent alternative perspectives from having a meaningful impact on the development of disruptive technologies.

Engaging engineers
Going beyond our dystopian worries and our skepticism about engineers’ capacity to engage with other disciplines, what is needed to provide a more constructive middle ground in which societal implications can be addressed? If anything, the current surfacing of technology’s societal side effects warrants a new way of designing and governing technological systems, one that includes a broader range of voices and integrates critical thinking and perspectives from the relevant social sciences, policy-makers, lawyers and ethicists.

In an effort to engage intellectuals in this critical debate, Cathy O’Neil, author of the well-received book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, recently pointed the finger at academia for being “asleep at the wheel” while tech companies are reshaping our lives, arguing that we need more research to understand how and “to ensure that the same mistakes aren’t made again and again.” To the many scholars who have already been working on these issues, O’Neil’s blunt and exasperated tone (including claims such as “There is essentially no distinct field of academic study that takes seriously the responsibility of understanding and critiquing the role of technology”) clearly struck a nerve, leading to a slew of responses from professors explaining what is already being done (see for instance here and here).

With her closing line, O’Neil had a rather provocative wish for students: “If only they would scrutinize the big tech firms rather than stand by waiting to be hired.” As PhD students in Control & Intelligent Systems, Information Systems & Law, and Machine Ethics, we feel that is a blunt statement, but we hear her. Rather than turning to the easy mode of scrutiny and plain critique, however, we propose an alternative wish.

“Let’s redefine what research, design and deployment of new technology should look like rather than stand by complying with the status quo.”

Redefining tech
Scrutinizing the status quo is part of it — every good education should provide such critical thinking. A crucial and sometimes uncomfortable step is to look at ourselves and understand which questions we are not asking now. We need to understand our own reference frames as technologists and how they shape the design of our systems. From there, what perspectives do we need to add and how can we integrate ethics into our research and design contexts in a meaningful and productive way?

Retrospectively, what is it that Zuckerberg and others could have done differently to prevent the public’s support and trust from deteriorating? And looking ahead, what role should academics and educators take on now? What responsibilities do we have and how can we learn from these lessons to develop technology that supports human flourishing and democratic ideals? And what does it mean for educating the next generation of tech leaders waiting to make yet another dent in the universe?

To some, the solution is rather simple and lies in making sure our future influential tech leaders are educated more broadly, “to provide some understanding of how society works, of history and of the roles that beliefs, philosophies, laws, norms, religion and customs play in the evolution of human culture.” We think that integrating more perspectives throughout engineering education is critical. In addition, we think we need to build capacity among engineers to engage with other disciplines and the public in constructive ways.

Her bluntness and provocation aside, we believe that O’Neil’s opinions about education and her call for a countervailing force from within the tech fields to address issues of ethics and justice are critical and timely. We agree that many of our current societal struggles with technology can be “clarified by professional and uncompromised thinkers.” We acknowledge that, despite the engagement of many professors with these questions, current educational programs often lack the room to ask these broader questions, and it is mostly left to individuals to broaden their own horizons. And yes, more often than not, engineers who depend on jobs in the tech sector are careful not to rock the boat too much. That said, we believe the current rate of technological change and its societal impact cannot be countered with the popular engineer’s argument that “societal issues and the way our technology is used should be left to social scholars and policymakers.” Too much of that mentality has brought us to where we are today.

Efforts and initiatives so far
In response to growing worries about the development and use of “artificial intelligence”, some engineers have taken the initiative to discuss ethical dilemmas and to help develop an ethos across the field. Recognizing the ongoing development of AI, the One Hundred Year Study on AI will produce a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues. The large tech firms have joined forces in a Partnership on AI to address “new concerns and challenges based on the effects of [..] technologies on people’s lives.” A week ago, a new institute called AI Now was launched at New York University with the mission to produce interdisciplinary research on the social implications of artificial intelligence and to act as a hub for the emerging field focused on these issues. Founding director Kate Crawford has long promoted a more cross-disciplinary approach to developing and deploying AI, stating that “artificial intelligence presents a cultural shift as much as a technical one”. And recently, many efforts have emerged to address the role of ethics in engineering and computer science research and education.

As graduate students, we find that there are plenty of people who share these worries, and some who are engaged across disciplines. However, there is little space and attention in our graduate programs and daily research realities to engage in such discussion and extend our thinking as students, tech developers and future leaders of the field. This motivated some of us to organize a Technology & Society Forum to “pry open, for critical analysis, the Pandora’s box of social forces contained in our smartphones, apps, and tablets.” And some of us ran a reading group, which culminated in an article titled Automating Us — the entanglement of people and machines. As engineering scholars, our main insight was that more cross-disciplinary interaction and research is necessary, and we reached out to scholars from our own and other fields to “meet us at the boundary.”

A new cross-disciplinary group for critical and constructive reflection
The subsequent insights and interactions across campus have cross-fertilized and inspired the formation of a new cross-disciplinary group called Graduates for Engaged and Extended Scholarship in computing & Engineering (GEESE), a group for cross-campus constructive and critical reflection on society & technology. GEESE will develop community among graduate students and postdocs interested in working at the intersection of engineering, the social sciences and the humanities. Some of the aims we want to contribute to are:

  • Stimulating cross-disciplinary debate, scholarship and collaboration across campus between engineering, the social sciences and the humanities
  • Developing cross-disciplinary thought on new technologies and their societal and political implications, and disseminating it to the public
  • Promoting the teaching of critical thinking in engineering curricula
  • Promoting engaged and responsible scholarship among graduate students as it relates to research, design and integration of new technologies
  • Instituting structural opportunities for graduate students to do cross-disciplinary research on the societal impact of technology

The aims of the group are ambitious, and our activities will become concrete as we start building a broader community in the coming semester. We are looking for people who want to think critically and constructively and engage at the boundary to join us and help shape GEESE. We want to start with a better understanding of the needs students have around addressing the societal impact of their field:

  • What are the questions you want to be discussed?
  • What is currently missing in your education or research context?
  • And with what fields would you like to interact to broaden your perspectives?
  • If you have already graduated, what do you wish you had learned more about to deal with the ethical issues of developing and deploying new technology today?
    (Please leave your comments below this blog post.)

Though the group aims to engage graduate students and postdocs, we also encourage undergraduates to share their views and needs. And to those of you who teach: your input is critical to reimagining what an engineering or computer science education may look like.

GEESE is about culture
A final note on our name: GEESE. We wish to reset some of the common ideas about this amazing animal. Geese are incredible team workers (think of their V-formation flight), brave navigators (exploring various continents without a GPS), skilled communicators, devoted caretakers (no injured goose is ever left behind), and fierce protectors of their young. They also naturally acquire an overview of how various environments can benefit them. In other words, a great inspiration for our cross-campus engagements!

Changing a culture is perhaps one of the most challenging and slow-moving processes we face in life. We believe the socially involved melting pot at Berkeley is an ideal place to start broader conversations that can extend to the public. “Wouldn’t it be cool,” an engaged social science professor once told us, “if one of the features of Berkeley Engineering was that it really tapped into these issues? Could it become part of the Berkeley signature of what it means to be an engineer who graduated from Berkeley?”

We look forward to engaging with you!

— — —
GEESE is currently organized by:

Roel Dobbe — Control, Intelligent Systems & Energy (EECS)

Nitin Kohli — Information Management & Systems (School of Information)

Thomas Gilbert — Machine Ethics & Epistemologies (Sociology)

Sarah Dean — Optimal Control & Learning (EECS)

Maxim Rabinovich — Statistics & Machine Learning (CS)

The team is growing and will organize activities to build community in Spring 2018.

Want to join our mailing list (for Berkeley students)? Or have suggestions for the GEESE initiators? Please send an email to Roel Dobbe at dobbe@berkeley.edu.

And please leave your ideas and answers to our questions below!
