A Journey to Unlearn: Speaking at UC Berkeley’s I School

Aekta Shah PhD · Published in BerkeleyISchool · 5 min read · Nov 14, 2023
Dr. Aekta Shah speaks at UC Berkeley School of Information.

Unlearning will be one of the greatest tasks for humanity as we march steadily towards an AGI future.

As I walk out onto Bancroft Way on this November day, the atmosphere on the UC Berkeley campus is charged with the energy of fall. There is a game tonight and a musical performance on campus, promising a Monday evening buzzing with activity. Students, in a midday lull, transition from classes to various afternoon pursuits. The air carries the scent of recent rain, and remnants of Halloween decorations cling to sidewalks, light poles, and buildings.

For me, returning to the UC Berkeley campus elicits a mix of emotions: nostalgia intertwined with both positive and painful memories. However, as I step onto this familiar ground, I recognize that this is the beginning of a new chapter, one marked, once again, by a recurring theme in my life: unlearning.

On this day, I’m speaking to about 200 master’s students in the School of Information’s Master of Information and Data Science (MIDS) program. MIDS is a cutting-edge degree program that engages early- to mid-career professionals around the world in information technology, data science, and, of course, AI. The program’s strength and uniqueness lie in its commitment to developing future tech leaders who are not only adept in their field but who also have the tools to build with responsibility, inclusion, and ethics at the core. As someone deeply engaged in the development of responsible and safe AI, I find the program’s commitment to diversity, equity, and inclusion in the context of tech design to be one of the most encouraging I’ve seen.

It’s true that most leading CS, engineering, ML, or AI training programs I’ve come across (to date) do not center responsibility or ethics, let alone diversity or equity, as core parts of the curriculum. I did not receive any of this as part of my own core training; I had to seek it out on my own. And despite my best efforts to expand my knowledge base, I’ve also had to spend a lot of time “unlearning” much of what I’ve been formally trained to know or do, while reconnecting with my own deep sources of cultural knowledge and lived experience.

It is for these reasons that I decided to center my talk to MIDS students on the theme of unlearning. I spoke about how, in my own journey building technologies and leading the development of responsible and safe AI, unlearning has been a constant companion. I’ve had to work against deeply entrenched notions that engineers or designers alone know what constitutes “good” tech design, and to bring to the surface the critical role that user voice plays. I’ve had to let go of normative ideas of “success.” True success, it turns out, often rises from the ashes of deep failure; we have to learn how to fail before we can learn how to succeed, while also recognizing that it is a privilege to have the safety and security to fail without catastrophic consequences for oneself or one’s loved ones.

And we all have to unlearn the idea that knowledge is confined to ivory-tower institutions or industry “leaders”; it springs just as powerfully from our ancestors, from indigenous knowledge systems, and from the everyday lived experiences of those who will never walk the halls of a big-name university or big-name tech corporation.

For me, right now, unlearning means dismantling my own self- or society-imposed “limiting beliefs” about what I think is possible, whether in the field of Responsible AI or in my own life.

Dr. Shah speaks to data science students about ethical, responsible, and safe AI.

The power of unlearning became real as students began to share their own experiences with each other in small groups, with some brave enough to reveal to the whole room the challenges and vulnerabilities inherent in their own journeys, and the times in their lives when they had felt “voiceless.”

Discussing vulnerable topics such as these in a professional setting might seem odd, but I believe that, for the leaders building AI, this uncharted territory of deep self-introspection is more important now than ever. Each one of us carries learned behaviors and beliefs we are not aware of, and we risk coding our own shortcomings and biases into the technologies we build. And, as the rise of Generative AI has shown us, the world we know today will not be the world of tomorrow. It is for this reason that I believe:

Unlearning will be one of the greatest tasks for humanity as we march steadily towards an AGI future.

In the spirit of vulnerability, this experience helped me to unlearn the idea that if I shared my voice, my true voice, my experiences, and my expertise as a leader in Responsible and Safe AI on a stage at UC Berkeley, it would be met with resistance or retaliation. Instead, I was met with full support and true enthusiasm, from start to finish, from the MIDS Immersion leaders (Drew Paulin, Denise Simard, Siu Yung Wong), from Assistant Dean for Academic Programs, Equity & Inclusion Catherine Cronquist Browning, and from the students. For this, I extend my deepest gratitude to the UC Berkeley I School and the MIDS program: thank you for providing me with a platform and an opportunity to unlearn.

Dr. Aekta Shah is a researcher interested in building responsible and safe AI. She was most recently a senior AI researcher at Google, where she developed and directed a responsible/ethical AI program in Google Search and led the xAlphabet program to advance trust and safety in product design for global and emerging populations.
