the interview: Brandie Nonnecke, PhD

Justine Humenansky, CFA
Published in the table_tech
8 min read · Oct 5, 2021

Brandie Nonnecke, PhD is Founding Director of the CITRIS Policy Lab at UC Berkeley, where she supports interdisciplinary tech policy research and engagement. She is a Technology and Human Rights Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School and a Fellow at the Schmidt Futures International Strategy Forum. She served as a fellow at the Aspen Institute’s Tech Policy Hub and on the World Economic Forum’s Council on the Future of the Digital Economy and Society. Brandie was named one of the 100 Brilliant Women in AI Ethics in 2021. Her research has been featured in Wired, NPR, BBC News, MIT Technology Review, PC Mag, Buzzfeed News, Fortune, Mashable, and the Stanford Social Innovation Review.

We’re so excited to feature you and your brilliant work! To get started, could you tell us a bit about how you became interested in the policy side of technology?

Absolutely. I was at Iowa State University and had just finished my undergraduate degree, which was in design. I was working on a website for a center that was doing work in Sub-Saharan Africa and, specifically, the poorest district of Uganda. I started to review some of their research and noticed that even in the poorest district in Uganda, almost everyone had a cell phone. I decided to go back to school for my master’s and to go to that district to understand why these individuals were investing in cell phones when they didn’t even have readily available access to potable water or nutrient-dense food. The roads weren’t paved, but they had cell phone towers. So, I became really interested in telecom policy issues and went on to do my PhD in telecom policy.

I like thinking about the role that technology plays in our lives. Technology is pervasive, which makes policy incredibly important. It’s not enough to think about building the next transformational technology; we also have to think through the policy strategies the public and private sectors should implement to better ensure we’re designing and deploying these technologies in ways that maximize societal benefit.

I totally agree; it’s so important to consider the societal impacts before it’s too late. You did such incredible work during your PhD at Penn State. What was your postdoc at Berkeley focused on?

I loved my postdoc. I did it with the Center for Information Technology Research in the Interest of Society (CITRIS). Before that, I had been thinking a lot about telecom and governance tech policy, but in my postdoc I got to actually develop some of the tools that regulators try to regulate. We worked in collaboration with Governor Newsom, when he was lieutenant governor, to develop a platform called the California Report Card. We crowdsourced priority policy issues from Californians and then had other Californians rate those policy ideas. It was fascinating. By crowdsourcing the ideas and then implementing something called collaborative filtering (having participants review each other’s ideas), genuinely novel ideas made their way to the top, relative to traditional qualitative methods. That matters because the tendency in qualitative research is to quantify responses and treat emerging consensus around an issue as an indicator that the issue is important. Especially with policy issues, the most prevalent issue is sometimes not the most important one; it’s just top of mind for people. We also experimented with tweaking the algorithm that decided which ideas were redistributed to participants, and we could see how those tweaks influenced people’s behavior. We were able to prime them to give certain responses and to stay engaged on the platform longer.
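Her description of the California Report Card maps onto a simple crowdsource-then-peer-rate loop: participants submit ideas, the platform redistributes under-reviewed ideas to other participants for rating, and ranking by peer ratings surfaces strong ideas. A minimal Python sketch of that pattern follows; the idea names, the 1-5 rating scale, and the fewest-ratings-first redistribution rule are all illustrative assumptions, not the actual California Report Card implementation.

```python
import random
from collections import defaultdict

# Illustrative sketch only: NOT the California Report Card's code.
# Participants submit ideas; the platform redistributes the
# least-reviewed ideas to the next participant for peer rating.

ideas = ["pave rural roads", "expand broadband", "fund clean water"]  # assumed examples
ratings = defaultdict(list)  # idea -> list of peer ratings (1-5, assumed scale)

def redistribute(ideas, ratings, k=2):
    """Pick k ideas for the next participant to review, favoring
    ideas with the fewest ratings so every submission gets
    peer feedback (an assumed redistribution rule)."""
    return sorted(ideas, key=lambda i: len(ratings[i]))[:k]

def rate(idea, score):
    """Record one participant's rating of an idea."""
    ratings[idea].append(score)

# Simulate 20 participants, each rating the ideas shown to them.
for _ in range(20):
    for idea in redistribute(ideas, ratings):
        rate(idea, random.randint(1, 5))

# Rank by mean peer rating: with enough independent raters,
# novel-but-strong ideas can rise above merely familiar ones.
ranked = sorted(ideas, key=lambda i: sum(ratings[i]) / len(ratings[i]), reverse=True)
print(ranked)
```

The `redistribute` rule is the lever she describes tweaking: changing which ideas a participant sees changes what they rate and how long they stay engaged.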

That’s when I really realized how much power technology can have to manipulate people’s behavior. We could show, with statistical significance, that we could influence people. And we’re just academics; the internet platforms are far more sophisticated at this. Now, much of the research we do at the Policy Lab focuses on policy strategies to mitigate the manipulation of individuals by large tech platforms.

Wow, that’s really fascinating. Switching gears a bit: you were a fellow at the World Economic Forum, the Aspen Institute, Schmidt Futures, and the Harvard Carr Center for Human Rights Policy. What did you learn from those experiences? Were there any commonalities among their approaches?

I was working on different things at each of them, but I think something that is consistent across their approaches is that they are all multi-stakeholder. The work that I’ve done with them combines viewpoints from industry, government, civil society, and academia. That’s really important with tech policy issues. You can’t create tech policy in a vacuum, informed just by policymakers or just by engineers. There are a lot of people really invested in making sure that we course correct, in many ways, with technology. This makes me really hopeful about the future.

You’re the founding director of the CITRIS Policy Lab at UC Berkeley. Can you tell us about the genesis of the lab and the work that it does?

Of course! As I mentioned before, I did my postdoc with CITRIS and I loved it. The people are great and the issues we explore are really impactful. CITRIS is uniquely positioned within academia, I think, to serve as a bridge between those who are building the technologies and those who are thinking about how they should be governed. We formally established the CITRIS Policy Lab in 2018 to leverage this capability. CITRIS is one of the largest organized research units within the University of California system, with about 450 faculty across 4 of the 10 UC campuses. The CITRIS Policy Lab taps into this extensive network to support interdisciplinary tech policy research across a variety of areas, ranging from computational propaganda (the use of AI and automation to influence public opinion on social media) to inclusive AI and digital identity.

Wherever there’s a thorny, emerging tech policy issue, we’re there.

Inclusive AI is probably where we do most of our work right now. We worked with the California state government to develop a report with recommendations to guide California’s AI strategy; that report is being reviewed by the Office of Governor Newsom. I’m also co-chair of the UC Presidential Working Group on AI, where we’re helping to establish best practices and guidance for how the university should handle its own procurement, development, implementation, and monitoring of AI in its provision of services. We don’t often look at universities as businesses, but they are, and they use AI in their operations. So, we’ve launched a working group with the University of California Office of the President to provide recommendations to all 10 campuses within the UC system on the responsible use of AI.

As an academic group, we publish research in peer-reviewed academic journals, but we’re very conscientious about making sure our work gets into the hands of decision makers, which means translating it into something that’s useful for them. We write policy briefs and memos for the public and private sectors. We also participate in briefings before the California Legislature and Congress, and we’ve testified in hearings. We review draft legislation quite frequently.

What policy issues do you think are the most pertinent over the next two to five years?

The first one is AI governance strategies. The EU has introduced the AI Act, and that’s going to set a global standard that will have spillover effects on the US. The US is obviously moving forward on this path with the National AI Initiative Office within the White House Office of Science and Technology Policy. So, AI governance is a huge focus. I also think digital ID systems are becoming more important with the rise of contact tracing. Everybody’s also talking about platform governance right now, and we’re seeing proposed legislation that would require greater transparency from platforms, like the US Social Media DATA Act, the EU Data Governance Act, and the EU Digital Services Act. So I’d say AI, digital ID, and platforms, and those areas sometimes overlap.

You’ve worked on so many impactful and important initiatives. What work would you say you’re proudest of?

I got to review the California Privacy Rights Act before it was put on the last ballot. Over the last three years, I’ve seen a troubling trend in which platforms are becoming more restrictive with the data they’re willing to share with third parties for research purposes. The role of platforms in society, and their effect on it, is too great for there not to be accountability measures. The EU is introducing legislation that will compel the platforms to make data available for research purposes. I reviewed a draft of the CPRA and added language that platforms should make their data available for public interest research. It’s struck out in the current version, but you can see what I wrote; it’s there. They actually changed the wording to expand the scope beyond public interest research, to encompass research more broadly, especially health research.

One of my proudest accomplishments is making sure that there are mechanisms in place to keep data privacy laws from inadvertently restricting data access for independent research.

Super interesting. What advice would you give to policymakers and technologists about how they can effectively work together?

There are programs emerging that are really effective at providing exactly this type of training. The Aspen Tech Policy Hub brings in technologists and teaches them about the policy process. UC Berkeley is hoping to launch a certificate in tech policy that will be available to STEM and non-STEM majors. You have to provide training to both sides so that they can speak to each other. You can’t make policy without the technologists, and the technologists need to know about policy, since it dictates how they can develop and deploy their tech. That’s why fostering a culture in which we can all work together is important. I think both sides see that they need to be working with and learning from each other. If technologists don’t work with policymakers, legislation and regulation will be proposed or mandated that completely misses the mark. If technologists don’t give input, they’ll be held to implementations that may not even be feasible. That’s costly for everyone involved: companies will have to fight it through lawsuits, and technologists will struggle to actually operationalize the mandates. In the end, it costs everyone more to not be involved in the policy process.

Yeah, I totally agree. Any advice you would give to people in the table community who might be interested in working at the intersection of technology and policy?

My advice would be that we need everybody at the table working together collaboratively to solve these emerging tech problems.

Don’t think that because you don’t have technical expertise, you don’t have anything to contribute. You do. You can comment on or provide feedback about how the technology may impact a certain group, or draw upon your experience in other areas. Everybody should take a place at the table with confidence about the immense value they can bring.

We’re in the position we are in with so many thorny tech problems precisely because there weren’t enough different stakeholders at the table. Get in there, get your elbows on the table.

Connect with Brandie on Twitter. Join the table, a community highlighting women in enterprise and deep technology, to receive interviews, insights, and resources right to your inbox.
