In the age of growing cyber-threats, how do philanthropic organizations have our back?

Interview with Eli Sugarman, Cyber Initiative Program Officer, the Hewlett Foundation

Roya Pakzad
Humane AI
Mar 8, 2018


In November of last year, the Center for a New American Security (CNAS) was a gathering point for prominent voices from technology companies, academic institutions, and government agencies, who sat down to discuss the implications of the AI revolution for global security. CNAS is far from the only think tank in the US that researches technology policy issues. Think tanks and academic institutions carry a heavy weight of responsibility for researching the implications of emerging technologies, and, financially speaking, many of these groups count on the generosity of philanthropic foundations. The Hewlett Foundation is one of the leading organizations in this field, having maintained its own Cyber Initiative program since 2014.

I sat down with Eli Sugarman, a former foreign affairs officer at the US State Department, whose passion for international security brought him from Washington, DC to Stanford University’s Law School, and now to the Hewlett Foundation as the program officer of its Cyber Initiative.

Eli Sugarman, Cyber Initiative Program Officer, the Hewlett Foundation

Roya Pakzad: What are some of the applications of AI in the context of cybersecurity?

Eli Sugarman: I think artificial intelligence and automation are going to change the world in many ways. When it comes to cybersecurity, one of the real challenges is that there are so many ways your organization can be compromised, so many ways that somebody can get into your networks to steal your data, surveil you, or carry out other malicious activities. So what you need to do is detect intruders and prevent them from accessing sensitive information.

When automation and AI come [into the picture], it’s really hard if you rely on people alone to do all that, because you can’t. When you have a complex organization with hundreds of thousands of endpoints and devices, you have to find an automated way. A specific example is the US Defense Advanced Research Projects Agency’s [DARPA] Cyber Grand Challenge. It was a “Capture the Flag” competition where you had to program your AI bot to attack a network, capture something, and exfiltrate it, while also defending and patching your own system against attacks at the same time. It was a purely automated offensive and defensive capture-the-flag contest.

So, I think one thing we will see is more and more of the basic and mid-tier offense and defense being automated. And I think that is going to be a huge sea change, one that leads to all sorts of unintended consequences and escalatory dynamics [between machines attacking each other] that are hard to predict. So that’s one example of how I think automation is going to change cybersecurity.

Roya: I’m glad you brought up DARPA, because it raises a question in my mind about the role of US government agencies regarding emerging technologies. Thinking about certain policies and data deregulation, for example with respect to net neutrality and data encryption, I can see a partisan pattern in US policymakers’ viewpoints on emerging technologies that is sometimes unproductive. The Hewlett Foundation funds some large think tanks in Washington, DC. For your Cyber Initiative program, how do you try to make sure your funding will lead to productive conversations and non-partisan policymaking?

Eli: There are a couple of points. One is that we are certainly non-partisan. The second is that we try to find the smartest people at the best universities and think tanks who hold different views, and fund them at the same time so they bring different approaches into the policy conversation. If you purposefully fund different perspectives, in theory you get a better outcome, because each of those views has to take the others into account and explain why it is better.

The third thing is that [cyber policy] issues are not as partisan as taking a “Democrat position” or a “Republican position.” You saw that recently in the US Congress’s debate over a surveillance law, where a mix of Democrats and Republicans supported, and a mix opposed, the specific intelligence program authorized by FISA Section 702.

So, in my mind there is a lot of education that still needs to happen for people to understand these issues. But how do you work through those tensions and trade-offs to find the right policy solutions? We honestly don’t know, but by being very open and transparent, and by funding different points of view that disagree with each other, we are trying to engage in that education.

Roya: When you look at the rapid advancement of technologies, it is obvious that governments and policymakers are behind in coming up with timely policies. You mentioned that there is still lots of education that needs to be done. How does your program try to address this?

Eli: What we are trying to do is find the resources and tools that policymakers need. We’ve done surveys asking policymakers what education would be helpful and tried to support the creation of that technical assistance. Some examples: we fund Stanford University and the Hoover Institution, a non-partisan think tank, to run what they call boot camps for congressional staff. The staffers come for four hours a day over three days and receive non-partisan, tailored educational sessions on cybersecurity topics. Likewise, we fund the Woodrow Wilson Center in Washington, DC to do something similar. So, say you want to know about encryption? Here are four hours with the former deputy director of the NSA, who explains how encryption works, why it matters for national security, why it matters for businesses, and the complexity of the issue. When you go back to your congressional office, you then have some knowledge, you have resources you can read and share with other people, and you know some experts you can call on to help you with your job. We have found these boot camps and training sessions really effective.

We also found there is a new class of written products that needs to be delivered. If you are a busy senior government official, you don’t have time to read a full book. We’ve funded an interesting project at the Carnegie Endowment for International Peace on cyber analogies [Introduction to Understanding Cyber Conflict: 14 Analogies], which gives you a 10–15 page chapter explaining how cybersecurity is similar to, or different from, some other thing you know very well, and that helps you understand it. How is cybersecurity similar to or different from drones? Or counterterrorism? Our grantees are looking for creative ways to tell stories and educate, beyond saying here is a 100-page report, go read it.

Roya: As you know better than I do, the effectiveness of policies and regulations is heavily dependent on constant measurement and evaluation. The field of cybersecurity is very new, and we are seeing rapid advancement in it because of the proliferation of AI applications. These changes and their impacts need to be measured. What can foundations do to provide fair, comprehensive, and non-partisan measurement and evaluation that leads to better policymaking decisions?

Eli: I think that foundations can do a lot to promote data-driven, empirical research. An example: another problem in cybersecurity is that you want security researchers to do research and find vulnerabilities in systems. [But] when they do, companies threaten them with legal action and say, you have broken the law by decompiling my code and finding this vulnerability. People talk about that being a problem, but nobody ever tried to quantify it. So we funded a project at UC San Diego to try to quantify how much legal risk security researchers face, which then helps answer the question of what we need to do about it.

So oftentimes we assume we have problems, but they have not actually been proven with data. That’s a huge problem when it comes to cybersecurity issues. If you want to measure how many cyber conflicts have happened, there is not even agreement about what people are measuring or who is tracking what.

If you look at data around cyber attacks, it’s really uneven. I think the whole lack of data and empirical research is a big problem. The real question is, at the end of the day, do the decision makers actually want that? Do they care? Sometimes yes, sometimes no, and that’s a whole other conversation.

Roya: When I think about cybersecurity initiatives, I still feel that people are not fully engaged in the conversation and the decision-making process. How can we improve their awareness and engagement?

Eli: Yes, the engagement is really uneven. One of the big problems is that there is no real, effective global governance system for the Internet or cybersecurity. You have ICANN, you have certain standards bodies that manage technical aspects, but in terms of policy and real rules to keep people safe, there is no global governance system. It’s not really clear who decides on what, and when. You have governments; you have the United Nations, which acts based on what member states want. It’s a huge mess.

Generally speaking, Internet users are not fully aware, are not fully engaged, and don’t always have a voice.

Roya: Has the Hewlett Foundation done any projects for public education?

Eli: A little bit. But our focus is not really on public engagement, because doing that well across a broad group of people costs a lot of money. We try to spend our resources on experimenting and figuring out new ways of doing really impactful research. Our focus is on educating influential elites: senior government officials, C-suite executives, journalists. That’s more manageable given our size as a grantmaker.

Think about the public awareness campaigns run to convince people to wear seat belts or not do drugs; they must have cost tens or hundreds of millions of dollars. I guess the closest we’ve come is funding UCLA to run a boot camp for Hollywood screenwriters and showrunners. They brought cyber experts to LA to help educate them so they depict these issues in movies and TV shows more accurately. And that [works] a bit indirectly to help educate people.

Roya: What are some of the challenges in your job?

Eli: Personally, what’s hard is saying no to people. There are so many worthwhile groups and organizations that want to do impactful work, and sometimes you just have to say no because we don’t have unlimited money.

More substantively, the challenge is that the community of people who care about cyber issues is pretty big, and also diverse, and also very siloed. Trying to find ways to build trust and bring those different communities and stakeholders together is really hard, because they typically operate in separate worlds with their own vocabularies and their own social networks. It’s really complicated and difficult.

Roya: And your final message to the readers?

Eli: These issues [broadly defined as cyber threats] are obviously not going away. They are going to become more important and more global. It’s great for people to think more and more about these issues and find the right way to get involved. Just don’t be passive and feel like you don’t have a voice.

We wrapped up here. This conversation was part of the interview series for my newsletter, Humane AI. I will continue talking with both policy and technical experts in the field of AI ethics in future installments. Tune in to hear their views on many issues, including computational disinformation campaigns, the history of technology and machine learning, AI for social good, and much more. To subscribe to the newsletter, click here.
