Augmented Intelligence: The Intersection of Human & Machine Intelligence in the Security Space.
The Full Interview with Chris Hurst and Munish Puri
This article is summarized on our blog at https://www.stabilitas.io/. The full article below includes a discussion on training and career advice for young analysts.
Chris Hurst: We’re excited to interview Munish Puri today. This is one of several interviews we’re doing with a single goal: to help corporate security teams (analysts, GSOC operators, Chief Security Officers) better understand some of the changes in the space and help them solve some of the problems they’re facing.
Munish’s Bio. Munish has experience as an analyst, trainer, and consultant for startups in security and intelligence, corporate security, risk consulting firms, and industry organizations. Munish is an alum of the Fletcher School and a board member of Women 2.0. It’s a real pleasure having you with us, Munish.
Chris Hurst: I first ran into Munish three years ago after briefing the Fostering Innovation Committee at OSAC. Members of the OSAC RISC team mentioned Munish as a thought leader in the commercial security space.
Munish Puri: Thanks Chris, this is really cool, I’m excited about it!
CH: Let’s start on a personal note: Fifteen years ago you transitioned from a pure business role to a research role in think tanks. Why did you jump into the international political space? What were your early motivations?
MP: That’s interesting. That was really early in my career, right at the beginning, and this is going to be a common theme, but it was people. I made most of my career decisions based on the people with and for whom I would be working, and that was where that came from. The intellectual reason was that I was looking at policy analysis and the intersection of political and security issues — nuclear nonproliferation — but I really got started because there was a very unique think tank based in Berkeley, California working on DPRK issues and issues in South Asia. It was really fascinating to me that they were outside the grid of Washington yet shifting the thinking and the conversation. I was introduced to them through people at UC Berkeley and the Asia Pacific Economic Cooperation, talked to them about what I was interested in, and started to find people with the same goals. That’s pretty much the way I have navigated my career ever since.
Tech + Humans
CH: One of our core beliefs at Stabilitas is this intersection of humans and technology working together as one. You’ve been working with cool tech as an analyst for more than a decade, on both sides: as a commercial analyst and then again as an analyst on the client side. What do you think most commercial firms — intel vendors — fail to realize about their clients’ challenges?
MP: That is a really good question. I think there are a few aspects to the answer. The first is that as an outside organization you’re part of a team. Your client has several resources that they’re using, and that’s not to say that you’re competing — I think that’s the wrong approach — but you’re part of a team, and they’re going to come to you for certain things. From a vendor standpoint you want them to come to you for a lot of different things, but that comes from your clients knowing when they should come to you, and trusting you on that decision; so that’s the first thing, being part of a team. In terms of the value that outside groups can provide for in-house teams, I think it really comes down to finding what they cannot. That’s really where the value is. If you’re summarizing or assessing something that your clients are also looking at and providing an alternative perspective, that’s somewhat valuable. What’s truly valuable is finding what they cannot. This is what I call “The Blindside Hit.” It is what separates the better outside consulting firms and vendors from the best: the ability to find something that’s going to impact an organization that the organization itself might miss. Now, why might they miss it? It’s not because their analysts aren’t good, and it’s not because they’re not asking the right questions; it’s whether your capabilities, your network, or your technology (ideally the combination of the three) allow you to identify something that might hit that business that they might not be able to catch.
CH: Can you give an example there?
MP: Sure, so let’s say there are deep web or dark web domains that a vendor might be monitoring, or certain political developments. Organizations are really laser focused on concentric circles. They start with “what’s going to directly impact us, that might name us or know of us,” and on the dark web this can be in the political sphere. It doesn’t really matter what space we’re talking about; the next step is identifying something that has hit other organizations. So if you’re working in financial services, what has affected some of those other institutions? That’s an easy one though, right? You would think, who else is in my industry? But other kinds of organizations like yours, high-profile organizations that have the same brand profile and risk appetite, that’s something an outside organization can spot even when those organizations aren’t in your industry. Something that has hit them could be an attack on their brand or reputation, or a targeting of their executives because of something that organization represents. For example, with U.S. interests or Western interests, an outside organization could say “here are the other companies that are in that same space,” and a particular financial institution might not think of those.
CH: This is kind of the reverse of the last question, but what do you think client security teams fail to realize about the tech that’s available to them?
MP: Two aspects there. The first is about data, and the second is about human resources and training. So the first: which data do you have access to? How do you have access to it? Who owns it? Those are all good questions, but that data is not useful without the partnerships to find and explore it. That’s not as much about the tech itself as about utilizing the tech: having your own internal data, and identifying who the partners are that are going to get you those sources and allow you to explore that data. Because really, insight from data comes from discovery (I am not a data scientist, but the ones I have worked with are really good at understanding that). It is about knowing what the patterns are, and with analysis you don’t always know what you’re looking for.
So that’s one, and the second is not so much about the tech but about how people use the tech. Of course you need the budget to get the tech, but also the training that comes with it. The really critical miss (sometimes, not always) is connecting methodology and technology. Knowing when an analyst should be using a specific piece of tech in their workflow is very important.
CH: So it sounds like you’re saying, say a vendor, whatever it is, crisis communications or intelligence, can’t just deploy a solution and walk away without ensuring that the training is there through some means?
MP: Yeah, and I think of that training more in terms of longevity. People think of onboarding as training, and then they’re done. Training to me is a sequence; there’s a longer-term arc there. Whenever I’ve tried to do training, people were like “okay, let’s do two days and we’ll be done.” My preference was always the opposite: I would rather do two to three hours at a time and spread it out over several months, because people’s ability to retain, and deploy, what they’ve learned increases when it’s spread out over time in smaller pieces. Spacing it out is the key to good retention.
CH: You’re known for training analysts and scouting tech. In our interviews with analysts, one of the most common challenges is showing ROI. Can you give some concrete examples of analysts really knocking it out of the park — really providing quantifiable ROI to their firms?
MP: The first thing I will challenge is that ROI is really hard in security intelligence. Demonstrating the return on an investment in minimizing surprise or mitigating risk is really difficult. That’s the first thing I would say. You can do it, but you have to have very clear metrics against which you’re measuring. The idea of ROI is worth it: what’s the payoff of the program and resources that we’re putting into technology? That is a good question to try and answer. How you answer it varies.
One example I can think of is when I worked with an analytical team and there were protests targeting the organization. This was in the earlier days of social media, so the example isn’t going to be particularly compelling for the audience, but it’s what came up and what the outcome was. We really thought about: if we’re able to provide some insight, where could we take that in the organization to actually have an impact on the way we address and assign resources for security? The organization already had an internal tiered system: where do we think there’s going to be a threat, how much prep have we given those sites, and how much extra security do we have to deploy to those sites? So there was already a framework our insight could fit into, where we could say “we think this is assigned incorrectly” or “there might be an opportunity here.” So that was a really clear part of it: if they’re able to get that information, what can they do with it? Where does it connect? Knowing that before you start the endeavor is pretty important. Not just where you can have an impact, but what other partners, what other stakeholders, can we take this to? Of course everyone thinks vertically, and the CSO absolutely has to be there, but in this case it was a multi-stakeholder effort to deal with these protests, because it was about brand reputation, labor relations, and other aspects. What we did helped them identify in real time, for all the sites, what was happening in each protest location. Again, this isn’t hugely amazing now, but it was geolocated around those particular sites. We were looking for two specific things: comparing the analytics of volume and frequency, and, based on the deviation, where we should draw attention. Not “there’s a lot of chatter here, and a lot of chatter there,” but where’s the deviation from what we expect as the baseline? Which sites don’t have prep, and which don’t have the ability to surge resources?
Lining those up, surfacing that there’s a lot of heat here or there while understanding which sites aren’t resourced and can’t absorb it, is key. That was one component, done with a mix of analytics and people. The second was that the protests went on for a couple of weeks. The questions we asked were: where do we see this compared to the last day, the last two days, the last three days? There was a heuristic that a human would have a difficult time overcoming, the availability heuristic: that’s what it was like yesterday, so it must be what it’s like today. Our approach let us see that a location was really hot yesterday and compare it to the last three days, to say, for example, this location is continuing to build up. It was about the deviation. I’m making that point, Chris, because that’s really hard for humans to detect, BUT when it is brought to them in a way they know what to do with, they can say “hmm, there’s something weird about that.”
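The deviation-from-baseline idea Munish describes can be sketched in a few lines. This is a minimal illustration, not his team's actual tooling: the helper names (`flag_deviations`, `prioritize`), the threshold, and the toy counts are all hypothetical, and a real system would feed in geolocated social-media volumes per site.

```python
from statistics import mean, stdev

def flag_deviations(history, today, threshold=2.0):
    """Flag sites whose chatter volume today deviates upward from their own baseline.

    history: {site: [daily volume counts]}; today: {site: today's count}.
    A site is flagged when today's count exceeds its baseline mean plus
    `threshold` standard deviations -- deviation, not raw volume, draws attention.
    """
    flagged = []
    for site, counts in history.items():
        base_mean = mean(counts)
        base_sd = stdev(counts) if len(counts) > 1 else 0.0
        if today.get(site, 0) > base_mean + threshold * base_sd:
            flagged.append(site)
    return flagged

def prioritize(flagged, under_resourced):
    """Surface flagged sites that also lack prep or surge capacity."""
    return sorted(set(flagged) & set(under_resourced))

# Hypothetical example: plant_a spikes far above its baseline, plant_b does not.
history = {"plant_a": [10, 12, 11, 9], "plant_b": [10, 11, 9, 10]}
today = {"plant_a": 40, "plant_b": 11}
hot_sites = flag_deviations(history, today)
```

The two-step split mirrors the interview: analytics surface the deviation, and cross-referencing with the sites that lack prep or surge capacity is where the human-framed tiering comes back in.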
CH: We’ve been talking about training. In what way is a Munish-trained analyst most different after their training with you?
MP: I don’t know that it’s anything in particular, or a skill that they’re going to learn, and that sounds like I’m putting myself down, but it’s not. There are two things I would like analysts to come away with. Number one, that their curiosity is rewarded. I think that’s what separates the truly excellent analysts from the merely good ones: curiosity, and being able to drive it somewhere. Every analyst is curious, but really being able to figure out how to do something with that is special. The second is that serious work does not require us to take ourselves too seriously. I really try to use humor and be personable, to show them who I am and to get to know them as people, because as much as we’re working in a world of hard security and intelligence, there is an important compassion that I think analysts can bring to their work that separates the best from everyone else.
CH: You’ve recently spent several years at a major bank as a senior analyst in Threat Intelligence and Technology Innovation. Before the call we spoke about the misconception that A.I. is replacing humans in the security space, when the truth is that humans are more effective working with machines (augmented intelligence) than humans alone.
Can you give a concrete example of “humans and machines” working really well?
MP: First of all, I’ll say that the notion of augmented intelligence, or intelligence amplification, I got first from J.C.R. Licklider, a computer scientist and mathematician working in the 1950s. That’s when A.I. was really coming up, and I am surely not the first to make that observation. His idea was that what we would now call A.I. can and should be a powerful tool for humans. It’s not that one will replace the other or that one is better than the other. Understanding that computers are good at one set of things (volume, scale, and computation) and humans are good at another (creativity and intuition) is really important. It’s a good question where I’ve seen it done well; I don’t know if I’m at the point yet where I can say I’ve seen it done well consistently, across cases. The reason is that the machinery has to be built around human capability. It’s not “let’s build a machine to do something, find humans who can work with it, and put those together,” but really: what is it that humans are trying to do, and how do we build, automate, or semi-automate aspects of that? I’m saying that because I realize that the vision of augmented intelligence comes from that intertwined capability. If you think about a double helix, the DNA of the technology and the organization, it is using the best of human capability and the best of technology.
Let me give some basic examples of how I’ve seen it evolve over time. First I’ll use examples from the consumer products space, and then I’ll talk about a few from the security intelligence space. Two immediately popped into my head: the first is around directions and mapping, and the second is around weather. They are both data driven, and both were really human based before. Maybe you had a farmer’s almanac, but the weather was still delivered through a weather person telling you, in auditory and visual fashion, “this is what to expect.” So let’s start with the weather, where there is a ton of data and an analyst briefing the audience: this is what to expect. Effectively there is one decision: do I need to do something different than what I did before, or what I do every day? Do I need to bring an umbrella? Do I need to wear a coat? Anything else? That’s effectively the decision you’re trying to make. We had the advent of the Weather Channel, and insurance driving that, and a lot of data coming around it, but now most people have an app. Some of the apps are so granular, down to “do I need an umbrella when I leave the house today? Is it going to rain when I get to the location I’m going to? What time is the weather going to change in a way that I need to pay attention to?” Really specific, granular kinds of decisions.
With directions it was the same thing. It used to be that the people who knew an area locally were who you would talk to about directions; that was always better. Then slowly the technology got better, and we had written directions and automated directions, and Yahoo Maps and MapQuest came along. The innovation there was that everyone had something they could immediately see: not only how to get there, but the different routes. Again, it’s granular. Where should I stand in the metro or the subway car to maximize my exit? At the front, middle, or back?
I know those two examples are really basic. But if you’re in the security intelligence space, a lot of your customers are using those apps, and that sets their expectations about forecasting in intelligence. One really accessible place where I have started to see the space innovate and lean forward is automated reporting around hazardous events, weather risks, and hurricanes. So, forecasting the direction of a storm: if it is going to overlap with one of these sites, an alert is automatically sent out to all the users in that area. The human analyst is simply watching to see that this has happened and that all those steps have executed. All of that is automated; there’s no processing or writing the report, none of that. What the analyst is watching for is: has the storm changed from the current pathway (either worse or better), and has everyone who got that alert seen the alert? Those are the two things the human is watching for, and if someone hasn’t seen it, why haven’t they? Then there are certain individuals the analyst would follow up with, manually reaching out to say “how come you haven’t seen this?” I know that’s a bit long winded, but I’m starting to see some of those elements.
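The automated workflow Munish describes (geofence check, automatic alert, human watching for unacknowledged recipients) can be sketched as follows. This is an assumption-laden sketch, not any real alerting platform: `AlertTracker`, the point-radius geofence, and the site/user names are all hypothetical stand-ins.

```python
from dataclasses import dataclass, field
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

@dataclass
class AlertTracker:
    """Tracks who was alerted and who has acknowledged seeing the alert."""
    sent: set = field(default_factory=set)
    acknowledged: set = field(default_factory=set)

    def send_alerts(self, storm_path, sites, users, radius_km=100):
        """Automated step: alert every user at any site within radius_km of a forecast point."""
        for lat, lon in storm_path:
            for site, (slat, slon) in sites.items():
                if haversine_km(lat, lon, slat, slon) <= radius_km:
                    self.sent.update(users.get(site, []))

    def ack(self, user):
        self.acknowledged.add(user)

    def unacknowledged(self):
        """The exception the human analyst handles: alerts sent but never seen."""
        return sorted(self.sent - self.acknowledged)
```

The design point is the division of labor from the interview: dispatch is fully automated, and the human's job reduces to watching `unacknowledged()` and following up with those individuals.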
CH: So the last question was kind of a positively framed question about humans and machines working well together, but of course there have been failures in the space, can you give an example of a failure there?
MP: There are a lot of them (laughs). Failures are fine; it’s when we don’t learn from them that it becomes problematic. So let me give you an example where I don’t believe the failure has been learned from. I can’t remember who said this, and I want to attribute it properly: “humans are the ultimate exception handlers.” I think that’s a really potent way to think about what humans should be doing, handling those exceptions. Going to the cybersecurity space for a moment, really thoughtful, sophisticated adversaries know that their attack methods are going to be mapped and investigated forensically in a certain way. They will repeatedly fail at a certain attack, intentionally, to set the algorithmic baseline that they are not a real threat. Then, when they come in with the real exception, something that’s different, it’s not perceived as a serious threat. That’s one example. The second is DDoS as a diversion tactic. I’ve seen some security information and event management (SIEM) software in the cyber realm handle these as “oh, that’s part of the DDoS traffic,” not knowing that the adversary is really burying what they’re doing inside that traffic. The adversary knows how a machine will read it and what the algorithm is looking for, and that gets us into this cat and mouse game. I think that’s where organizations really need to push themselves: figure out what machines can do that humans aren’t really good at, and vice versa, and focus on those things. I know that’s hard to do, so I’m not saying this is easy by any means, and failures aren’t a bad thing; it’s the failure to learn that is.
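The baseline-poisoning tactic Munish describes, intentionally "failing" to drag an algorithmic baseline upward so the real attack no longer stands out, can be demonstrated against a deliberately naive rolling-baseline detector. The detector below is a hypothetical toy, not any real SIEM logic, but it shows the mechanism.

```python
from collections import deque
from statistics import mean, stdev

class NaiveDetector:
    """Rolling-baseline anomaly detector: flags events far above the recent average.

    Illustrative toy only. Because the baseline is learned from whatever it
    observes, an adversary who feeds it repeated mid-severity "failed" probes
    raises the baseline until a real attack no longer clears the threshold.
    """
    def __init__(self, window=20, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, score):
        """Record an event's severity score; return True if it looks anomalous."""
        if len(self.history) >= 2:
            m, sd = mean(self.history), stdev(self.history)
            anomalous = score > m + self.threshold * max(sd, 1.0)
        else:
            anomalous = False  # not enough data to judge yet
        self.history.append(score)
        return anomalous
```

Against a quiet baseline of low-severity noise, a severity-10 event is flagged; after twenty intentional severity-7 "failures," the same severity-10 attack slips under the threshold. Catching that is exactly the exception handling Munish argues should stay with the human.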
Things I wish I knew as a young intelligence analyst
CH: I want to move on here to a portion of the conversation that could be useful to younger security professionals. At Stabilitas, founded by military vets, we think a lot about mission and about continuing that mission after public sector work. It’s interesting to me that you have experience across several sectors: education (Berkeley research), non-profit (Institute of Peace and Conflict), think tank (Brookings), intelligence providers, and more recently corporate experience.
Many younger professionals are looking for impact. Abstracting from the institutions themselves, can you describe where you feel like you had the most impact, and how impact looks different in the various sectors?
MP: Really good question, and a personal one. I think impact is very personal, so I might only be speaking to a portion of people here. Across the sectors, what is different in policy is that policy is about thought leadership: thinking articulately about complex ideas and communicating simply about complicated dynamics. That’s where influence comes in. That, and the ability to convene. So impact there is difficult to measure, but important, and in a personal sense I would also say intellectual depth. It doesn’t mean you have to be a specialist on a subject, but really understanding the ecosystem of players is important. From a think tank perspective and a policy perspective, you can get to know government, NGOs, intelligence, and the private sector. It really gives you a good perch on the different players in the space. Getting to know what some might call the “stakeholders” and others would call the “actors” in particular domains, understanding topics from all those different perspectives, and being able to convene the right people to have a constructive conversation is pretty powerful.
In intelligence and corporate security, I think the impact is being able to adopt and adapt the best from the public sector, from the intelligence community, and from the corporate environment. That’s challenging. I would not by any means say that I have always been able to do that; there have been times when I have, with the help of the people around me and by learning from them. In that environment, I would say cultivating a wide network is one of the ways to really feel like one has an impact. That’s what it was for me: I had to be able to think broadly in order to analyze narrowly and act specifically, and that came from having a wide network. I have also worked in technology, at startups and with tech companies. There I would say it’s a lot about meeting your customers’ needs, and to a certain extent anticipating them, but building them a really solid product is key. This goes for service providers too: providing a really awesome service, having your customers come back and say they made a decision in an informed way that they wouldn’t have made otherwise. For me, I will say where I have had the most impact has been when I have fully embraced who I am. I am an analyst, a trainer, a consultant, and a technologist. When I have been able to find that mix is when I have felt I had the most impact. That sounds egotistical, and I hope it doesn’t come across that way: I am doing the best that I can because I am the best version of myself. Cracking jokes with analysts, talking about technology and geopolitics with a group that I know is going to go back and take that to their organizations, is pretty meaningful. So for me it’s a lot about the people.
CH: I think there is a broader lesson there, whether you’re starting an organization or finding your place in one: finding people who are in roles where they can do the things they are good at, or, for oneself, finding the place where your strengths come together.
Let me ask you about education. You have training from the Fletcher School. If you were advising a young professional with aspirations to be a leader in the corporate security space, what advice would you give them about getting the type of training you’ve received?
MP: I am very grateful for the opportunities I have had to learn in a formal environment. For me, starting as an analyst was one of the reasons I went to work in India. I had been doing analysis around South Asia and needed to understand better what that environment was like on the ground. I was reading, reporting, and talking to people, but going and being there was critical. My biggest suggestion would be to get that international, on-the-ground experience. One of the pivotal moments for me as an analyst was being part of an election monitoring team, seeing what was happening on the ground from an operational perspective and a security perspective, and then reading how it was reported; the gap between those was so important. I know that for operators and professionals like yourself who started their careers on the ground, that’s a given. I guess I am speaking specifically to the audience that will grow up primarily as analysts: go oxygenate your analysis with experience.
CH: Just one more question on the education bit. Was there a course that really made a difference for you, one that you would recommend younger analysts consider?
MP: The first thing that pops into my mind: negotiations. One of the reasons I chose Fletcher is their focus on international negotiation and conflict resolution. I also took classes at the Kennedy School. I take that with me every day, into my personal life and my professional life. When you get into management, or program management, or accessing data, or anything else, negotiation is really incredible. It’s an art and a craft to understand what kind of negotiator you are and how to become a better one, and I really think everything is a negotiation. It has helped me incredibly, personally and professionally, so getting training in it was such a huge asset.
The Future
CH: So before we wrap I want to ask you a little bit about the future. At the beginning of the interview I mentioned the RISC team at OSAC making the intro to you. The quote was, “this guy really knows the future of our space.” You’ve been speaking for years on the future in our space. I’m going to ask you a hard question first. Where have you been wrong?
MP: That’s a really good question, and I think people should always ask it. What I would say, and what I always try to say, is that it’s hard to be held accountable for being wrong about the future, so there’s often no cost to being wrong. There isn’t that level of accountability. I am a huge subscriber to the Philip Tetlock school of good judgment and being accountable for our own forecasts, so I appreciate the question “where have I been wrong?” There are two rough categories: things that I have missed, and “bad calls,” calls that weren’t correct. Here is something I missed, along with much of the community. Specifically, around the Arab Spring, I was watching closely how the forecasts were changing about what the next countries would be, and there was this ever-changing list of “these will be the next three countries to fall.” I really bought into the notion (and thinking back, I can identify some of the cognitive biases that came up) that Libya and Syria would be fine. They had such a strong security apparatus and such a strong person at the center. I’m not happy to say that I was wrong about that, but I did learn from it. It wasn’t about the subject matter expertise or the regional expertise, per se, but more about where I was getting the information and what I was paying attention to cognitively. That has always stayed with me. It’s an example of the first category, where I have missed something.
The second category is where I have just been wrong, made a wrong call, and there are the obvious ones around Brexit or the U.S. election. I would say that for me, being wrong comes from putting a lot of emphasis on the wrong dynamics: not so much my call, but my sources and the data I was looking at.
With the U.S. election example, a lot of emphasis was on the data and the polling, and even though I was trying to hedge against its qualifications (538 did a good job with that, and tried to say “this is our margin”), I was still over-reliant on it. What did I learn from that? As I told you earlier, failure is not worth something unless you learn from it. What I learned there was that I had not been thoughtful about where I was going to use the data. I kind of just wanted to take in as much as possible and hadn’t really created a space to say “now I’m going to go do my own analysis” and match the two. People who know me know that I’m a huge advocate of using data for analysis, but as a qualitative analyst there was another perspective to bring (not picking on Nate Silver, I have deep admiration for him): it was more like “where is the perspective that 538 is coming from?” They put their data together in a particular way, and he is very open about that, but it’s something I think about when I think about making a wrong call. It’s the portfolio of things I put together, and I try not to focus too much on any one of them. Please tell me you’re going to ask me what I got right? (laughs)
CH: (laughs) Munish, that’s a good question. What did you get right?
MP: I will qualify this by saying it’s not so much that “I called this,” but more, in my world, “trends” that I’ve spotted. There are two aspects here. One is related to what we’ve been talking about: the use of machine learning and A.I. to assist, augment, and amplify human intelligence. I was fortunate to work at a startup where I had the opportunity to really explore that, to really understand what my value was as a qualitative analyst, and to work with data scientists. I knew what to ask and they knew how to ask it. I had already seen that starting to emerge, and that it was really going to change the way people would expect information. The second, also related to information, is that data would become an asset. I never made that specific call, but I identified that data can and should be valued like money. Our own personal data, the data that corporations build and gather, all of that data in aggregate has intrinsic value. I remember working several years ago on a project looking at financial inclusion, using data analytics for financial inclusion where there isn’t a lot of data, and it really brought up the question “what is the value of data? How do you build that?” So that’s one of the things I started to pay attention to early.
CH: With that in mind, how will corporate security look different five years from now?
MP: There are probably two things I would say, things I hope will be different. One is the use of visualization as an analytical tool: the best briefers use auditory and visual channels, and a lot of analysis will be done with data visualization. The second is that there is this division between physical security and corporate information security that I feel creates blind spots, which corporations’ adversaries exploit. Adversaries don’t think “what’s my physical security angle, what’s my cybersecurity angle?” Adversaries just look for weaknesses and exploit them. Setting up the people in charge of security with different lanes creates those blind spots, so I hope for an integrated security organization and, related to that, analysts who are more hybrid. I know there is the term “all source”; I don’t mean it in terms of source, but a bit in terms of domain. There aren’t going to be cyber threat intel analysts and physical security intel analysts; there are going to be analysts who understand the full range of threats. How do they assess risk from geopolitical to natural hazard to information and technology? They will really understand how to put together analysis that looks at all of those, because companies need to be concerned about all of them, not just one at a time.
CH: Just wanted to say thank you and I hope that this has been helpful for our audience.