Focusing on What Matters: An Interview with Justin Berman, CISO of Zenefits
Today on the show we have Justin Berman of Zenefits as our guest. Justin is currently the Chief
Information Security Officer at the company and we chat to him about what this role looks like in
the contemporary climate. For Justin, cyber security is a communal undertaking and this
community extends beyond your own company. The better the communication within
departments, companies, industries and even globally, the higher the wall of safety can be built.
We chat to Justin about how he got into the field, his approach to risk, his advice for the practice
at large and get some insight into his hopes for the future. Justin also breaks down his take on
the different roles of CISOs and how they fit into a staff as well as the centrality of this position.
All this and more, so tune in!
KEY POINTS FROM THIS EPISODE:
Justin’s studies, consulting work and path to his current role at Zenefits.
Calculating risk return for defense and attack and how Justin approaches this.
Why better general security at other companies benefits everyone.
Justin’s approach to defending against advanced persistent threats.
Why security needs to talk more about the less sexy sides of their work.
The hottest new strategies and technologies according to Justin.
The role and appropriate time for automation within a security protocol.
Zenefits' ambition for their security and how far this extends.
The role of CISOs in the conversation about security within a company.
Cultural change at companies and how this leads to sustainable security.
The difficulty in hiring currently within the security sector.
And much more!
LINKS MENTIONED IN THIS EPISODE:
Justin Berman Website — http://www.justinbermanphotography.com/
Justin Berman on LinkedIn — https://www.linkedin.com/in/jmberman
Justin Berman on Twitter — https://twitter.com/justinmberman?lang=en
Zenefits — https://www.zenefits.com/
FS-ISAC — https://www.fsisac.com/
Phantom — https://www.phantom.us/
Equifax — https://techcrunch.com/tag/equifax-hack/
FULL TRANSCRIPT OF INTERVIEW
Andy Anderson: So, I saw you just made the switch from New York to SF.
Justin Berman: I did.
I just did the reverse so how is your move going?
Good so far. I actually have my stuff finally. It took a while to get out to the other coast, but it’s nice to be actually fully moved in. I mean, SF is like a lot the same in the sense that it’s the other really big, or one of the other really big urban areas, so it feels kind of like one more city, but ultimately I feel like the … I mean the weather is objectively better.
You picked a good time of the year to do it. I will give you that.
And also I resonate a lot with the culture in SF. I like New York. I also like nature a lot so for me that was a really important vector.
Yeah. Walk me through, because I think it’s important to how you view the world now, is walk us through your pathway to your role now. What did that look like?
I went to school actually for computer engineering not for cyber security. I started as a developer. I disliked building other people’s ideas all of the time. It felt really … and maybe that was in retrospect now when I see developers or engineers, they get a lot more say in what they do, so maybe it was the company I started with, but that happened to kickstart me to wonder what else was out there.
A friend of mine was working at an AppSec consultancy called Aspect Security. They just got bought by EY. He invited me in, and he was like, “We think that we can turn developers into really smart AppSec engineers.” Then I started doing what everyone starts doing, which is like pen tests and code reviews, and then from there I moved up through the complexity scale in application security.
So, you know, I did pen tests and code reviews, and then I ran them. Then I ran sets of them at a time, and then I built out a computer-based training offering for the consultancy. Then I built out a strategic advisory service for the consultancy specifically focused on AppSec, and then eventually I got to the point where it was like, “I’ve consulted for a long time, and I feel like our clients keep making the same mistakes. I don’t understand why, and I feel like it must be our advice.” If it was only a couple of times that it happened I’d be like, “Oh, that’s a client problem.” But if clients consistently fail to implement what you suggest, there’s a gap there in what you’re suggesting.
I decided I really had to go in-house to actually understand what it was like to run in-house. I went to a hedge fund, started as an architect, but ended up running their security architecture function. That taught me a lot about what the difference between being in-house and being a consultant is like. It taught me that being a consultant is my retirement choice. It is way easier to be a consultant than it is to be in house.
You should do that.
Then when I left the hedge fund it was for a head of security role, or a VP of info sec role at a startup here in New York called Flatiron Health. They were great. I got the chance to build a team absolutely from scratch from a really early point. Luckily for me the first startup I worked for cared about security, and they cared about it enough that they hired it very early. I was only the 60th employee there.
They also gave me a lot of freedom, because I tend to approach problems from a very deep analytical mindset, and so they gave me a lot of freedom to build the things that I thought were necessary, and to hire the team I thought was necessary. So, that also taught me a ton about hiring when you don’t have hedge fund money to throw at people, and how you make the right decisions there, and how you build a team that is as efficacious as possible when you don’t want to scale to a huge size.
Then from there I spent three years at Flatiron, really building out the program and the team there, and when I left it was because I felt that I had accomplished most of what I wanted to. I had created a relatively stable security organization that was moving from a transformative, grow a lot, change a lot, change the company a lot, towards a more stable operational world. I just know about myself that I find the 200% change interesting and the 2% change a little bit more monotonous so I tend to optimize, and I think I’m better at working within that transform this, build this from scratch world.
That was why I went over to Zenefits, because Zenefits really needed … they’ve done a lot of work to transform in a lot of ways at the same time, and part of what they wanted was to transform the way they approach security. They’re willing to put the money into it. They’re willing to put the resources behind it that make a difference.
That was a great answer, really. To pull one thread out of that sort of thinking, you sort of mentioned not kind of having hedge fund money.
Most people who are not a hedge fund have to make this sort of risk return calculation. One of the interesting things that I thought you brought up in your talk was to do that calculation for both the defender and the attacker.
Walk those who weren’t able to sit in on the talk through how you do that, and what goes into that sort of calculation.
Sure, so I think a lot of what I think about when I think about that kind of cost math is scalability. On the defender side first, what I look for is … what I try to know is lots of different paths. When I think about my adversaries, I think about how each one of them has a set of different paths they’re gonna take to try and achieve a set of objectives that they have against my organization, whether it’s stealing money or data or DDoSing us or whatever. Any one of those I can evaluate, and when I look at a lot of them at once, or ideally try all of them at once, I can look for commonalities.
So, for the defender, you look for the commonalities of where is a part of those attacks that are being used against you that is across many different kill chains. If you can defend against that, that’s a scalability question. You have essentially … you can save a lot of money by saying, “I’m going to focus on this thing.” As we talked about, phishing is super common as a delivery mechanism, and so because phishing is so common, targeting that is very scalable for a defender.
On the attacker side, the same thing is what to think about. They want attacks that scale, and they also want attacks for which … or rather, there’s actually two aspects that I think an attacker cost [inaudible 00:07:01] should consider. One is they want to choose attacks that scale effectively, reliably as well. They can use phishing because everyone has email and everyone uses email, so their targets are ubiquitous. Likewise, they want an attack that you know it’s not gonna hit everyone all the time. You’re gonna have a 1–5% click-through rate on phishing or whatever, assuming you’ve done any training in your organization, but the other side of that kind of attacker cost thing is for us to consider, for a defender to consider, is what’s expensive for them to change?
If you think about phishing, phishing is very known and understood, and you can build tools around delivering phishing attacks, but if phishing attacks no longer work, like let’s say for some reason they’re just totally ineffective, then they have to figure out a whole new mode of delivery for their attack. They have to move toward something like a watering hole attack like we talked about in the talk, or some other target or delivery mechanism. That’s expensive for them, so when I think about the math, to create the highest return on investment in risk reduction, I’m looking for the places where I can break lots of attacker chains at one time, and it’s going to be difficult for them to actually change to then bring me back into their target set.
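The commonality math Justin describes can be sketched in a few lines of Python: count how many distinct attack paths share each step, and the most-shared step is the highest-leverage place to defend. The kill chains below are invented examples for illustration, not real intelligence:

```python
from collections import Counter

def most_common_steps(kill_chains):
    """Rank attack steps by how many distinct kill chains use them.

    Defending the most-shared step breaks the most chains at once,
    which is the scalability argument Justin makes for targeting
    something like phishing.
    """
    counts = Counter()
    for chain in kill_chains:
        counts.update(set(chain))  # count each step at most once per chain
    return counts.most_common()

# Hypothetical attack paths against one organization:
chains = [
    ["phish", "macro-dropper", "c2-beacon"],
    ["phish", "credential-stuffing"],
    ["watering-hole", "browser-exploit", "c2-beacon"],
]
# "phish" and "c2-beacon" each appear in two of three chains, so defenses
# aimed at either break the most attack paths per dollar spent.
```

Ranking by shared steps is only the first half of Justin's math; the second half (how expensive is the step for the attacker to replace?) stays a judgment call.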
I mean, and tied with that is sort of a corollary of making yourself look different than the crowd, right? Like thinking about yourself as: what does your system, your infrastructure, look like relative to somebody else’s? The old joke about the two hikers and putting your shoes on. They’re gonna attack somebody. It’s just, who’s that gonna be? Who’s the slowest person in that pack?
I think that’s … you know, it’s interesting. I agree with you, but I think this is part of the reason that we need more intelligence sharing, because I want to raise the bar not just for my own organization. If I have enough intelligence sharing spread throughout a wide enough community, then what I’m doing is forcing the attackers to pivot. If you imagine for a second a world in which everyone somehow, impossibly, immediately shares all intelligence that they get, that means every time an attacker uses a piece of malware, once it gets detected, the entire world is immunized against it. Then they can’t use it anymore.
The same idea works for phishing or any other part of an attack. At the same time as I agree with you completely that I just don’t want to be the slowest hiker when the bear’s coming, at the same time what I really want is the network effect of helping everyone, I don’t know, put a tree in front of the bear, because ultimately … or put a tree there for us to climb, because ultimately, I want to make it expensive for attackers to make mistakes, a lot more expensive than it is right now.
You must have been listening to Paul Vixie or something. In practice, how are you trying to do that, or implementing that in practice? Is there a way that you’re kind of finding to share with the community or benefit from that?
I think that there are formal and informal channels for that. The formal channels that exist for that are things like Facebook’s ThreatExchange. There’s the ISACs in the world, like FS-ISAC for financials, and HS-ISAC for healthcare services. Those have, in my past, certainly been effective channels for information sharing.
When you’re a startup, it’s interesting, because usually the ISAC members that contribute the most are these huge banks or these pharmaceutical industry companies or big hospitals, so their overlap of interesting information won’t necessarily be that great, but it’s certainly a way for you to promote that information exchange effectively.
The informal channels end up being the highest value though, where you’re getting beyond something like sharing IOCs, like hashes and stuff, and towards like, “Hey, we see bad guys doing X thing. We see an uptick in targeted spearphishing against our CEO that takes this form.” I wish there was some way to scale that more, or rather, I don’t know the right way to scale that more effectively, but for me, it’s just the personal relationships I have with other CISOs and CSOs.
When you sit in a Slack channel with a lot of other CSOs every day, and you trust these other people, then when something comes up and you have the conversation, that immediately spreads that knowledge through the group.
Yeah, I mean it’s … I was having a very different conversation with some ex-professor at University of Washington thinking about kind of threat, threat modeling, and sort of trust models, like how you … I mean, he’s got a whole system with I think it’s Trident is the open source system. Are you familiar with that at all?
I’m a fan.
Where it’s essentially how you create a community where you have trust. That typically breaks down at some sort of … once you start to hit triple digits, right, in those communities, and even getting that big is pretty hard, so it’s a really interesting model where basically you don’t let somebody in unless three or four people vouch for you. You have to say how well you know them. It was fascinating, but I’m still looking for it. That’s why I keep asking.
So when you find it …
It’s a hard problem, especially right now, because there’s so much belief that … if you think about the defense industrial base, they would be right in saying any data they release that is non-public in nature has the potential to allow, say, a foreign state to make different decisions about attacking the US government or something like that. But in our corporate space, there are the people like Facebook, or Google, or Microsoft, or Merck, or Citigroup, that have that massive target set. Yes, they legitimately should be careful about what they share and what they don’t, but then there’s all the rest of us that are being attacked day in, day out, often by common adversaries. In those cases you’re more of a target of opportunity than you are the target, so we should feel freer to share that information than we probably do right now.
It goes back to this culture of, I have to protect everything I know. I have to protect everything that anyone else knows about my security program, because if bad guys know about it, then they’re going to be able to attack me more effectively, when in reality, if an adversary has decided that you’re important, they will research enough about you to have an idea about your security program anyway.
Yeah, I mean, if you truly are the target of an APT, what are you gonna do?
I mean, if you’re truly the target of an APT, then unless you have enough resources, or a small enough attack surface, an APT will be successful, because ultimately if an APT really wants to come after you, and they are willing to spend the resources on it, they’ll just trick you into hiring the wrong person. When are you ever gonna defend against that? Background checks are ineffective against spies.
Did you, I don’t know, I’m sure you’ve followed it with more detail than I have, but I was listening to the interview with one of the Anthem guys who was talking about that attack, and essentially the realization that it was likely the Chinese, right? That in fact there were two APTs going on, likely from the Chinese. He’s like, “We blocked one, and we didn’t realize that there was a second.” That level of … I mean, being in the space that you are with some very sensitive data, how do you think about that level of adversary? Don’t feel like you need to share anything that’s-
No, I understand.
I think that the honest truth around thinking about defending against APTs for organizations that are not remotely at the resource levels to be able to do so effectively is you will take the best effort … you’ll make their lives harder, and that should be your goal, like to make yourself just … this gets back to the slowest hiker versus bear thing. Whereas I can immunize myself and help immunize the community against commodity everything, it’s much different to try and stop customized, targeted activities against you, because frankly they rely on a level of research about your organization that is not scalable. That’s an accepted choice by that group if they’re actually gonna target you.
A thing I would … maybe the really blunt way of saying this is the problem I have with focusing on and talking about APTs for everyone that’s not a Citigroup is oftentimes, it is techno-fetishism. It is like, “I want to believe that I’m important enough that I have to do this right now,” and that all the tools and technologies associated with it are cool, and interesting, and popular, and so I’m gonna focus on this because of that technology aspect of it. As opposed to recognizing that you may still have significant gaps in vulnerability management. If you can’t defend against a Day 2 attack, what exactly do you think you’re gonna do about a zero-day?
We’re in the media space, although we’re a little bit different than some, so we recognize that Road & Track puts the Ferrari on the front of the magazine. They don’t put the newest Escort.
It’s like, sweet car. It’ll get you there.
But that’s a great point. What we need … ultimately, I think part of the issue with the way security people talk in general is the fact that they want to talk about the Ferrari, because it’s sexy, and not because it’s important. If they were able to distinguish between the attractive and the important, then you would see the Escorts on the cover, because the people that we would be lauding are the people that deliver security day in, day out, reliably, and stop the majority of the things. That’s ultimately where most security programs need to be aiming, rather than aiming at, “I can stop the Chinese,” because no, you can’t.
Yeah, and just to truly beat a dead horse on this analogy, the one thing that I think is interesting is when the technology, you know, the newest carburetor, is developed for a Ferrari, and then slowly migrates down to a mass-produced, available technology. How does that process happen? As you sort of look at the landscape and think about technology that’s out there, what’s … we like to, as I mentioned, we like to talk about what might work, or what is working. What technology, what sort of new strategies are you seeing that are cool and interesting?
Cool and interesting, so for me, I think the areas that I’m the most fascinated with right now have to do with how I can … Ultimately, it comes back to a team efficiency thing. I love orchestration stuff. This is my area of technological interest. The reason I care, though, has to do, I think, with practicalities. I’m sure some people would argue that I’m not practical enough about it, but I think about tools like Phantom Cyber, famously. There’s lots of competitors for Phantom, and when you can take all of the grunt work that gets done day to day, the work that is ultimately automatable, that doesn’t really require a person’s intellect and experience, and then do the automation behind it. Orchestrate those things to happen in a way that is stable and reliable. Not only have you increased the quality of your security program, because you know a thing that you need to happen is going to, but you’ve also freed all of these really smart people to work on much more interesting problems.
Smart, expensive, hard to find people.
Right, and so for me, if I had to choose a single area of focus, it’s not the latest AI or ML stuff, it’s how do I scale my teams more effectively right now. For me, scaling my teams more effectively is about finding ways to take the work that is ultimately more grunt work and requires less of their intellect, and automate that.
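As a rough illustration of the orchestration pattern Justin is pointing at, here is a toy triage playbook in Python. Everything in it (the alert shape, the reputation lookup, the threshold) is a hypothetical sketch; commercial SOAR tools like Phantom express the same idea as configurable playbooks rather than hand-written code:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    kind: str                                    # e.g. "phishing_report"
    indicators: list = field(default_factory=list)
    actions: list = field(default_factory=list)  # audit trail of automated steps

def enrich(alert, reputation_db):
    """Automatable step 1: look up each indicator's reputation score (0-100)."""
    scores = [reputation_db.get(i, 0) for i in alert.indicators]
    alert.actions.append("enriched")
    return max(scores, default=0)

def triage(alert, reputation_db, threshold=70):
    """Automatable step 2: contain clear-cut cases, escalate the rest.

    This is the grunt work Justin describes: the repeatable lookups and
    containment actions happen automatically, so analysts only spend their
    intellect on the ambiguous middle.
    """
    score = enrich(alert, reputation_db)
    if score >= threshold:
        alert.actions.append("quarantined")  # e.g. pull the mail, block the domain
        return "auto-contained"
    alert.actions.append("queued_for_analyst")
    return "needs-human"
```

The `actions` audit trail matters as much as the decision: an orchestrated step that leaves no record is one you can't verify ever ran.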
Yeah, and that’s such a … there’s always a tension there, but it’s really interesting. It’s like if you’ve done something three times, four times, five times, you should be thinking about how do I automate this, because then once it’s automated, you know, consistently it’ll be done the right way. But then you always worry if nobody’s actually doing any of this, then we forget what we actually did, and a year from now, oh, whoops, we left … you know, something changed, and the decisions we made in that.
Totally, but what you just said is exactly why I focus on, how do I create signals around stuff that I automate? I don’t just mean “this server didn’t fall over” signals. What’s a good example of automation? Oh, a great example of automation in the AppSec space is, I want to automatically run a bunch of stuff in the tool chain for deploys. I want to run static analysis tools, or dynamic analysis tools, or custom-developed linters, or whatever else in that tool chain automatically.
If a new vulnerability emerges, that awareness … or that’s a piece of intelligence for my program, and that is like, oh, wait. I have to make sure that either we can detect this. I have to test, and thus I have a signal about whether we do detect this thing or not. If we don’t, we have to then do something about it.
What’s key is you have to build signals beyond just, the thing is still running, into the process.
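Justin's AppSec example (run analysis tools in the deploy toolchain, and build signals beyond "the job ran") might look something like the sketch below. The choice of Bandit as the analyzer and the severity-based exit logic are illustrative assumptions, not a description of Zenefits' actual pipeline:

```python
import json
import subprocess
import sys

def run_static_analysis(target_dir):
    """Run a static analyzer in the deploy toolchain and parse its findings.

    Bandit (an open-source Python analyzer) is used here purely as an
    example; any SAST tool with machine-readable output fits the pattern.
    """
    result = subprocess.run(
        ["bandit", "-r", target_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout or "{}")
    return report.get("results", [])

def signal(findings):
    """The 'signal' step Justin stresses: don't just run the tool, report
    what it found so a human or dashboard sees coverage change over time.
    Returns a nonzero exit code when high-severity issues are present."""
    high = [f for f in findings if f.get("issue_severity") == "HIGH"]
    print(f"static analysis: {len(findings)} findings, {len(high)} high severity")
    # In a real pipeline this would also emit a metric or notify the AppSec team.
    return 1 if high else 0
```

Wired into CI as `sys.exit(signal(run_static_analysis(".")))`, the deploy now carries a signal about whether detection is working, not merely whether the job executed.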
To sort of expand on that, and something that I heard you talking about, was thinking about how security is a part of the culture, but also the lifeblood of what you’re doing. Walk me through where you see security fitting into the core business at Zenefits, but in general.
At Zenefits I think I have this amazing advantage, because my CEO, Jay Fulcher, is incredible and really gets it. I think a lot of our executive team really gets it, but ultimately the posture that we want to aim at at Zenefits, and the way we fold security into that conversation, is that security becomes a competitive advantage for us. We don’t want to have the conversation with customers that’s just like, “Oh, we’re safe. You can trust us to hold your data.” We want to have the conversation that’s like, “This is what it takes to be safe in this world, based on the fact that you’re trusting us with your payroll. You’re trusting us with your HR data. You’re trusting us with your employees’ background checks,” all that kind of stuff.
We want to have the conversation with customers that is more around how do we build … I guess this isn’t so much with customers, but internally, the conversation is, “How do we build security into the DNA to the point where it makes sales easier, where it makes us win deals that we wouldn’t have won otherwise, where it ultimately, in some ways, allows us to box out our competitors because they have worse answers?” I think that that sounds very almost monopolistic in nature, but it’s not. It’s an admission that we do security well. Our competitors should have to do that. We want to raise the bar around it.
The second area of really changing the conversations with our customers is not stopping at the point at which you say, “We’ve built a security program. We built safety,” but how do we build capabilities for our customers that keep them safer? The difference between … an example is, companies are typically not liable for when one of their customers loses their passwords, but if you take the time to build a relationship with a customer, you find when their accounts have been taken over. Then you have that relationship with them, where you reach out, and you help them recover their account and everything else, that’s a much different conversation to have with your customers. That’s taking security and making it a selling point, as opposed to simply having security be an expectation.
Yeah, and I mean maybe … it seems like over the last kind of couple of years, the awareness of potential sort of issues around cyber security is … maybe it’s the election hacking, or Equifax, or whatnot. When my grandmother, who is 94, is asking me about Equifax, I know it’s permeated in the culture.
It definitely has.
It sounds like you’re lucky enough to be in a place where the very top of the C-suite gets it, but it’s as interesting as always to study the relationships that work as well as it is to study those that are broken. What do you think in that relationship? How do you sort of talk about cyber security as an issue, as a budget item? Those conversations are happening for CSOs everywhere.
Sure, so I think one of the biggest challenges that CSOs have right now is that they’re still kind of in the majority shoved into this place of being seen as purely a cost center for the organization, similar to how most organizations see corporate IT as a pure cost center. When you talk about it, you can’t allow yourself to fall into that mindset, where someone is setting a budget for you and you’re falling within it. I think there’s a level of proactivity that you start with, or you have to move towards, and yes, you’re gonna see pushback from the C-suite, or you’re gonna see pushback from other managers or whatever else. But if you as the leader of an entire security function start by saying, “I’m gonna define the budget that we should have,” and then I’ll acknowledge that, to myself, I’ll acknowledge I might not get everything I want, but I’ll at least-
You’re thinking hedge fund money, right, to start. You go there.
I’ll at least define the budget that I think is really necessary. To your point about hedge fund money, it’s really interesting. Different companies have such different cultures about setting budgets, where sometimes you have to build in padding, just because the CFO no matter what is gonna cut. Luckily, the CFO we have at Zenefits is very … He likes to be really precise, I think, about what is necessary and why. He’s not just taking a flat cut. He’s trying to understand the business impact to the firm of different projects that are gonna happen, so that for me is great, because that allows me to have a conversation with him. I happen to also be lucky that he cares about security, period.
To come back to your thing about culture, changing culture at the firm, or changing culture at any firm, really, is a lot about reinforcement more than anything else. You have to reinforce with your C-suite, this is important, this is why this is real. The, here’s how I’m showing you this is real. Here’s what happened to, you know, our competitors. Here’s this breach and the impact that it had.
Some of the breaches that have happened recently, like Equifax is a great example, actually. Equifax when it happened cost the world something like $4 billion of market cap. It might have been more than that. It might have been more like $6 billion of market cap or something like that. That’s like, whoa. A third of investor value disappeared off the face of the planet, because of this breach, and the fallout from it.
And the CEO goes, and the CSO goes, and yeah.
That’s actually … the CEO and the CSO going thing, well the CSO going thing is expected almost.
But the CEO going, I think that certainly get people’s eyes open.
Right, and we see a lot more pressure on CEOs to take responsibility for security, and that’s certainly impacting their view of it. I can tell you that Jay definitely feels a degree of personal responsibility for it, and thus he cares and interacts more about it. Convincing a CEO that their job is at risk is a very dangerous position to be in, because you coming to the CEO and saying, “You should care about what I do, because if I do it badly, you could get fired too,” is a really hard conversation.
But I think that the biggest part … It’s also important to recognize that some culture is built from top down and bottom up. The bottom up part especially is repetition, repetition, repetition, and building in this … I think building in the opportunities for security to positively affect the company’s culture. Instead of just security being a no factory, security has to be a, “We’re going to move forward together, and we’re going to do these things,” or, “How do I make it easier for you to move faster while we’re safer?”
A great example of that is where security has the opportunity to shift away from token-based 2FA towards push-based 2FA. That’s amazing. I mean, one of the most powerful pieces of security tech that you can deploy in any organization is two-factor, or for that matter, single sign-on. You want to make a bunch of people in a company happy? Tell them they have to remember two passwords instead of 200, and they’re gonna be like, “Oh my god, my life is so much better right now.”
Eventually it will fade into the background, but it fades into the background as part of the culture of the company. Like, “Yeah, we don’t have to remember all these passwords,” but it becomes like a … if someone wants to buy a tool where someone has to remember a new password, they’re like, “Why? Why can’t we just use this thing that we use to sign in?”
Ultimately, that’s cultural change that happens, and that outlasts anything technologically that you will build.
So much good stuff. I feel like we could sit here for an hour, right, but we’d have to start buying beers, because you’ve been chatting all day. I’ve been firing stuff at you. What do you want to talk about? Here’s an open platform. If you have a small soapbox to stand on, right?
If I have a small soapbox to stand on right now, it is about the fact that hiring is so hard right now, and it’s so hard for a variety of reasons, part of which is because we are not open minded enough about people right now. We all want the same set of people. We all want the person that’s done this security thing for five years. I’ve seen job descriptions for technologies that haven’t been around for five years in which the expectation is 10 years of experience. The open mindedness towards non-traditional hires has to increase, and for that matter, the willingness to train has to increase.
Maybe the broader version of the soapbox is one of the biggest challenges that security has, and one of the reasons I think people leave a lot, because 18 months is the standard tenure for a CSO. I would argue it seems to be even shorter than that for your really good security engineers. One of the reasons I think that’s true is because we have promoted a new generation of managers who are predominantly technologists so quickly, and not invested the same level of management training in them that companies would absolutely invest in other places.
So you have under-experienced managers trying to retain the most sought-after staff on the planet, or some of the most sought-after, anyway. Data scientists are probably just as in demand as us. You also have managers that don’t know as much about how to manage, and so they’re creating a negative opportunity for these employees. They don’t know how to create a career path for people. They don’t know how to navigate some of the complex social and psychological issues that staff have, so for me, one of the reasons I think I am even successful is because I care a lot about management. It sounds weird, but at the same time as I put a lot of emphasis on being an effective technologist, and being a technology leader, part of being a good leader, period, especially when I have staff reporting to me, is just deeply caring about people.
On the one hand, we have all these semi-inexperienced managers that have a lot more churn, because they just don’t know how to navigate some of the issues and create paths for people so that they can see the long-term, this-is-how-my-career’s-gonna-go picture. At the same time, we have some of the most in-demand staff on the planet, and so they are constantly being offered a chance to make incrementally more, or a lot more. That by itself is just an awful confluence of circumstances. I think companies have to start investing in their security leadership the same way they would invest in other business leaders.
That was great. I mean, you’re preaching to the choir here. That transition from player to coach is such a challenging one, and as you become more senior, it’s less your own sort of individual sort of work and contribution, and more how do I get the best out of all of these people and that’s not an easy transition for a lot of people I think.
I totally agree about that. I think there’s also such a … this is true across lots of jobs, but security is just one more of those, but we still largely teach people that the way to move forward in your career is to manage people. I deeply think that some people like that, and some people don’t, and we have to create a path for people that do not want reports, that is just as rich and rewarding and complex, and compensated as the people who have hundreds of reports at the end of the day, because ultimately the impact that a really phenomenal engineer has can be just as high as what a director or a CSO has.
Yeah. They’re the rockstar players, the Michael Jordans, the Steph Currys, right? As important as Steph Curry.
In those cases, he makes more money. Anything else? Anything you want to pitch? Anything you want people to know about?
Yeah, come join my team. We have really smart people, and I’m expanding it significantly. I’m always happy to talk with smart people, even just to learn from other people. Zenefits is hiring.
Yeah, and we’ll include some links and stuff to your site and any of the positions you have and whatnot there so you can share that. Having watched … taken the half hour it took to get you into this interview, and seeing how nice you were with all the people who came up and had questions, I can vouch that that’ll be a good experience for them.
Cool. Thank you so much.
This was awesome. Really terrific.
Originally published at cybersecuritydispatch.com.