It’s Time To Do Infosec Differently: A Manifesto

TProphet
8 min read · Mar 26, 2015


Update, one year later: I am pleased to announce the launch of PCPursuit, a company born from this Manifesto. We are backed by Mach37, the top information security accelerator in North America.

Two days, two conversations with really smart Seattle hackers about essentially the same thing. Information security is a multi-billion dollar industry. Unfortunately, most of the time the products it sells aren’t used effectively and amount to just so much closing the barn door after the horse has already left. Logs, alerts, endless haystacks in which to find discarded needles that can belatedly tell you that you’ve already lost.

Sure, there is stuff out there that can really screw you. For example, yesterday I watched Falcon, a fairly brilliant hacker — one of the smartest guys I know — walk through a theoretical x86 assembly language attack that targets not the operating system, not the BIOS, but the CPU itself. It does so undetectably and cannot be stopped by any modern antivirus software. In fact, to even begin to stop it, you’d effectively need antivirus software that could perform dead-accurate branch prediction. That’s sort of possible today, if you don’t mind a 90% performance overhead. Basically, if the attack is proven (and if this attack doesn’t prove out, another one will) and you’re running anything important on x86 architecture today, just stop. Game over. You’re completely screwed.

But I’m not talking about that. You know, the minor makes-everything-completely-fucked exploit that renders the rest of what I’m going to write almost irrelevant on x86. I’m talking about almost the entirety of the information security stack being reactive. This doesn’t work, it’s not tenable, and it actually gives people a false sense of security because we forget that actual humans are involved who need to get real work done.

Sure, in IT, we try to be proactive. If you know me, you know I write a column called Telecom Informer in the fictional character of a Central Office employee. So, I’ll put myself in those shoes for a moment, just for purposes of illustration. At our fictional Central Office, we employ a number of consultants from a Chinese telecommunications equipment manufacturer. These folks are shady, and they’re quite possibly up to no good. So we put them into a special domain, watched it carefully with an IDS, and enforced an “everything closed by default” security policy where we had to specifically open up permissions to specific resources for a specific business reason. We even locked down the USB ports on these users’ machines, and they had no access to the wireless network. So, when we got done building this, it was really secure. The risk was mitigated. Our information security team patted themselves on the back and congratulated themselves on a job well done. They went off to have a beer to celebrate.

A week later, after finally jumping through all the IT hoops to gain network access, our Chinese consultants found themselves unable to actually configure any of the equipment we bought, because we’d locked them out of being able to access it. And there wasn’t any way to grant network access, because the secure network they were on didn’t allow it. But that’s OK; this vendor is used to dealing with situations where they can’t gain direct network access. The equipment can be configured via SNMP, and they have a handy software tool that does a scripted install. It just needs to be run on a computer with administrative permissions and access to the correct network. So, facing the pressure of a deadline, the manager who purchased this Chinese equipment — and who had proper permissions on the network — does the only logical thing. He installs QQ at the urging of his Chinese colleagues, downloads an MSI from some random guy in China, right-clicks it and clicks Run As Administrator. And it works! All the gear is magically set up, configured properly, and runs beautifully. Because it’s buried in the CALEA functionality, nobody even notices the backdoor that the manufacturer opened in this switch. Including our biggest customer at the Central Office, the federal government.

Now, anyone in infosec can relate to this story. “Ugh!” I can hear you already saying, “What a stupid user! How could he even be such an idiot?” And yes, everything about this situation is entirely stupid, and it’s also entirely real. I mean, think about it. In order to make things, you know, actually work, the NSA — an agency that really should have known better — apparently failed to use features of MSSQL, SharePoint and Active Directory that would have prevented the Snowden leaks. Now, to be entirely clear to the Intelligence Community: nobody has leaked anything to me, I’m not in communication with anyone leaking classified data, and I’m just going based on what I’ve read in the open press. You’re already spying on me and put me on the SSSS list at the airport, so you can easily verify this. Please don’t hurt me. The only reason I bring this up is that if the best experts in the world at security — the people tasked with keeping the secrets of the most powerful country on Earth secure — can’t get this right, nobody can get it right. The architecture of information security is fundamentally flawed, because it fails to take into account the human element. People actually need to share data with each other in order to get work done, and nothing is a stronger motivation to violate security policy than needing to get actual work done.

How do you fix this? Well, normally a manifesto like this from someone like me would be accompanied by a product announcement, or the launch of Yet Another Infosec Startup (the name YA.IS might even be available for the right price). However, none of us at Cuddli are actually all that good at hacking, so we’re making an adorable bunny-themed dating app instead. I throw hacker parties, Pinguino makes art and Vudu dresses up in a pirate costume. None of us are smart enough to solve these problems. And we’re especially not smart enough to solve that crazy x86 ASM attack I saw yesterday. That was ridiculous. I mean, there was one slide — showing the vectors for a staged attack that can force a register reset and jump — where I just involuntarily said “fuck!” It was like a punch in the gut. I’ll miss you, x86.

But if you’re reading this, maybe there is something you can do to fix this. As it turns out, social graphs are really interesting, because they can tell us a lot about information security. There are lots of new ways to generate internal social graphs at a company, and I would like to encourage a wave of research into socially adaptable security models. These could make a lot more sense than the models we use today. Think about it. By the time I left my last job, I had physical access to buildings from California to China, access to the source code of some of the most sensitive products at the company, and access to the SharePoint team sites of teams I hadn’t been on in a decade. Now, I actually needed access to all this stuff at one time or another, so it didn’t make sense to shut off my access even though I only rarely needed it. Unfortunately, permissions are binary. On or off. There was no actual way for information systems to know whether the timing was appropriate. Or was there?

If, before flying from China to Redmond, I had an email conversation with the manager of a lab where I had some stuff deployed, and I then showed up in Redmond, badged into a building, and badged into the lab managed by the person I’d been talking to, there would be a number of strong data points suggesting that my visit was probably legitimate. This would be particularly true if my phone logged onto the wireless network with my domain account when I entered the building. Meanwhile, if my badge were used to enter buildings that I didn’t normally go into (even though I had access to virtually every IT facility and building in Redmond by nature of my position), my phone didn’t log in, and it was an unusual time of day, a flag might be thrown. If someone who doesn’t look like me is then observed on camera removing multiple laptops from multiple offices, it might warrant a further conversation.
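
To make that concrete, here is a minimal sketch, in Python, of how a handful of contextual signals like these could be rolled up into a rough legitimacy score before anyone decides to throw a flag. Every signal name, weight, and threshold here is hypothetical; this illustrates the shape of the idea, not any particular product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional, Set

@dataclass
class AccessContext:
    """Hypothetical contextual signals gathered around a single badge-in."""
    badge_building: str                                   # building the badge was just used in
    usual_buildings: Set[str]                             # buildings this person normally enters
    last_email_with_resource_owner: Optional[datetime]    # e.g. the lab manager I emailed
    phone_on_corp_wifi: bool                              # phone joined Wi-Fi with their domain account
    local_hour: int                                       # local hour of day of the access

def legitimacy_score(ctx: AccessContext, now: datetime) -> float:
    """Combine weak contextual signals into a rough 0..1 score. Weights are illustrative only."""
    score = 0.0
    # Recent conversation with the person who owns the resource being visited.
    if ctx.last_email_with_resource_owner and now - ctx.last_email_with_resource_owner < timedelta(days=14):
        score += 0.4
    # The badge swipe is in a building this person habitually enters.
    if ctx.badge_building in ctx.usual_buildings:
        score += 0.3
    # Their phone authenticated to the corporate wireless network on arrival.
    if ctx.phone_on_corp_wifi:
        score += 0.2
    # Normal working hours.
    if 7 <= ctx.local_hour <= 19:
        score += 0.1
    return score
```

A low score wouldn’t deny anything on its own; it would just raise a flag for a human to follow up on, which is exactly the kind of “further conversation” described above.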

Social graphs are the key to proactive security. In an enterprise, most business activity is remarkably predictable based on social graphs. But the whole IT security model breaks down the moment that Julie from Accounting needs something from Frank in Marketing, and the information security team never assumed that this sort of business relationship would happen. You know how these things end: the computer in the corner that runs the scanner, which happens to be an IPsec boundary machine because the ancient driver won’t run on anything but Windows XP, becomes a departmental file server too. Sarah, the intern, knows how to set up file shares, so before you can say “Everyone: Full Control” the company’s unaudited financials are there for the taking. Meanwhile, Erika the cafeteria pasta chef suddenly retired and bought a yacht last week, and nobody could figure out why. Well, congratulations, Erika; we definitely don’t want to be a wet blanket.
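
Here is an equally hedged sketch of the graph side of that argument, in Python with networkx, a toy graph, and made-up names: if the person touching a resource has no short path in the internal social graph to whoever owns it, that access is surprising and worth a look.

```python
import networkx as nx

# Toy internal social graph. In a real deployment the edges would be derived
# from email, meetings, chat, and other collaboration metadata.
g = nx.Graph()
g.add_edges_from([
    ("julie.accounting", "frank.marketing"),
    ("frank.marketing", "sarah.intern"),
    ("julie.accounting", "cfo"),
    ("erika.cafeteria", "facilities.manager"),
])

def access_is_surprising(accessor: str, resource_owner: str, max_hops: int = 2) -> bool:
    """Flag an access when the accessor has no short social path to whoever
    owns the data. The hop threshold is purely illustrative."""
    if accessor not in g or resource_owner not in g:
        return True
    try:
        return nx.shortest_path_length(g, accessor, resource_owner) > max_hops
    except nx.NetworkXNoPath:
        return True

# Julie pulling a file owned by Frank is unremarkable; the pasta chef pulling
# the CFO's unaudited financials is worth a conversation.
print(access_is_surprising("julie.accounting", "frank.marketing"))  # False
print(access_is_surprising("erika.cafeteria", "cfo"))               # True
```

Nothing here blocks anyone. Like the badge example above, it just prioritizes which events deserve human attention instead of burying them in yet another log.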

Information security, as an industry, has done a great job of making itself a checklist item. Antivirus, CHECK! Firewall, CHECK! IDS, CHECK! The problem is that none of this stuff actually works or prevents anything. Target and Sony had full and extensive logs of just how badly they were pwned, which served no purpose except to get their infosec teams fired after the fact. And these weren’t bad people. For the most part, they were doing everything right and following the latest prescribed industry standards. They were simply drowning in a combination of information overload and the harsh reality that in a profitable business, people need to get actual work done. So they download the dodgy unsigned MSI from the nice Chinese person on QQ, and run it as admin. Or shuttle data back and forth on thumb drives. Or set up a departmental Dropbox that they don’t tell the IT department about. You can’t blame them. They’re just trying to do a job, and the infosec industry is mostly getting in the way of that job, not helping them do it.

So, x86 is fucked. Game over. Assume everything is rooted. It already is, anyway. And it shouldn’t actually matter! The next wave of information security shouldn’t be about security technology. It should start with the social graph. And I, for one, can’t wait to see what the smartest people in the world go out and build.

About the author: I’m the founder of Cuddli and previously worked in a variety of senior global IT roles at Microsoft. I’m interested in technology that keeps people and their data safe without slowing business down. Feel free to reach out if I can help you.

