Your security policies are sh!t

dunnhumby
dunnhumby Science blog
4 min readAug 14, 2018

by Troy Cunningham

Security policy is tricky business, and I’m convinced that many of us are doing it wrong. What I really mean to say is that many of our security policies are probably a bit rubbish. I’d like to use stronger words, but I’ll try to keep this PG-13. Let me caveat that this is my personal professional opinion, and it doesn’t necessarily reflect the opinion of my business or businesses I’ve worked with.

I’ve been mulling over this topic for a while, trying to work out how best to express myself on it, and trying to figure out whether I’m just a bit crazy and it’s only me who feels like this. Thankfully, I’ve been to a few sessions with security colleagues in other companies, in government and in academia, and they’re mostly singing the same song.

Recently I attended a Nimbus Ninety event with the catchy title of “The Humans Are Coming — Securing the Human,” and it has given me a lot of ammunition. I feel like the event was hours of me just nodding my head, thinking “geez, these folks really get it.” The rules of the event are that I can’t name names, but I will borrow and paraphrase many of the things said there to make some of my points here. I’m standing on the shoulders of giants. If you were a speaker or a panellist at the event, all I can say is mad props to you, and I hope you continue to share your genius with the world.

The first speaker at the event basically laid down the law by quoting General Douglas MacArthur: “Never give an order that can’t be obeyed.” The hidden wisdom here (which I admit was NOT obvious to me) was about what undermining the authority of an order, or the authority of a policy, does to all other policies. You see, if you make a policy that’s so onerous that it’s in every meaningful way easier and better for individuals to disregard it, then you’ve taught your people that they know better than you. And I hate to break it to you: they might be right. If they think that policy is fundamentally incorrect, what will they then think of some other policy? What about the policy makers?

It doesn’t take much searching in an organisation to see the disconnect between policy makers and the people the policy is applied to. In my experience there’s often a lack of understanding of why policies exist. If you ask people in your organisation, they’ll often point out holes and inconsistencies in your policies. About a year ago, someone in my organisation asked me a question regarding a new acceptable use policy I had written — he said “What are you afraid of? What’s your biggest fear that this policy addresses?” It was a great question, and an honest way for someone affected by my policy to cut through the BS of my long-winded documents and get to the heart of the matter. I think he understood what my fears were after I explained it, but his query sharpened my mind on policy writing. He was a smart guy, and if the purpose of the policy wasn’t immediately obvious to him, then I was definitely doing something wrong.

Here’s the thing. We’re all people, and at the end of the day, most of us are fairly motivated to work and to be productive. No matter who you are, if there are nonsensical barriers between you and what you need to get done, you get frustrated. It’s often said that the measure of a mature organisation is in the three P’s: People, Process and Policy. But without the people, you’ve got no organisation, and if your processes and policies (and controls) aren’t working for your people, you’re doing it wrong.

Let’s take a fairly common industry problem — privileged access for software engineers. At some point or another, software engineers will need to innovate to improve a product, which will mean trying new code, libraries, packages and languages. It’s just part of the job. But trying new things can come with risks. Years ago, I worked somewhere where a developer’s testing environment (running on his laptop) inadvertently brought down a section of the network. With an experience like that, it seems obvious to establish that “Policy X will not permit privileged access to developers,” and end it there. Threat neutralised! But without any other processes or technology to support that, those people will just leave, or find loopholes because they can’t do their jobs, which means the business has much bigger problems, like a growing shadow IT estate or a shrinking pool of highly skilled workers. Generally people don’t want to break the rules, but neither will they abide by them if the rules are a barrier to accomplishing the work they are paid to do.

A better answer — and I’m not saying a perfect answer — would be more detailed and comprehensive. “Policy Y will constrain developers with privileged access to a safe (separated) network.” That’s not incredibly detailed, but Policy Y is already more complex than Policy X, and it has dependencies: the business must have an environment ready to support such a policy. Organisations that set themselves up to enable something like Policy Y allow their people to work safely and efficiently. Getting the job done needs to be easy — it needs to be “on rails,” as it were. Policies and controls should reflect that, and processes should get people started on the right track in such a way that there’s no incentive to derail or get around said policies and processes.
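To make the idea of a “safe (separated) network” a bit more concrete, here is a minimal sketch of how Policy Y might be enforced at a network gateway. The article doesn’t prescribe any implementation, so the subnets, interface-level rules and addressing below are entirely hypothetical; in practice this could equally be VLANs, cloud security groups or a network policy engine.

```shell
#!/bin/sh
# Hypothetical sketch of Policy Y: privileged developers live on a
# separated subnet (10.20.0.0/16 here, an invented example) that can
# reach the internet for packages and libraries, but can never touch
# the production network (10.1.0.0/16, also invented).

# Let the dev subnet out to anything that is NOT internal address space,
# so developers can fetch new code, libraries and packages freely.
iptables -A FORWARD -s 10.20.0.0/16 ! -d 10.0.0.0/8 -j ACCEPT

# Explicitly block the dev subnet from reaching production.
iptables -A FORWARD -s 10.20.0.0/16 -d 10.1.0.0/16 -j DROP

# Default-deny everything else from the dev subnet, so a runaway test
# environment can't take down other internal segments.
iptables -A FORWARD -s 10.20.0.0/16 -j DROP
```

The point of a sketch like this is the “on rails” experience: the developer keeps full privileges inside the sandbox and never needs to ask permission to experiment, while the blast radius of a mistake is contained by design rather than by prohibition.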

I feel like I’ve said my piece on the matter, but I’ll leave you with one last quote from the event: “The weakest link in security is not humans, but poorly designed security systems.”
