Managing a quarterly security review

Ryan McGeehan
Starting Up Security
9 min read · Aug 14, 2024

I like an approach that combines my favorite quarterly review practices I’ve been exposed to. Here’s the general meeting structure:

  1. Assurance: How do our existing mitigations look?
  2. Projects: How have ongoing projects gone?
  3. New assurances: Do we change how we track existing mitigations?
  4. New projects: What do we build next?

The rest of the essay breaks each topic down further and aims to prepare you to moderate a quarterly review of a security team’s work.

(20 Min) Retrospective on the previous quarter.

Discuss key results from assurance objectives exceeding their thresholds.

  • Example: We paid more bounties than expected and need to discuss our approach with product teams. (If no assurance thresholds were exceeded, there is nothing to discuss here.)

Discuss key results from project objectives.

  • Example: We just finalized trufflehog integration into CI/CD.

(20 Min) Decide on proposals for the next quarter.

Add, intensify, or keep existing assurance objectives.

  • Example: We need to ensure trufflehog is integrated with new CI/CD changes, rotate all secrets it finds, and escalate to quarterly review if >0 are found. (A rough sketch of such a gate follows.)
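As a hedged illustration of that key result: a minimal CI gate could wrap trufflehog and fail the pipeline whenever any finding is reported. This is a sketch, not a prescribed integration; the subcommand and flags reflect trufflehog v3 and may differ in your version, and the rotation/escalation steps are left as hypothetical hooks outside the script.

```python
#!/usr/bin/env python3
"""Sketch of a CI secrets gate around trufflehog (assumes trufflehog v3 on PATH).

Flag names and JSON fields vary by version; treat this as a starting point.
"""
import json
import subprocess
import sys


def scan_repo(path: str = ".") -> list[dict]:
    # With --json, trufflehog emits one JSON object per finding, one per line.
    proc = subprocess.run(
        ["trufflehog", "filesystem", path, "--json"],
        capture_output=True,
        text=True,
    )
    findings = []
    for line in proc.stdout.splitlines():
        line = line.strip()
        if line.startswith("{"):
            findings.append(json.loads(line))
    return findings


if __name__ == "__main__":
    findings = scan_repo()
    if findings:
        # >0 findings: fail the build. Rotating the secrets and escalating to
        # the quarterly review happen outside this script (ticket, Slack, etc.).
        print(f"{len(findings)} potential secrets found; failing the build.")
        sys.exit(1)
    print("No secrets found.")
```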

Create project objectives.

  • Example: We need to decide on EDR and deploy it with IT.

Objectives generally follow the famous OKR structure. I’ll avoid general OKR discussion, but you can find some of my other writing here. In the simplest terms, an OKR is a directional statement (the Objective) and testable, specific, actionable outcomes (Key Results).

OKRs already have quite a bit of literature around them. This essay will be more opinionated on their usage in security.

Next, we’ll discuss assurance objectives and a few models for guiding a team towards project objectives.

What is an assurance objective?

An assurance objective has opinionated key results that support long-standing mitigations. It helps keep promises and build a program that freezes particular risks over the long term.

Think of an assurance objective as the opposite of a project that chases a shiny object. Assurance objectives echo the commitments required by your previous projects. They are the ongoing care and feeding left behind from a project.

Assurance is knowing that a risk remains mitigated.

A simple non-security example: Your family gets a dog. That’s a project. You know… getting the crate, bed, food, leash, and collar. You do all that stuff once as a project.

The assurance objective is feeding the dog 100% of all days, getting 100% of vet shots, working with a dog trainer 100% of the lessons, walking 100% of the mornings, and ordering a bag of food monthly. That’s the ongoing commitment that the project required.

Now, let’s use a security example. We’re not adopting dogs; we’re mitigating risks. Let’s say you’ve manually ensured you host no public S3 buckets with sensitive data in Q1. In Q1 of the following year… is that still true? Has this changed when you weren’t looking?

An assurance objective commits your team to maintaining your risk expectations, quarter after quarter, forever.

First, you mitigate risk. Then, you build assurance that it is still mitigated.

You see, we continually build things to mitigate risks. We also need to simultaneously maintain and improve a baseline level of assurance with routine tasks, metrics, and operational standards. That takes commitment, and these commitments build over time as we engage in mitigation work.

Otherwise, it can get ugly out there. A team can easily repeat the project work of its predecessors when they no longer trust (or even know about) their mitigations. It’s a bunch of nothing-building—a series of short-lived outcomes.

Let’s interrupt the cycle of endless projects and build assurance. We do this through quarterly reviews.

How do we create an assurance objective?

Assurance objectives often target a desirable state that should be maintained permanently.

Let’s say that all endpoints must meet an endpoint security standard involving EDR, MDM, and some configuration policies. In particular, we want coverage of our endpoint security standard to stay above 95%. Maybe we phrase the assurance objective this way:

Objective: Maintain our endpoint security standards.

A coverage target like this is trickier to measure than it looks. The numerator (# of managed hosts) is easy to gather: just list all assets from EDR or MDM. The denominator is tricky because it’s a known-unknown. You know there are unmanaged hosts, but not how many. How will you determine how many hosts you don’t know about?

Craft a variety of key results that can support this:

  • Total endpoints: Build/run a playbook to gather a forecast of total known endpoints from DNS, VPN, DHCP, etc. These numbers will conflict, but you can create an interval estimate (high/low) with the data.
  • Total Standard Hosts: Gather hosts that meet the standard from EDR, MDM, etc. This should be a single number. (# of managed hosts)
  • Ratio report: # of managed hosts / # of total endpoints = % endpoint standard coverage (see the sketch after this list)
  • Improve the playbooks: This is the time to add or create ideas to improve the forecast of the denominator. Improve a script, add new management telemetry, etc.
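To make the ratio concrete, here is a minimal sketch (not anyone’s actual tooling) that turns conflicting endpoint counts from DNS, VPN, and DHCP into a high/low coverage interval against the 95% threshold. All numbers are illustrative placeholders.

```python
# Sketch: endpoint standard coverage as an interval, since the denominator
# is a known-unknown. All numbers are illustrative placeholders.

managed_hosts = 940  # numerator: hosts meeting the standard, per EDR/MDM

# Conflicting estimates of total endpoints from different sources.
denominator_sources = {"dns": 1010, "vpn": 970, "dhcp": 1055}

low_total = min(denominator_sources.values())
high_total = max(denominator_sources.values())

# Coverage looks best when the denominator is smallest, worst when largest.
best_case = managed_hosts / low_total
worst_case = managed_hosts / high_total

THRESHOLD = 0.95
print(f"Endpoint standard coverage: {worst_case:.1%} to {best_case:.1%}")
if worst_case < THRESHOLD:
    print("Worst case is below threshold; surface this at the quarterly review.")
```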

Now, the assurance objective is written. Assurance key results should not be large projects (with caveats). They are generally operational tasks, runbooks, or metrics/KPIs like those mentioned.

The caveat that requires critical thought is “Improve the playbooks.” If you’ve found a way to eliminate the toil of the assurance objective, that may introduce project work to make assurance more efficient; it just takes a project spot in the next cycle. The same goes for business changes, e.g., an M&A that has disturbed how endpoints are tracked, or a new OS to support. That will emerge as project work after being identified under the guise of an assurance objective.

How do assurance objectives impact quarterly reviews?

Project work from previous quarters permanently increases the assurance objectives you’ll maintain. However, the actual work involved doesn’t necessarily increase linearly. There’s some consolidation, subtraction, and demotion of assurance objectives.

Last quarter's assurance objectives are reviewed at the beginning of the meeting. Don’t enumerate and discuss each one by one. Instead, only surface key results that signal assurance has been lost. For instance, a metric that has left its allowable threshold (<95% of laptops are managed) should be discussed.

Or surface a metric whose threshold needs adjusting to better capture its objective: “The CSO says they only panic under 90%, so maybe we can lower the threshold?”

Adjustments could include changing a metric threshold or a new perspective on gathering metrics. Otherwise, all the greenlit assurance objectives are mostly skipped at the review unless someone needs to bring one up.
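One hedged way to picture that filtering: represent each assurance key result as a metric, a threshold, and a direction, then surface only the ones that are out of bounds at the review. The structure and values below are invented for illustration, not a prescribed format.

```python
# Sketch: surface only assurance key results whose metric has left its
# allowable threshold. Names and values are invented for illustration.

assurance_key_results = [
    {"name": "Endpoint standard coverage", "value": 0.93, "threshold": 0.95, "direction": "min"},
    {"name": "Secrets found in source",    "value": 0,    "threshold": 0,    "direction": "max"},
    {"name": "Unfixed critical vulns",     "value": 1,    "threshold": 0,    "direction": "max"},
]


def breached(kr: dict) -> bool:
    # "min" metrics must stay at or above their threshold; "max" at or below.
    if kr["direction"] == "min":
        return kr["value"] < kr["threshold"]
    return kr["value"] > kr["threshold"]


for kr in assurance_key_results:
    if breached(kr):
        print(f"Discuss at review: {kr['name']} ({kr['value']} vs {kr['threshold']})")
# Everything still within its threshold stays green and is skipped in the meeting.
```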

A quick story…

This story inspired me to lean very hard into assurance objectives.

A consulting client of mine was using a popular CSPM. They reviewed findings and remediation progress at the quarterly review. This was also the first quarter with a new engineering lead, who threw a wrench into the review.

The security team mentioned how their standards for S3 buckets were being met, and the engineering lead asked for clarification: “I don’t see some of the bucket names we use daily in your report.”

Long story short: A large data science organization used separate AWS and GCP accounts to pull data from the production IaaS, including the highest-risk data they were concerned about. Unbeknownst to the security team.

Common story!

The punchline is that security tooling wasn’t connected to the unknown IaaS accounts with risk.

From then on, an assurance objective was created: All of Cloud IaaS meets the cloud security policy.

  • 100% of all known IaaS accounts connected to CSPM (a reconciliation sketch follows this list)
  • Complete runbook with IT, Eng, and Finance to flesh out high-risk shadow eng/IT.
  • Security observes 100% of monthly infrastructure lead meetings.
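A hedged sketch of the first key result for the AWS side: list the accounts the organization knows about and diff them against what the CSPM reports as connected. boto3’s Organizations API is real; get_cspm_connected_account_ids() is a hypothetical stand-in for whatever export or API your CSPM offers.

```python
# Sketch: reconcile AWS accounts known to the organization against the
# accounts the CSPM is actually connected to.
import boto3


def get_org_account_ids() -> set[str]:
    org = boto3.client("organizations")
    ids = set()
    for page in org.get_paginator("list_accounts").paginate():
        for account in page["Accounts"]:
            if account["Status"] == "ACTIVE":
                ids.add(account["Id"])
    return ids


def get_cspm_connected_account_ids() -> set[str]:
    # Hypothetical placeholder: pull this from your CSPM's API or a CSV export.
    return set()


if __name__ == "__main__":
    missing = get_org_account_ids() - get_cspm_connected_account_ids()
    if missing:
        print(f"{len(missing)} known accounts are not covered by the CSPM: {sorted(missing)}")
    else:
        print("100% of known IaaS accounts are connected to the CSPM.")
```

Note that a diff like this only catches accounts the organization already knows about; truly shadow accounts are what the runbook with IT, Eng, and Finance is for.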

As a result, there is growing assurance that no IaaS accounts are around to surprise anyone. We don’t have to assume that they’re tracked down and brought up to standards—regular objectives cover this.

It's not just a policy… it’s regular work.

What are example assurance objectives?

Assurance objectives are highly opinionated and generally act as sniff tests for the risk areas and domains in which they are implemented. Here are some examples, but you shouldn’t consider these as copypasta. Instead, consider risk expectations you don’t want to slip out from under you.

  • Incidents: Set a threshold of zero. All near misses and incidents are reviewed at the quarterly review, and objectives are considered to avoid repeats. (We had one incident to discuss.)
  • Endpoint Security: EDR, MDM, and standardized configuration. Set a high threshold for standard coverage across the corporate fleet. Regularly run a minimal hunt for systems living outside of your metrics. (We have 93–95% coverage of our endpoint standard. Our hunts have yet to find hosts beyond our telemetry).
  • Secrets Management: Set a low threshold for the number of credentials found in source code, in Slack, or among employee credentials in leaked data, or otherwise left unmanaged outside of a credential manager (production or client-side). Hunt for credentials where you don’t have reliable automation. (We found no credentials this quarter).
  • Bounty Program: Set a threshold for anticipated bounty volume, per-report rewards, and SLAs on responsiveness. Discuss making payouts, thresholds, and SLAs more aggressive after long stretches. (We had one crit that exceeded our payment threshold).
  • Vulnerability Management: Set a threshold for unmitigated vulnerabilities meeting a certain severity and use the review meeting to coordinate escalation. (One critical vulnerability has not been fixed because of a partner agreement that makes this a breaking change)
  • Infrastructure: Set high thresholds for coverage of cloud security tooling (CSPM, etc.), low thresholds for unintentional network/server exposures, and accounts with unfettered, privileged access. (We have not found any new infrastructure accounts)
  • Identity, Auth, Authz: Require high coverage of applications behind SSO, and set low thresholds for unused accounts and for authentication happening outside of SSO. (We found one marketing tool that isn’t in SSO and was paid for with a personal expense account)
  • Offboarding: Set low thresholds for “spooky” terminations, where access may not have been thoroughly revoked, the employee may have misused access, or the employee was somehow tipped off to their termination. (A former employee harassed an employee over Slack after they were terminated)
  • Threat Intelligence: Track new prominent or categorical threat actors and TTPs, especially in organizations where spam, fraud, or they-are-actually-persistent-to-you threat actors are common. (A new fraudster’s preferred campaign is phishing with .png images)

Consolidating and automating assurance objectives

Assurance objectives often start from small projects and collect like dust. First, you start tracking MDM coverage, then EDR, etc. These eventually pollute your quarterly review with metrics.

As you become more sophisticated, collapse similar assurances into grouped standards (like “Endpoint Security Standard”). Push the particulars of the program towards the individuals directly responsible for each mitigation, and trust them to escalate any quality issues with the underlying standard or the key results that support the assurance objective.

You’re not trying to spam the leadership group with thousands of small assurances—try abstracting the objectives and letting teams manage more granular assurances their way.

You may already use some methods of dashboarding and reporting metrics. If you already use a compliance tool like Vanta to collect evidence, you can extend it further with custom integrations and custom tests. This plugs assurance objectives into the same place you manage other compliance issues, and you can build thresholds as tests.

The next quarter’s objectives

After reviewing the previous quarter’s assurances, pay immediate attention to repairing any outstanding ones before engaging in new projects. Remember, new projects will require their own assurances, so it’s best not to build on a cracking foundation.

I have a few models I turn to when evaluating new work. If intuition is failing you, these are models you can use to help brainstorm what the next quarter will look like.

Risk, Governance, Trust: I view all security work as falling into one or more of these three categories. A new project should reduce risk, satisfy a compliance or legal need, or build trust with customers. Some companies find themselves heavily indexed in some of these and not others, whether they like it or not.

Security Work Model: A security team must operate efficiently without burying itself in toil. You can catch up with the business, improve your capabilities, smooth out or eliminate operational toil, or act on your learnings from incidents.

The Jaded Manager Model: One also needs to look at work from a practical standpoint. Some security work is simply more interesting than other work. Maybe you aren’t staffed for the highest-priority things. You may need to punt on some work while an engineer is on leave or while you wait for a candidate to join. Maybe you have objectives to actively recruit a team, build a pitch, influence budget and headcount, or speak at a conference. This meta-work goes into running a security team; it violates the previous two models but is still critical.

Resources and hiring

Assurance objectives will seem additive to existing work. They will seem overwhelming and beyond your current resources. That demonstrates that you need more resources. We’re in security, so let’s be real; this is almost always the case.

Being transparent about the work you maintain and the work you plan to build is a crucial part of asking for resources. Operational work, the keeping-the-house-together kind, often gets ignored in favor of the highly praised project work.

Showing the passing/failing assurance objectives elevates all the shadow effort that goes into a healthy security team. There’s no better way to illustrate where budget can be applied to eliminate assurance toil.

Conclusion

Build valuable assurances after projects are completed and keep them going long-term with quarterly reviews.

Ryan McGeehan writes about security on scrty.io
