Ensuring Alert Readiness: Lessons from Schrödinger’s Cat

Adan
4 min read · Sep 2, 2023


Schrödinger’s Cat and Cybersecurity Alerts

Schrödinger’s cat is a thought experiment. Picture a cat inside a sealed box, accompanied by a mechanism that might harm it based on a random event. Until we open the box, we’re uncertain about the cat’s fate: the cat is theoretically both alive and dead at the same time until we verify. A similar dilemma arises in the world of cybersecurity alerts. If we don’t regularly test our alerts, they remain in a state of ambiguity: they might be working perfectly or not at all, and we only uncover the truth when an incident occurs. But can we afford to wait for an incident to discover the reality?

According to IBM, the global average cost of a data breach reached USD 4.45 million in 2023, a 15% rise over three years. These figures underscore the importance of investing in cybersecurity and in alerts that detect attacks before it is too late. And as threats evolve, alerts covering post-compromise scenarios become especially valuable. As noted in the Mandiant article Revisiting Traditional Security Advice for Modern Threats:

Recent attacks teach us that while the initial exploits vary dramatically, attacker’s post-exploit operations are much more consistent. This means that we have a more consistent post-exploit and secondary stage detection experience.

But what’s the point of investing in alert systems if they end up in a state of ambiguity?

Various factors can cause an alert to become ineffective over time. Perhaps the log source that feeds it stopped sending data, or someone inadvertently altered the alert’s parameters. Maybe operational changes or software updates have affected how systems report logs. Investing in cybersecurity alerts is crucial, but the commitment to regularly test and maintain them is equally vital. Let’s look at some strategies to test and maintain our alerts.

Ensuring Alert Readiness

1. Automated Testing

Automated end-to-end threat detection testing is ideal. It simulates suspicious behavior, waits for an alert to trigger, and then confirms the alert works. For specific alerts, this approach can be straightforward. For instance, to test an alert that fires whenever something queries the cloud metadata service, a script can issue that query and then check whether the SIEM raised the corresponding alert. We can write such a script ourselves or use a tool like Threatest, a Go framework that facilitates these tests by letting users design and execute detection scenarios. A minimal sketch of the pattern is shown below.
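Here is a minimal Python sketch of that end-to-end loop, assuming a hypothetical SIEM alert-search API: the SIEM_ALERTS_URL endpoint, the bearer-token auth, the query parameters, and the rule name cloud-metadata-service-access are all placeholders to adapt to your SIEM.

import time

import requests

# Hypothetical SIEM alert-search endpoint and token -- replace with your
# SIEM's real API (Splunk, Elastic, Datadog, etc.).
SIEM_ALERTS_URL = "https://siem.example.com/api/v1/alerts"
SIEM_TOKEN = "redacted"  # load from a secret store in practice

# The cloud instance metadata service address ( is the
# link-local metadata IP used by the major cloud providers).
IMDS_URL = ""

def trigger_behavior() -> None:
    """Simulate the suspicious behavior: query the cloud metadata service."""
    try:
        requests.get(IMDS_URL, timeout=2)
    except requests.RequestException:
        pass  # off-cloud the request fails, but the attempt still gets logged

def alert_fired(rule_name: str, wait_seconds: int = 300) -> bool:
    """Poll the SIEM until the expected alert appears or we time out."""
    deadline = time.time() + wait_seconds
    while time.time() < deadline:
        resp = requests.get(
            SIEM_ALERTS_URL,
            headers={"Authorization": f"Bearer {SIEM_TOKEN}"},
            params={"rule": rule_name, "since": "-15m"},
            timeout=10,
        )
        resp.raise_for_status()
        if resp.json().get("alerts"):
            return True
        time.sleep(15)
    return False

if __name__ == "__main__":
    trigger_behavior()
    ok = alert_fired("cloud-metadata-service-access")
    print("Alert fired as expected." if ok else "ALERT DID NOT FIRE: investigate.")

Run on a schedule, for example as a daily CI job, this turns alert readiness into a pass/fail signal instead of an assumption.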

2. Semi-Automated Testing

While full automation is ideal, it’s not always possible. In that case we can use a semi-automated approach, where tools help trigger alerts and a human verifies the results afterward. These are examples of tools that help us emulate offensive attack techniques (a sketch of the workflow follows the list):

  • MITRE Caldera™: A platform aimed at automating adversary emulation and assisting manual red teams.
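As a hypothetical sketch of the semi-automated workflow, a script can execute benign stand-ins for attack techniques and then hand a verification checklist to a human. The technique-to-command mapping below is illustrative; the IDs are standard MITRE ATT&CK identifiers.

import datetime
import subprocess

# Benign commands standing in for common ATT&CK discovery techniques.
TECHNIQUES = {
    "T1033 System Owner/User Discovery": ["whoami"],
    "T1082 System Information Discovery": ["uname", "-a"],  # `systeminfo` on Windows
}

def run_emulations() -> None:
    start = datetime.datetime.now(datetime.timezone.utc)
    for name, cmd in TECHNIQUES.items():
        print(f"[*] Executing {name}: {' '.join(cmd)}")
        subprocess.run(cmd, capture_output=True, check=False)
    # The automated half ends here; a human closes the loop.
    print("\n[!] Manual verification checklist:")
    print(f"    1. Search the SIEM for events after {start.isoformat()}")
    print("    2. Confirm each technique above produced its expected alert")
    print("    3. Open a ticket for any technique that did not alert")

if __name__ == "__main__":
    run_emulations()

A platform like Caldera automates the execution half at much larger scale; the manual verification step remains the same.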

If you are looking for a course that covers how to execute scripted attacks for offensive technique emulation, look at this.

3. Manual Testing

Manual testing provides a hands-on approach, allowing for meticulous alert examination. However, it comes with challenges. The primary concern is that it is time-consuming, which often means tests are conducted less frequently. That infrequency poses a significant risk: alerts might not be tested as regularly as they should be, so they could drift back into that ambiguous state of being both working perfectly and not at all, the very situation we aim to avoid.

At its most basic, manual testing can involve individually simulating the conditions for each alert to determine if it triggers as expected. This method ensures that each alert functions correctly without complex engagements.

We can also undertake more complex engagements that will help with alert testing, such as:

  • Assumed Breach Testing and C2 assessment: Building on Mandiant’s insight that post-exploit operations are consistent, this test normally begins from an assumed-compromised device or user account and performs different post-exploit techniques. The purpose is to test the ability to detect and alert on the activities an attacker might perform after the initial compromise. The starting point and the objective to reach may differ and must be defined before the engagement (a minimal sketch of such a starting script follows this list).
  • Red Team Engagements: Testers emulate threat actors, offering real-world alert testing scenarios. In a red team engagement, typically only a few members of the organization are aware of it, so the alerts, the people, and the procedures are all tested.
  • Purple Team Engagements: A collaboration between the offensive (red) and defensive (blue) teams to improve security controls. Here the red team emulates different tactics and techniques and works closely with the blue team to ensure each one is detected.
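As a hypothetical starting script for an assumed-breach exercise, each phase below emits a timestamp the blue team can correlate against their alerts. The target hostname, port, and sensitive file are placeholders to agree on before the engagement.

import datetime
import socket
import subprocess

def stamp(phase: str) -> None:
    """Print a UTC timestamp the blue team can match against alert times."""
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    print(f"[{now}] {phase}")

def discovery() -> None:
    stamp("discovery: local user and network enumeration")
    subprocess.run(["whoami"], capture_output=True, check=False)
    subprocess.run(["ip", "a"], capture_output=True, check=False)  # `ipconfig` on Windows

def credential_access_stand_in() -> None:
    stamp("credential access: reading a monitored sensitive file")
    try:
        with open("/etc/passwd", "rb") as f:  # swap in an agreed canary file
            f.read(64)
    except OSError:
        pass

def lateral_movement_stand_in() -> None:
    stamp("lateral movement: connection attempt to an agreed target")
    try:
        socket.create_connection(("target.internal.example", 445), timeout=3)
    except OSError:
        pass  # the attempt, not its success, is what network alerts should see

if __name__ == "__main__":
    discovery()
    credential_access_stand_in()
    lateral_movement_stand_in()
    stamp("done: compare these timestamps with SIEM alert times")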

Conclusion

Just as we don’t know whether Schrödinger’s cat is alive or dead until we check, we can’t be sure our cybersecurity alerts are working unless we test them regularly. As cyber-attacks become more costly, it’s crucial to ensure our alert systems are always in working order. That means running checks ranging from frequent, quick, fully automated tests to deeper, more time-consuming engagements. It’s similar to testing your smoke alarm: it’s better to know it works before an actual fire. Remember, it’s not just about having alerts; it’s about ensuring they function correctly.



Adan

Cyber Security Engineer interested in Pentesting | Cloud Security | Adversary Emulation | Threat Hunting | Purple Teaming | SecDevOps - https://adan.cloud/