False Positives and Alert Fatigue: What Can You Do?
Tracking the effectiveness of your marketing campaigns is one of the most important elements in today’s race to the top of the business mountain. Questions continually pop up, like “Who’s reading that marketing email we paid for? How’s that new discount trending? Are the blogs topical enough for accountable engagement?” Etcetera, ad nauseam.
In fact, current numbers show 56.1% of all Internet websites use web analytics to generate reports on user behavior, page views, and overall engagement with the site. These tools also capture detailed technical metrics such as operating system, geolocation, and browser type, with Google Analytics the most popular, serving around 30 million websites.
While marketing teams and other legitimate users implement Google Analytics (and other URL tracking and analytics systems) to measure metrics like return on investment (ROI), cyber fraudsters also use the service to track technical markers such as browsers, countries, and visitor operating systems, allowing them to tweak their phishing campaigns and make malicious domains more visible to their targets. This is a different style of attack, separate from the spam/malware world, and it is very hard to detect.
Because the domains used by systems like Google Analytics are “trusted”, they are attractive carriers for phishing URLs. In essence, URL tracking systems use parameters to pass various pieces of information for managing advertising campaigns. One of these parameters is typically the final URL the ad service should redirect users to after they click the tracking link. For Google Ads, this is the adurl parameter. By replacing the adurl value with a phishing link, threat actors can easily subvert a legitimate Google Ads tracking URL and use it in attacks.
As a result, piggybacking on a trusted domain is appealing to cyber criminals: it not only increases the odds of making it past spam filters, it also simplifies the campaign itself. By editing an existing URL, attackers are spared the burden of setting up their own redirect and can take advantage of infrastructure already in place to carry out their malicious intent.
This is where things get even more complicated, and it all starts with alerts like this: “We’ve identified new leads in your WebLeads for Google Analytics”. Of course, I’m referring here to automated detection services.
Many organizations use automated detection to monitor incoming and outgoing traffic. The trouble is that these systems often block, or alert on, legitimate Google properties, flagging them as malicious. And the bigger the company, the more frequently it occurs.
While “over-caution” is often the safest route, the real problem is that it becomes more and more difficult for the SOC and/or IT team to ascertain what is safe and what isn’t; the line between legitimate and malicious activity within a business’s everyday web traffic becomes blurred.
Ultimately and inevitably this leads to alert fatigue, where a steady influx of web traffic alerts, most of them benign, desensitizes security teams to the point where serious threats get lost in the noise. A classic case of “crying wolf”.
Another, and perhaps bigger, problem is the rift of mistrust that can open between an organization’s IT and SOC teams, particularly when the SOC is an outside, independent hire. Because a SOC is specifically trained to recognize real threats and an IT team generally doesn’t specialize in this discipline, what looks like inaction on the SOC’s part can cause panic within the organization and an unmanageable wave of communication as users worry about every alert they come across.
Frustration mounts as time is wasted on both sides because the alert problem never seems to get fixed. The organization wonders why it paid to bring in a SOC to handle the alerts, while the SOC team is overwhelmed by the alert volume and the worry it causes. Meanwhile, this creates the perfect storm for attackers to swoop in unnoticed.
Ideally, an effective whitelist filter that reliably recognizes legitimate traffic across the board would be the best course of action, but as it stands today that simply can’t be done. The truth is, there’s currently not much to be done beyond enforcing proper cyber hygiene: password protocols and rotation, MFA, and strong lines of communication spiced with a good deal of patience.
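Even a partial whitelist can take the edge off the alert volume while the broader problem remains unsolved. A minimal sketch of what that filtering might look like, assuming alerts arrive as simple records with a `url` field; the `ALLOWLIST` contents, the alert format, and the function name are hypothetical:

```python
from urllib.parse import urlparse

# Hypothetical, manually vetted whitelist of analytics domains; curating
# and maintaining this list is exactly the tedious work described above.
ALLOWLIST = {"google-analytics.com", "googletagmanager.com"}

def should_escalate(alert: dict) -> bool:
    """Suppress alerts for vetted analytics domains; escalate everything else."""
    host = urlparse(alert.get("url", "")).hostname or ""
    # Match the vetted domain itself or any of its subdomains.
    return not any(host == d or host.endswith("." + d) for d in ALLOWLIST)

alerts = [
    {"url": "https://www.google-analytics.com/collect?v=1"},
    {"url": "https://phish.example.net/login"},
]
to_review = [a for a in alerts if should_escalate(a)]
```

Note that this is precisely where the adurl problem bites: a domain-level whitelist like this one would also suppress a subverted tracking URL on a vetted domain, which is why manual curation alone can only alleviate, not solve, the fatigue.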
What’s next, then?
Per Mosegaard, the CTO of Zeroguard, has a unique take on the state of AI today. He explains, “Think of AI like teaching a five-year-old child. You instruct the child to separate all of the documents with shiny, pretty pictures on them from a big pile of documents. The rest are to be put into a box which you will lock later on. Well, one of those shiny, pretty documents contains sensitive financial information on it. The child will naturally put it with the rest of the pretty docs, not thinking twice, doing what they are told.”
Ultimately, AI detection platforms have to evolve a dependable, accurate way to comb through embedded code and filter out the false positives triggered by legitimate tracking and analytics sites. In the meantime, meticulous manual whitelisting, along with a concerted effort to share findings with the white hat community for archival purposes, can somewhat alleviate the alert fatigue, the mistrust, and the overall security engineer burnout.
If only a business analytics program did this: combined real-time, relevant security alerts specifically tailored and filtered for an organization while still providing the valuable marketing analytics, all in one neat, tidy, intuitive and easy-to-use package. Perhaps I’m aware of one…