Is AI the Answer to Data Overload?

Guest blog by Canadian Cybersecurity 2018 contributor Mike Redeker, VP and CIO for Canadian Pacific Railway

CLX Forum · 3 min read · Dec 18, 2018


We are suffering from data overload. We get more and more alerts and data, and all of it must be converted into actionable information. Artificial intelligence lets us identify the anomalies within the mass of alerts. It lets us bring the important data to the surface.

At CP, we’re beginning to explore AI to rationalize the large amount of security data we gather on a regular basis. Once the data is rationalized, we use the results to improve our decision-making process. For example, CP might take data from its entire track network and the surrounding areas to highlight where historical patterns do not match. AI can identify these anomalies and flag them for further attention.
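To make that idea concrete, here is a minimal sketch of what flagging records that don’t match historical patterns can look like, using scikit-learn’s IsolationForest. The feature names, values and contamination rate are purely illustrative assumptions, not CP’s actual data or pipeline.

```python
# Minimal sketch: surface anomalous records from a mass of alert data.
# Feature names and numbers are illustrative assumptions only.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical alert records already reduced to numeric features per source.
alerts = pd.DataFrame({
    "events_per_hour": [12, 15, 11, 14, 240, 13],
    "failed_logins":   [1, 0, 2, 1, 58, 0],
    "bytes_out_mb":    [40, 35, 42, 38, 900, 41],
})

# Fit on the data; records that deviate strongly from the rest score -1.
model = IsolationForest(contamination=0.1, random_state=42)
alerts["flag"] = model.fit_predict(alerts)

# Bring only the anomalies to the surface for analyst review.
print(alerts[alerts["flag"] == -1])
```

In a real deployment the model would be fitted on historical baselines and re-scored as new data arrives, but the shape of the workflow is the same: let the algorithm separate the handful of unusual records from the thousands of routine ones.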

At many organizations, the volume of information about threats and attacks is so vast that cybersecurity experts cannot begin to cope with it manually. Using machine learning and AI offers a way forward, and it frees up analysts to work on other critical tasks. AI makes it possible to correlate data from attacks that might originate from many different countries over time. AI can also pay big dividends by reducing the time needed to review threats. Previously, companies might take weeks to detect an attack. Now it’s possible to identify and respond to a breach incredibly quickly, within hours or even minutes.
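As a rough illustration of what correlating attack data across countries and time can mean in practice, the sketch below groups events into hourly windows and counts how many distinct source countries hit the same target. The column names and sample events are assumptions for the example, not any particular product’s schema.

```python
# Illustrative sketch: correlate events over time so a coordinated campaign
# from several countries against one target stands out.
import pandas as pd

events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2018-11-01 03:05", "2018-11-01 03:20", "2018-11-01 03:40",
        "2018-11-01 09:00", "2018-11-02 03:10",
    ]),
    "source_country": ["RU", "CN", "BR", "CA", "RU"],
    "target": ["vpn-gw", "vpn-gw", "vpn-gw", "mail", "vpn-gw"],
})

# Bucket events into hourly windows per target and count distinct source countries.
windows = (
    events.set_index("timestamp")
          .groupby([pd.Grouper(freq="1h"), "target"])["source_country"]
          .nunique()
          .rename("distinct_source_countries")
)

# Windows where several countries probe the same target within one hour
# are escalated for review instead of waiting weeks to be noticed.
print(windows[windows >= 3])
```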

While this sounds ideal, it’s also important to consider that some cybersecurity companies may be moving towards machine learning and AI in response to customers who believe it will solve all their security problems. This raises the chances that machine learning algorithms, if not used carefully, might introduce new cybersecurity risks into the systems they are supposed to protect. In the rush to get products to market, there’s a risk that cybersecurity personnel could rely on AI output that still contains anomalous details. If that happens, there’s a strong possibility that the resulting algorithms will ultimately fail to identify threats or attacks. Learn from your ongoing testing; you are never really done!

In addition, people must help determine how an AI handles the information it generates. For this to happen, there must be a degree of transparency, and people must know how decisions are made. In the initial stages of using AI to assess cybersecurity risks, it’s important that everyone involved understands why certain traffic is flagged. When users understand how the system makes decisions, they gain a greater degree of trust. With greater trust comes increased accuracy, and in turn, increased security. For example, an email security system powered by AI might initially flag suspicious messages. Once users confirm that these messages are malicious, the system can move to quarantining them so that they never reach an inbox.
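A minimal sketch of that flag-then-quarantine loop is shown below. The class, scores and thresholds are hypothetical; the point is simply that the system only warns at first, and stronger action is unlocked by analyst confirmation.

```python
# Sketch of a human-in-the-loop email defense: the model flags suspicious
# messages, and quarantine is enabled only after an analyst confirms a verdict.
from dataclasses import dataclass, field

@dataclass
class EmailDefense:
    # Sender domains an analyst has confirmed as malicious.
    confirmed_malicious: set = field(default_factory=set)

    def score(self, message: dict) -> float:
        # Placeholder for the AI model's suspicion score (0..1).
        return 0.9 if "urgent wire transfer" in message["body"].lower() else 0.1

    def handle(self, message: dict) -> str:
        if message["sender_domain"] in self.confirmed_malicious:
            return "quarantine"        # confirmed by a human, act automatically
        if self.score(message) > 0.8:
            return "flag_for_review"   # surface to an analyst, don't block yet
        return "deliver"

    def analyst_confirms(self, sender_domain: str) -> None:
        # Human feedback: future mail from this domain never reaches an inbox.
        self.confirmed_malicious.add(sender_domain)

defense = EmailDefense()
msg = {"sender_domain": "bad.example", "body": "URGENT wire transfer needed"}
print(defense.handle(msg))            # flag_for_review
defense.analyst_confirms("bad.example")
print(defense.handle(msg))            # quarantine
```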

Right now, we’re in a transitional stage in how we leverage and use big data in our security practices. First, we need to understand the data: What will it tell us? What can we get from it? The next step is to train the machine learning model. This lets us scale what we do, so that we can work effectively with the big data. If we take a subset of the data and learn from it, we can then apply what we’ve learned back to the full dataset and develop the AI. This is the best way to get meaningful intelligence out of AI. But none of this can happen without human input. AI might be at the core of our cybersecurity efforts, but there are people on each side, either carrying out attacks or training the AI to defend against them.
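The subset-then-scale workflow can be sketched in a few lines: train on a small slice that analysts have labelled, then apply the model back across the full stream. The synthetic features, labels and threshold below are placeholders, not a real security dataset.

```python
# Sketch: learn from a small labelled subset, then score the full data stream.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Full "big data" stream of events, mostly unlabelled.
full_stream = rng.normal(size=(10_000, 5))

# A small subset analysts have already labelled (1 = malicious), synthetic here.
subset_X = rng.normal(size=(200, 5))
subset_y = (subset_X[:, 0] + subset_X[:, 3] > 1.5).astype(int)

# Train on the labelled subset...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(subset_X, subset_y)

# ...then apply it back to the full stream, escalating only high-risk events.
risk = model.predict_proba(full_stream)[:, 1]
print(f"{(risk > 0.8).sum()} of {len(full_stream)} events escalated for review")
```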

What are the limits of AI? Where can it take us? Right now, we are in the infancy stage. The art of the possible is limited only by our imaginations. And with people involved at all stages, we’ll be able to leverage big data, and AI, to best advantage.

Download the CLX Forum book, Canadian Cybersecurity 2018: An Anthology of CIO/CISO Enterprise-Level Perspectives: http://www.clxforum.org/

CLX Forum

The Cybersecurity Leadership Exchange Forum (CLX Forum) is a thought leadership community created by Symantec.