AI in Cybersecurity: Defenders’ Dream or Nightmare?

Ridge Partner Yousuf Khan’s thoughts from his recent MIT AI Conference panel

Ridge Ventures
RidgeVC
4 min read · Nov 28, 2023

The “AI in Cybersecurity” panel at the 2023 MIT AI Conference

2023 is almost over and AI is still very much the talk of the town. Cybersecurity sits at the forefront of these conversations. Some are wondering who benefits most from the ongoing AI explosion: bad actors, or the companies in their crosshairs.

On a panel at the MIT AI Conference, I tackled this exact topic alongside RSA CEO Rohit Ghai and O’Reilly Media’s Steve Wilson, as well as moderator Rami Elkhatib of Acero Capital.

Here are three key takeaways from our talk, including my prediction for what’s to come.

Gone phishing

Phishing attacks have long been a problem for companies, but generative AI makes these emails even more difficult to catch.

An October report from Egress found that nearly 75% of AI phishing detectors couldn’t determine whether an email was generated by a chatbot or written by a real person. The report highlights that many of these solutions are built on large language models (LLMs) and aren’t as accurate when emails fall below a certain length (about 250 characters). That’s problematic considering nearly 45% of phishing emails fall below the 250-character threshold.

The volume of phishing attacks hasn’t risen, but generative AI has made their sophistication unprecedented. Even the tech titans have their hands full: so far in 2023, Microsoft has failed to detect 25% more phishing emails than it did last year.

Garrett Hamilton and Colt Blackmore, co-founders of Reach Security, know this problem well, and we’re proud to be early investors in the company.

Now for the good news

Fortunately, AI can be used to thwart phishing and other types of fraud.

For example, AI improves internal training, which in turn boosts detection of phishing campaigns. This includes solutions that automatically identify at-risk users and create custom training modules. Individual behaviors improve while organizations collectively become better protected.
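
To make that concrete, here is a toy sketch of how at-risk-user identification might work. Every field name, weight, and threshold below is a made-up assumption for illustration, not any vendor’s actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class SimulationResult:
    user: str
    emails_sent: int  # simulated phishing emails delivered
    clicks: int       # links clicked in those emails
    reports: int      # emails correctly reported to security

def risk_score(r: SimulationResult) -> float:
    """Higher is riskier: clicking raises risk, reporting lowers it."""
    if r.emails_sent == 0:
        return 0.0
    return (r.clicks - 0.5 * r.reports) / r.emails_sent

def users_needing_training(results, threshold=0.25):
    """Return users whose risk score exceeds the (assumed) threshold."""
    return [r.user for r in results if risk_score(r) > threshold]

results = [
    SimulationResult("alice", emails_sent=10, clicks=4, reports=1),
    SimulationResult("bob", emails_sent=10, clicks=0, reports=6),
]
print(users_needing_training(results))  # ['alice'] -> gets custom training
```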

LLMs also help security teams process alerts and translate them into priorities more effectively. Security teams are understaffed and overworked, and data is abundant. LLMs can sift through that data, standardize it, and determine its relevance to security.
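
As an illustration, here is a minimal alert-triage sketch. It assumes a generic call_llm(prompt) helper (hypothetical; wire it to whichever model API your stack uses) and an invented JSON schema for the standardized record.

```python
import json

# Illustrative prompt; the severity scale and fields are assumptions.
TRIAGE_PROMPT = """You are a SOC triage assistant.
Given the raw alert below, respond with JSON containing:
  "summary": a one-sentence plain-English description,
  "severity": one of "low", "medium", "high", "critical",
  "recommended_action": a short next step.

Raw alert:
{alert}
"""

def call_llm(prompt: str) -> str:
    # Placeholder: connect this to your LLM provider of choice.
    raise NotImplementedError

def triage_alert(raw_alert: str) -> dict:
    """Standardize one raw alert into a consistent, prioritized record."""
    response = call_llm(TRIAGE_PROMPT.format(alert=raw_alert))
    return json.loads(response)  # assumes the model returned valid JSON

def triage_queue(alerts: list[str]) -> list[dict]:
    """Triage a batch and sort it so the riskiest alerts surface first."""
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    records = [triage_alert(a) for a in alerts]
    return sorted(records, key=lambda r: order.get(r["severity"], 4))
```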

AI’s positive impact on shorthanded InfoSec teams is evident. Microsoft recently introduced its Security Copilot solution, powered by generative AI, and preview customers are already saving up to 40 percent of their time on security tasks (composing complex queries, summarizing security incident reports, etc.).

On the horizon

Let’s end with some predictions, shall we?

Securing an AI stack is vastly different from securing traditional software, so we will see new companies and solutions created to protect it. I believe this will culminate in the formation of three new “stacks”:

A) Data Security Stack

  • As data spreads, it often becomes scattered, disorganized, and duplicated. Companies need to secure their data by preemptively stopping leakage; a toy sketch of that idea follows this list.
  • This is not a new problem, but it is one that has grown in complexity. That was our thinking when we led the investment in Theom.ai, a solution that provides a holistic view and workflow for prioritizing data security issues.
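
Here is that toy leakage sketch: scan outbound text for sensitive patterns and block anything that matches before it leaves the organization. The patterns are illustrative assumptions; real data security platforms, Theom.ai included, go far beyond simple regexes.

```python
import re

# Illustrative patterns only; a real DLP engine uses classifiers,
# context, and data lineage rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def find_leaks(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in the payload."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        hits.extend((name, m.group()) for m in pattern.finditer(text))
    return hits

def allow_egress(text: str) -> bool:
    """Permit the message only if no sensitive pattern is detected."""
    return not find_leaks(text)

print(allow_egress("Quarterly numbers attached."))          # True
print(allow_egress("My SSN is 123-45-6789, please help."))  # False
```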

B) Trust and Compliance Stack

  • Increased AI regulation will require companies to build and deploy compliant AI models. That opens up a big market opportunity for solutions that ensure organizations stay safe and meet regulatory standards.
  • As a former CIO and CISO with responsibility for data security and compliance, my view is that enterprises will come to require a fully developed trust and compliance stack.
  • Within data compliance, one issue that will only grow is data residency. Peter Yared saw this early when we first connected, and he went on to start InCountry. Data residency will have to be solved with a software stack covering every facet of the problem, and we are excited to be part of the InCountry journey; a simplified residency-routing sketch follows this list. Companies like Horizon3, led by Snehal Antani, are using AI to move companies off outdated methods and uplevel their security posture.
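
And here is the promised residency sketch: route each record to an in-country store based on where the user lives. The region names, mappings, and store layout are invented for illustration; this is the shape of the problem, not InCountry’s implementation.

```python
# Hypothetical residency rules: country code -> compliant storage region.
RESIDENCY_RULES = {
    "DE": "eu-frankfurt",  # EU data stays in the EU
    "RU": "ru-moscow",     # Russian law requires local storage
    "US": "us-east",
}
DEFAULT_REGION = "us-east"

def storage_region(country_code: str) -> str:
    """Pick the storage region that satisfies the user's residency rule."""
    return RESIDENCY_RULES.get(country_code.upper(), DEFAULT_REGION)

def write_record(user_country: str, record: dict, stores: dict) -> None:
    """Route the write to the compliant regional store."""
    stores[storage_region(user_country)].append(record)

stores = {"eu-frankfurt": [], "ru-moscow": [], "us-east": []}
write_record("de", {"name": "example"}, stores)
assert stores["eu-frankfurt"] == [{"name": "example"}]
```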

C) Model Security Stack

  • It’s never been more important to stop bad actors from harming AI models and their overall functionality. Companies also need to prevent attackers from tampering with how these models operate, and to secure model data and the proprietary information within it; one simple input-screening sketch follows this list.
  • I’ve used the analogy of how new issues, tools, processes, and disciplines are created on the back of major technology shifts, and AI is one such shift. Securing an AI platform is far different from securing a web application. The threats may look similar today, but they will diverge quickly, and companies will need to adopt new solutions to protect themselves.
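
Here is the input-screening sketch mentioned above: a deliberately simple, heuristic filter for known prompt-injection phrases, run before user input reaches a model. The phrase list is an illustrative assumption; production defenses layer many controls (classifiers, output checks, least-privilege tool access) on top of something like this.

```python
# Toy prompt-injection screen. The marker phrases below are assumptions
# for illustration, not an exhaustive or vendor-backed list.

INJECTION_MARKERS = [
    "ignore previous instructions",
    "ignore all prior instructions",
    "disregard your system prompt",
    "you are now",  # common persona-hijack opener
]

def looks_like_injection(user_input: str) -> bool:
    """Flag input containing a known injection phrase (case-insensitive)."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def guarded_prompt(user_input: str) -> str:
    """Refuse obviously hostile input; pass everything else through."""
    if looks_like_injection(user_input):
        raise ValueError("potential prompt injection detected")
    return user_input

print(looks_like_injection("What's our patching policy?"))           # False
print(looks_like_injection("Ignore previous instructions and ..."))  # True
```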

Are you a CIO, CISO, or founder and want to talk more about security solutions for AI? Drop me a line!
