Generative AI Will Change the Cyber Landscape

Oleg Parashchak
Published in Forinsurer
5 min read · Apr 18, 2024

Lloyd’s report on the transformation of the cyber landscape explores how GenAI could be used by threat actors and cyber security professionals, and highlights its potential impacts on cyber risk.

Major leaps in the effectiveness of Generative AI and Large Language Models have dominated the discussion around artificial intelligence. Given its growing availability and sophistication, the technology will inevitably reshape the cyber risk landscape.

Due to the rapidity of advances in AI research and the highly dynamic nature of the cyber environment, analysis of the consequences these tools may have for cyber perils has so far been limited.

Beinsure Media summarises the key highlights from the report: the Large Language Model landscape, the transformation of cyber risk, the considerations for business and insurance, and the ways in which Lloyd’s will take action to develop solutions that build greater cyber resilience.

Lloyd’s has been exploring the complex and varied risks associated with AI since developing the world’s first autonomous vehicle insurance in 2016.

Lloyd’s is committed to working with insurers, startups, governments and others to develop innovative products and intelligent policy guiderails that can support the development of this important technology and create a more resilient society (see Artificial Intelligence Becomes an Unexpected Risk for Insurance).

Artificial Intelligence and Large Language Models

Approximately six years ago, Google Research published a seminal paper introducing a novel algorithm for encoding, representing, and accessing sequential data with complex structure. This architecture, dubbed the ‘Transformer’, would come to underpin almost all language, vision, and audio-based generative machine learning approaches by 2023.
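
As a concrete illustration of that idea, the sketch below (a minimal example assuming PyTorch; the dimensions are illustrative and not taken from the report) shows scaled dot-product self-attention, the core operation of the Transformer, in which every position in a sequence is encoded with reference to every other position.

```python
# Minimal sketch of scaled dot-product self-attention, the Transformer's core operation.
# Dimensions are illustrative; real models stack many such layers with learned projections.
import torch

def scaled_dot_product_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Weight each position's value by the similarity of its key to every query."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)   # pairwise similarities
    weights = torch.softmax(scores, dim=-1)                    # attention distribution per position
    return weights @ v                                         # mix values accordingly

seq_len, d_model = 6, 16
x = torch.randn(seq_len, d_model)                # a toy sequence of 6 token vectors
out = scaled_dot_product_attention(x, x, x)      # self-attention: q, k, v from the same sequence
print(out.shape)                                 # torch.Size([6, 16])
```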

Early models (circa 2018) were limited in their capability, but progressive scaling of computing power and dataset size produced rapid advances, culminating in ChatGPT, a web portal interface to GPT-3.5 released less than 18 months ago (November 2022) to considerable public interest and concern (see How Can AI Technology Change Insurance Claims Management?).

Since then, notable events include the release of GPT-4 (March 2023), which exhibits capability similar to humans across a battery of benchmarked tasks; Google’s Bard (March 2023); and completely open-source equivalents from Meta (March and July 2023).

AI model governance, financial barriers, guard-rails

The rise of powerful generative models brings with it tremendous opportunity for innovation, but also introduces significant risks of harm and misuse.

It is fair to ask why, despite the claimed advanced capabilities of these tools, few material impacts on the cyber threat landscape seem to have occurred. The answer so far is that the industry’s focus on safety, together with economic considerations, has prevented widespread misuse (see How Does AI Technology Impact on Insurance Industry?).

“AI Safety” is a term without a consensus definition, referring to several related and interlinked areas, which can be classified into three broad categories:

  • (A) Autonomy and advanced capabilities — calls for oversight and control of “systems which could pose significant risks to public safety and global security”
  • (B) Content generated by the models — potentially leading to issues with privacy, copyright, bias, disinformation, public over-reliance, and maybe more
  • (C) Malicious use of the models — leading to harm or damage for people, property, and tangible and intangible assets

The first sense of AI Safety (A) has received increasing attention in 2023, with governments allocating resources to understanding the risks involved. The UK government has created a ‘Frontier AI Taskforce’ consisting of globally recognised industry experts, tasked with advising and shaping policy pertaining to the creation of powerful models.

However, despite the growth in interest and investment, the nature and extent of the risk posed by these systems is still unclear.

Due to the lack of quantitative or even qualitative information on this topic, it will not be considered further in this report, but it is an area that will be important to monitor as the situation develops.

Research labs, commercial enterprises, and policymakers have focused on understanding safety in the sense of (B), which relates mainly to issues around bias, privacy, fairness, and transparency. These are all serious issues that are rightly the subject of active research and that have potential consequences for policies beyond Cyber or Tech E&O, but they are beyond the scope of this report.

The remaining pressing concern is the question linked to (C): “How can the risk of harm or damage arising from human actors intentionally using these models maliciously be mitigated?” Broadly, three mechanisms have underpinned the safety apparatus curtailing malicious use of Generative Artificial Intelligence technology.

Key elements of model governance for enterprises and research groups producing LLMs include:

  • Output artifacts of the model training process (known as model ‘weights’) are kept secret and not released to the public; possessing the model code and having access to the computing hardware alone is insufficient to run the models. The weights are kept as closed as possible both to create a commercial and regulatory moat and to prevent misuse (see the sketch after this list)
  • Model training and inference (serving requests) takes place on private computing infrastructure, with internal details opaque to end users
  • Setting and following rules for monitoring AI models in areas like quality, performance, reliability, security, privacy, ethics, and accountability
  • Application of governance principles throughout the entire lifecycle of AI models: training, analysis, release (if applicable), deprecation
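
As a rough illustration of the first point, the sketch below (assuming PyTorch; the model class and the weights file name are hypothetical) shows that the published architecture code on its own yields only a randomly initialised network, while the trained weights are a separate, guarded artifact.

```python
# A minimal sketch (assuming PyTorch) of why model code without weights is not the model.
# "frontier_model_weights.pt" is a hypothetical file name used for illustration only.
import torch
import torch.nn as nn

class TinyLanguageModel(nn.Module):
    """A toy stand-in for a large generative model's published architecture code."""
    def __init__(self, vocab_size: int = 32000, d_model: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.block = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)   # token ids -> vectors
        x = self.block(x)           # one transformer layer
        return self.head(x)         # vectors -> next-token logits

model = TinyLanguageModel()  # randomly initialised: the architecture alone produces gibberish

try:
    # The trained weights are the guarded artifact; without this file, having the
    # code and the hardware is not enough to reproduce the model's capability.
    state_dict = torch.load("frontier_model_weights.pt", map_location="cpu")
    model.load_state_dict(state_dict)
    print("Weights loaded: the model can now be served.")
except FileNotFoundError:
    print("No weights available: only a randomly initialised architecture.")
```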

The key outcome of model governance is preventing the public from having oversight-free access to emerging disruptive technologies until adequate safety controls can be enacted: technological, regulatory, legal, or otherwise (see Key Benefits of Innovative AI & Machine Learning Technologies).

Costs and hardware requirements for training and running large models

The process of training, fine-tuning, or performing inference with large generative models is computationally intensive, requiring specialised computing hardware components.

Inference tasks on these models have less exorbitant requirements but, until recently, still required access to a datacentre, at costs prohibitive for most threat actors.
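
A back-of-the-envelope estimate illustrates why: the memory needed simply to hold a model's weights scales with its parameter count. The parameter counts and 16-bit precision below are illustrative assumptions, not figures from the report.

```python
# Back-of-the-envelope estimate of accelerator memory needed just to hold model weights.
# Parameter counts and 16-bit precision are illustrative assumptions, not report figures.
def inference_memory_gb(n_parameters: float, bytes_per_parameter: int = 2) -> float:
    """Memory for the weights alone, ignoring activations and the KV cache."""
    return n_parameters * bytes_per_parameter / 1e9

for name, params in [("7B-parameter model", 7e9),
                     ("70B-parameter model", 70e9),
                     ("175B-parameter model", 175e9)]:
    print(f"{name}: ~{inference_memory_gb(params):.0f} GB for weights alone")

# A 175B-parameter model needs roughly 350 GB in 16-bit precision, far beyond a single
# consumer GPU, which is why serving such models has required datacentre-class hardware.
```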

However, recent developments have driven these costs down, as will be discussed in the next section.

The consequence of this has been that the public, including small research labs and universities, cannot train or run their own large models and is restricted to much less capable versions. All access to ‘frontier-grade’ generative models has been through the large labs (OpenAI, Meta, Anthropic, Google), and is subject to their strict governance, oversight, and safeguards.


FULL Report — https://beinsure.com/generative-ai-changes-cyber-insurance/


Oleg Parashchak
Forinsurer

CEO & Founder – Beinsure.com and Forinsurer.com → Digital Media: Insurance | Reinsurance | InsurTech | Blockchain | Crypto