Summarizing the OWASP AI security for LLM

Rachana Gupta
Securing AI
Published in
3 min readFeb 22, 2024

Like everyone else I am also feeling overwhelmed by AI content these days. There are so many trainings, models, point of views around AI that my head is almost exploding. But I try to read atleast few by OWASP, ISO, government regulations etc..

Here the original OWASP document : LLM AI security

Source: Above link

Large Language Models (LLMs) encounter significant challenges. One major issue is that their control and data functions can’t be easily separated. Another problem is that LLMs don’t always give the same answer when asked the same question, which can affect how reliable they are. Instead of just looking for specific words, LLMs search for meaning in what they’re asked. This can lead to mistakes called hallucinations, where the model creates incorrect information based on its training. There are ways to make LLMs more reliable and secure, but they often come with trade-offs in cost and how well the model works. Using LLMs also increases the risk of cyberattacks for organizations. Attackers can use LLMs to create new malware or phishing scams more easily, and to make fake videos or audio recordings. This makes it harder for organizations to defend themselves. However, not using LLMs also has risks, like falling behind competitors, being less efficient, and making more mistakes. It’s important for organizations to understand these risks and benefits to decide how to use LLMs effectively for their business goals.

Checklist for security:

Adversarial Risk:

  • Scrutinize competitors’ AI investments for potential business impacts.
  • Update security measures like password resets to counter GenAI-enhanced attacks.

Threat Modeling:

  • Anticipate “hyper-personalized” attacks with Generative AI and LLM-assisted Spear Phishing.
  • Assess risks from GenAI targeting customers and ensure detection of malicious inputs.

AI Asset Inventory:

  • List AI services/tools, including data sources and sensitivity levels.
  • Incorporate AI components into software inventory and conduct red teaming for risk assessment.

AI Security and Privacy Training:

  • Engage employees to address concerns about LLM initiatives.
  • Provide training on ethics, legal issues, and GenAI-related threats, including spear phishing.

Establish Business Cases:

  • Building solid business cases is crucial for understanding AI’s value and risks.
  • Examples include enhancing customer experience, improving efficiency, and enabling innovation.

Governance:

  • Corporate governance ensures transparency and accountability in LLM usage.
  • Identifying knowledgeable AI owners is crucial to prevent digital process disruptions.
  • Establishing AI responsibilities, documenting risks, and enforcing data policies are essential steps for effective governance.

Legal:

  • Legal partnerships are essential for addressing AI’s undefined legal implications.
  • Actions include reviewing warranties, updating terms, and addressing intellectual property concerns.

Regulatory:

  • The EU AI Act, expected in 2025, will be the first comprehensive AI law, while the GDPR indirectly impacts AI through data rules.
  • US AI regulation varies across states, with ten having passed or pending laws by 2023, and federal agencies closely monitor hiring fairness.
  • Compliance steps include understanding country-specific AI laws, reviewing vendor compliance, and ensuring fairness and data protection in AI-based hiring tools.

Model cards and risk cards are important for explaining and managing Large Language Models (LLMs). Model cards give basic information about the model and how it’s used. They also talk about things like how the model was trained and how well it performs. Risk cards talk about possible problems with the model, like biases or privacy issues. They help people use the model safely and responsibly. These cards are made by the people who create the models and help everyone understand and trust them better.

--

--