A First Glance at the EU AI Act: Assessing Risks from a Security Perspective at Trendyol

Gökçenur Yazıcı
Trendyol Tech

--

In our previous blog post, we explored AI Security and delved into the details of our Risk-Based Artificial Intelligence Security at Trendyol. As AI continues its rapid development, the EU’s landmark AI Act has come into effect. On March 13, 2024, the European Union Parliament (after some discussions and debates) approved the world’s first comprehensive set of rules governing the use of artificial intelligence (AI). This landmark decision, which received a broad consensus of 523 votes in favor (out of 618), places Europe at the forefront of setting global standards for AI regulation.

The EU AI Act: Defining AI and Risk

The definitions provided by the EU AI Act for “AI” and “risk” are the cornerstone of the process. The Act delineates these crucial concepts as follows:

AI: A machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

Risk: Combination of the probability of an occurrence of harm and the severity of that harm.

We went into the details of risk in our previous blog post on Risk-Based Artificial Intelligence Security at Trendyol. To briefly examine the risk-based approach of the EU Artificial Intelligence Act:

The EU AI Act’s Risk-Based Framework

The EU Act categorizes AI systems into four risk levels:

  • Unacceptable Risk: AI systems posing a clear threat (e.g., social scoring, voice assistants promoting dangerous behavior) are banned.
  • High Risk: AI in critical areas (e.g., infrastructure, education, safety) faces strict regulations.
  • Limited Risk: Less transparent AI (e.g., chatbots, AI-generated content) requires transparency measures.
  • Minimal or No Risk: Most AI systems (e.g., video games, spam filters) face minimal restrictions.
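As a rough illustration of how these four tiers could be tracked in an internal system, the sketch below maps hypothetical use-case tags to the Act's risk levels. The tags, the mapping, and the minimal-by-default rule are assumptions made for illustration, not the Act's own taxonomy:

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict regulation
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unrestricted

# Hypothetical mapping of internal use-case tags to EU AI Act tiers,
# loosely following the examples in the list above.
RISK_BY_USE_CASE = {
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "critical_infrastructure": RiskLevel.HIGH,
    "education": RiskLevel.HIGH,
    "chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    # Unlisted use cases default to MINIMAL; a real process would
    # instead flag them for manual review.
    return RISK_BY_USE_CASE.get(use_case, RiskLevel.MINIMAL)
```

In practice the interesting cases are the unlisted ones, which is why a real inventory would route unknown tags to a human reviewer rather than silently defaulting.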

Understanding Risk Levels

The AI Act is a very important guide for us to understand risk levels. To delve deeper into the AI Act, it is crucial to understand both unacceptable risks and high risks.

Systems classified as “Unacceptable Risk AI Systems” are strictly prohibited due to the potential harm they pose.

  • Behavioral manipulation aimed at influencing choices.
  • Exploitation of vulnerable groups based on age or disability.
  • Biometric categorization for sensitive classifications.
  • Social scoring based on personality assessments.
  • Predictive policing.
  • Real-time biometric identification (with specific exceptions).
  • Wide-scale facial recognition and data scraping.
  • Emotional-inferencing systems without medical or safety purposes.

Article 5: Prohibited Artificial Intelligence Practices goes into greater detail.

Developers of artificial intelligence systems defined in Annex III are required by the legislation to undergo a rigorous documentation process so that their products can be accurately assessed and classified as “High risk”.

  • High-risk AI systems are those that are integrated as safety components into products subject to specific EU regulations, which are listed in Annex II.
  • AI applications related to the scenarios outlined in Annex III are considered high-risk, with a few notable exceptions.
  • The legislation also mandates that developers of high-risk AI systems must provide detailed documentation on the system’s design, development, and testing processes. Additionally, these developers must ensure that their products comply with all relevant safety and data protection requirements outlined in the legislation.

Key Points of EU AI Act for Security Perspective at Trendyol

Throughout the evolution of AI, we will undoubtedly experience both security challenges and advances in artificial intelligence. While navigating this landscape, numerous critical considerations emerge.

Our primary focus has been on decreasing the risks associated with existing processes and implementing mitigation strategies. To demonstrate this, we can use an example process. Generative AI systems such as OpenAI GPT, Google Gemini, and Anthropic Claude have the potential to transform content creation and design. However, the EU AI Act will have a substantial impact on the development and application of generative AI. Given these consequences, it is critical to identify our risks and apply mitigation strategies. To detect and manage the systemic risks connected with our models, for example, we must conduct thorough self-assessments. Within this framework, comprehending and recognizing risks is essential.

High-risk generative AI applications may be subject to additional regulatory requirements, limiting their use in specific circumstances. Secure approaches, such as using encrypted or anonymised datasets, must be developed to disclose training data without compromising intellectual property.

1. Checking the results of anonymised training data is critical for maintaining anonymity; removing any connection to real people considerably reduces our risk.

2. Labeling AI-generated content as artificially generated or altered is required to avoid the spread of misinformation.

3. Identifying the extent of data content during live operations is critical.

4. Masking is effective in limiting hazards when using large language models such as GPT-3 via API, since regular expressions can restrict sensitive data and prevent it from being shared with third parties.
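The masking step described above can be sketched as a small regex-based filter applied before a prompt leaves our systems. The patterns and the `mask` helper below are hypothetical examples for illustration, not Trendyol's actual implementation:

```python
import re

# Hypothetical patterns for data we never want to send to a third-party
# LLM API: e-mail addresses and 11-digit national ID numbers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "NATIONAL_ID": re.compile(r"\b\d{11}\b"),
}

def mask(text: str) -> str:
    """Replace each sensitive match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

prompt = "Refund order for jane.doe@example.com, ID 12345678901."
# Only the masked prompt would be sent to the external API.
print(mask(prompt))  # → Refund order for <EMAIL>, ID <NATIONAL_ID>.
```

Regex masking only catches patterns you anticipated; in practice it would be one layer alongside access controls and output review, not a complete safeguard on its own.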

We must also establish procedures for reporting major incidents and conduct frequent tests and evaluations. To safeguard our models from unwanted access and modification, we must put strong cybersecurity protections in place.

Finding the Right Balance:

Finding a balance between specificity and flexibility is critical. Overly strict standards can inhibit creativity and slow growth, while overly broad norms can leave flaws unaddressed. Mitigating high-risk scenarios and implementing security measures via well-thought-out strategic plans promote progress while protecting against potential risks. Furthermore, good communication within the team promotes a culture of security awareness and creativity.

Operating Within a Framework:

Understanding the present level of AI compatibility is critical. Maintaining an updated inventory and evaluating the framework’s applicability are crucial for fast-growing businesses like Trendyol. Tailoring processes and identifying threats using targeted inquiries unique to each project or domain ensures a proactive approach to security.
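A minimal sketch of what one entry in such an inventory might look like, together with targeted review questions, is shown below. The fields, questions, and review rule are hypothetical and do not reflect Trendyol's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI inventory."""
    name: str
    owner_team: str
    risk_level: str          # e.g. "high", "limited", "minimal"
    uses_personal_data: bool
    last_review: str         # ISO date of the latest security review

# Targeted questions asked per project before a record is accepted.
REVIEW_QUESTIONS = [
    "Does the system process personal or sensitive data?",
    "Is it a safety component of a regulated product (Annex II)?",
    "Does its use case appear in Annex III?",
    "Are incident-reporting and rollback procedures in place?",
]

def needs_review(record: AISystemRecord, cutoff: str) -> bool:
    # Flag high-risk systems whose last review predates the cutoff date.
    # ISO-format dates compare correctly as strings, which keeps the
    # illustration dependency-free.
    return record.risk_level == "high" and record.last_review < cutoff
```

Keeping the record structure small and the questions explicit makes it cheap to re-run the applicability check whenever a project changes scope.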

Adherence to Defined Security Standards:

The EU Artificial Intelligence Act emphasizes the significance of balancing innovation and security. A dynamic, risk-based security architecture is constantly evolving to encourage responsible AI development and deployment. This includes conducting regular evaluations and audits to identify potential vulnerabilities and ensure compliance with regulatory obligations. By taking a proactive and adaptable approach, we can navigate the complex landscape of AI technology, mitigating risks while seizing opportunities for growth.

For further insights into Risk-Based Artificial Intelligence Security at Trendyol, visit: https://medium.com/trendyol-tech/risk-based-artificial-intelligence-security-at-trendyol-64bbdb972722

To delve deeper into the AI Act, explore: https://artificialintelligenceact.eu/

We’re building a team of the brightest minds in our industry. Interested in joining us? Visit the pages below to learn more about our open positions.
