Risk-Based Artificial Intelligence Security at Trendyol

Gökçenur Yazıcı
Trendyol Tech
7 min read · Jun 6, 2023

As artificial intelligence (AI) continues to advance and permeate various industries, it brings with it both tremendous potential and inherent risks. AI, in its simplest terms, refers to the ability of machines to simulate human intelligence and perform tasks that typically require human cognitive abilities. While AI offers numerous benefits, such as automation and improved decision-making, it also poses risks that need to be carefully managed.


In this article, we will explore the risks of AI and then delve into the details of the risk-based Trendyol AI Security Framework.

Risks of AI in Security

Artificial intelligence (AI) has advanced quickly and found use in a variety of fields, including security, AI-powered chatbots, e-commerce, and many more.

While AI significantly improves and streamlines a number of industries, it is crucial to handle the risks and challenges that come with its adoption. These include decision manipulation, incorrect outputs, misinterpretation of personal data, and algorithmic bias. By being aware of these risks and proactively mitigating them, we can ensure the safe and ethical deployment of AI technologies, maximizing their benefits while reducing potential drawbacks.

Let’s examine the risks:

  • Incorrect Outputs: AI may produce incorrect outputs due to various sources, including biased data, errors in algorithms, or incomplete training data, leading to erroneous results.
  • Misinterpretation of Personal Data: AI can misinterpret personal data and lead to unintended consequences such as disclosing sensitive information. For example, a chatbot may reveal personal data if not secured properly.

“AI should only support human decision-making, not replace it.”

  • Bias Based on Social or Cultural Factors: AI-powered systems may reflect bias based on social or cultural factors in the underlying data or algorithms, leading to unfair treatment of certain individuals or groups.
  • Manipulation: AI-powered ads and content moderation can be manipulated to favor certain outcomes or views, creating inequality and suppressing groups.
  • Collection and Use of Personal Data: AI-powered systems can collect and use personal data for various purposes, which raises privacy concerns. This data can include personal information such as names and addresses.

“Ensure AI study on Personal/Sensitive Data doesn’t discriminate based on ethnicity, gender, religion & is tested before use.”

  • Data Security: AI systems can increase data risks. For instance, an AI medical diagnosis tool can collect sensitive patient data vulnerable to hacks. Likewise, an AI fraud detection tool can collect financial data that may be misused.
  • AI-Powered Phishing & Vishing Attacks: AI-powered phishing attacks have already started. AI is also being used to recreate the voices of family members in order to scam people out of thousands of euros.
  • Writing Malicious Code: Manipulating chatbots or other AI tools is certainly possible, and with enough creative poking and prodding, bad actors may be able to trick an AI into generating hacking code. In fact, hackers are already scheming to this end.

It must be remembered that being aware of risks and taking the steps needed to manage them is critical to the continued effectiveness of our efforts. Understanding the potential risks presented by AI technology enables us to identify vulnerabilities and put the required security controls in place. By doing this, we can mitigate the damage these risks might cause.

Trendyol AI Security Framework

The intended use of the AI Security Framework is to assess and manage the security risks associated with the development, deployment and operation of artificial intelligence (AI) systems. The AI Security Framework was designed by the Infosec Team to help organizations identify potential risks and vulnerabilities specific to AI technologies and implement appropriate security measures to mitigate these risks. Within the framework, there are 5 Main Domains, 10 Control Domains, more than 30 Control Questions and Recommended Methods To Comply With Controls related to these questions. The framework is risk-based and has been developed in accordance with the company’s risk methodology.

It is designed to be convenient and easy to use within the company and consists of 5 sections.

  1. Identification of Processes: The framework includes the necessary controls and control questions for AI security. In this step, you should analyze your organization’s processes and workflows to determine which controls are required and how to implement them.
  2. Determining the Status: The framework provides recommended methods for complying with controls. In this step, you should assess the current status of your organization for each control and determine how effectively the control is being implemented. Based on this assessment, an appropriate status can be assigned to each control.
  3. Risk Scoring: The framework provides a risk methodology. This methodology evaluates and scores risks based on probability, severity, or impact. By applying this methodology, you can identify and score potential risks associated with your AI systems.
  4. Mitigation Plans: Focusing on risks with high scores will be necessary based on the risk assessment. In this step, you should determine the measures to reduce the risk score and create a risk mitigation plan. These plans should include the steps required to mitigate potential risks and implement security measures.
  5. Regular Monitoring: The AI Security Framework is constantly updated. Depending on changes in your systems, security threats, and advances in technology, you may need to update the framework and its risks.
Chart: how to use the Trendyol AI Security Framework (by the author)
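To make the risk-scoring step (step 3) concrete, here is a minimal sketch in Python. The 1–5 scales, the probability × impact formula, and the level thresholds are illustrative assumptions, not the actual Trendyol risk methodology.

```python
def risk_score(probability: int, impact: int) -> int:
    """Score a risk as probability x impact, each rated 1 (low) to 5 (high)."""
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("probability and impact must be rated 1-5")
    return probability * impact


def risk_level(score: int) -> str:
    """Map a numeric score to a coarse level used to prioritise mitigation.

    The cut-offs (15 and 8) are assumed values for illustration only.
    """
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"


# Example: a control gap that is likely (4) and severe (5) scores 20 -> "high",
# so it would be prioritised in the mitigation plans of step 4.
score = risk_score(4, 5)
print(score, risk_level(score))
```

Scoring every control gap this way yields the ranked list that the mitigation-planning step works from.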

When we look at the control headings presented under Identification of Processes, the 5 main domains we mentioned and their contents are generally as follows:

  1. Safe: The main domain Safe consists of three controls: Explanations and Documentation of Risks; Responsible Decision-Making; and Responsible Design, Development, and Deployment.
  • Explanations and Documentation of Risks: enhancing the safe operation of AI systems involves providing explanations and documentation of risks based on empirical evidence of past incidents.
  • Responsible Design, Development, and Deployment: ensuring the safe operation of AI systems requires implementing responsible practices throughout their design, development, and deployment stages. By adhering to these practices, the risk of endangering human life, health, property, or the environment can be minimized. This involves considering potential hazards and taking appropriate measures to mitigate them, as well as prioritizing safety in system functionalities and behaviors.
  • Responsible Decision-Making: the safe operation of AI systems is contingent on responsible decision-making by both deployers and end users. AI should only support human decision-making, not replace it.

2. Valid and Reliable: Accuracy is defined under the main domain Valid and Reliable. Accuracy and robustness both contribute to the validity and trustworthiness of AI systems, and the two can be in tension with one another.

  • Accuracy: the purpose is to ensure high-quality, representative data for training and inference. This involves improving data collection and cleansing, establishing reliable ground-truth sources, monitoring system performance, incorporating user feedback, communicating uncertainties, and minimizing biases and errors. The goal is to maintain accuracy, transparency, and fairness in the AI system’s outputs.

3. Privacy-Enhanced: This domain aims to ensure the compliance, confidentiality, and protection of personal/sensitive data.

  • Privacy in AI: this includes signing contracts and NDAs for third-party data sharing, restricting the sharing of unmasked data, storing data internally, implementing robust data governance practices, avoiding discrimination in AI studies, ensuring transparent communication and consent, staying compliant with privacy laws and industry standards, collaborating with legal teams, and providing mechanisms for privacy rights and redress.
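The "restricted sharing of unmasked data" idea can be illustrated with a tiny masking pass that redacts obvious personal identifiers before text leaves the organization. This is a simplified sketch: the regexes below are assumptions for illustration, and a real pipeline would rely on a vetted PII-detection tool rather than two hand-written patterns.

```python
import re

# Simplified, assumed patterns for emails and phone-like numbers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d \-]{8,}\d")


def mask_pii(text: str) -> str:
    """Replace emails and phone-like numbers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text


print(mask_pii("Contact Ayse at ayse@example.com or +90 555 123 4567."))
```

Applying such masking before data is shared with third parties (or used in AI training) supports the restricted-sharing control above.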

4. Secure and Resilient: This domain aims to enhance the trustworthiness, reliability, and robustness of AI systems, protecting sensitive information and ensuring the systems can operate securely and effectively in various scenarios.

  • Identify: this includes tracking and updating asset information, establishing a comprehensive asset management system, safeguarding intellectual property, and implementing software asset management practices. The goal is to ensure accurate asset information, protect intellectual property, and effectively track and manage software components.
  • Protect: the purpose is to manage identity and access control, protect data security in AI systems, and ensure compliance with legal and regulatory requirements.
  • Detect: the purpose of detecting and responding to anomalies and events within a system or dataset, and of implementing security continuous monitoring (SCM) for AI systems, is to ensure the ongoing security and integrity of those systems. Anomalies and events are abnormal or unexpected occurrences or patterns that deviate from the norm, including data points, observations, or behaviors that differ significantly from the majority or from expected behavior. By monitoring and addressing these anomalies, an organization can mitigate potential risks, prevent security breaches, and maintain the proper functioning of its AI systems.
  • Respond: the purpose is to develop a response and recovery plan for security incidents or disruptions in AI systems.
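The Detect control above can be sketched with a basic statistical anomaly check: flag observations that deviate strongly from a baseline window of normal behaviour. Real SCM pipelines are far richer; the 3-standard-deviation threshold and the latency data below are illustrative assumptions.

```python
import statistics


def anomalies(baseline, observations, z_threshold=3.0):
    """Return observations more than z_threshold standard deviations
    from the baseline mean. The threshold of 3.0 is an assumed default."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observations if abs(x - mean) > z_threshold * stdev]


# Example: request latencies in milliseconds. A sudden 500 ms spike
# deviates far from the ~100 ms baseline and is flagged.
baseline = [100, 102, 98, 101, 99, 103, 97, 100]
print(anomalies(baseline, [101, 99, 500]))
```

Flagged values would then feed the Respond control: triggering the response and recovery plan rather than being silently logged.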

5. Accountable and Transparent: This domain supports meaningful transparency, providing access to appropriate levels of knowledge according to the stage of the AI lifecycle and the role or expertise of the AI actors or people interacting with or using the AI system.

In conclusion, it is critical to continually update and adapt the framework to handle new security risks as artificial intelligence systems evolve. The impact of AI on many facets of our lives must be acknowledged, and risk-based assessments have to keep pace with these developments in technology. By remaining proactive and paying attention to advances in the AI landscape, we can ensure the continued productivity and security of AI technologies.

We’re building a team of the brightest minds in our industry. Interested in joining us? Visit the pages below to learn more about our open positions.
