Generative AI: a blessing or a curse for security?

Dealing with security issues brought by AI

Akshata Shelar
Globant
7 min read · Sep 26, 2023


Co-Author: Parijat Sardesai

Generative AI is a type of AI that generates content such as text, images, and videos using patterns and data it has learned from training.

Nowadays, Generative AI platforms are gaining massive popularity, and we can see rapid adoption of this cutting-edge technology. The primary reasons are ease of use, faster development, and high-quality output.

These platforms analyze our content for better suggestions and self-training, which can result in data privacy and security issues. Common problems include data breaches, deepfakes, and a rise in phishing attacks.

This article explores common security concerns associated with Generative AI platforms. We will also cover some best practices to avoid security breaches. Additionally, we will discuss how these platforms can help us enhance application security.

Security Concerns with Generative AI

Generative AI is already shaping the future across various businesses. Embracing this powerful technology can bring about various data privacy and security implications. Let’s discuss some common problems.

  • User data collection: The most prominent example of Generative AI is ChatGPT, a single point of access to information and guidance on various topics. ChatGPT’s privacy policy indicates that the platform collects user data, including IP addresses, browsing activities, and browser details. This data can potentially be used to track users’ search activities, which creates a risk of misuse of their personal information.
  • Compromising data privacy: The most critical part of interacting with these platforms is the prompt, the query users feed in to generate various forms of output. While trying these platforms, individuals might provide sensitive information, and the tools may store it. As a general human tendency, if a tool gives accurate results, we don’t mind sharing sensitive details to get the work done. As these platforms gain immense popularity, people are curious to try them, and in the process may input confidential information such as corporate client details, patient records, or competitor data, compromising data confidentiality.
  • Data breach: Data is the main driver of Generative AI’s effectiveness; these platforms need vast amounts of it for training. The data we share via prompts can be hacked or leaked by unauthorized parties, and if it is not handled properly during processing, it can be misused by bad actors. Without security features such as access control and data encryption, unauthorized individuals can gain access to users’ confidential data.
  • Phishing attacks: Phishing attacks disguise malicious communications as messages from trustworthy entities, such as banks. Most of us have received at least one email or message that says, “You have won a prize worth XXX amount”. Such messages contain a call to action that can easily compromise the user’s system. Cybercriminals can exploit Generative AI platforms’ ability to generate content on virtually any topic, producing fraudulent content with remarkable accuracy that is challenging to identify. That, in turn, can lead to a rise in phishing attacks.
  • Deepfakes: One of Generative AI’s most powerful capabilities is generating a wide variety of content. A deepfake is an image, audio clip, or video produced to look or sound realistic. With new Generative AI tools, it is easy to create videos and recordings using the voices and faces of well-known personalities, which can spread fake content and fuel misinformation.
  • New threats and cyber attacks: Attackers can use Generative AI to create unique, sophisticated threats, phishing tactics, and other cyber risks. The variety of threats, and the speed at which they can be produced, is unprecedented. This can lead to data breaches, reputational damage, and financial losses.
  • Copyright infringement: Copyright has always been a concern for businesses. Generative AI models are trained on vast amounts of internet data; to create an image, video, or software code from a given prompt, they draw on that training data. The output can therefore reproduce someone’s original creation without attributing it to them.

How to avoid security risks

Generative AI is a technology; like any other technology, it can be used productively or destructively. Businesses across industries are using Generative AI platforms for better results. We should leverage the benefits, but at the same time, the industry should identify how to use this potential correctly.

Here are some proactive measures we can follow while using these tools. These will help us protect our data and identity.

Limit the data you share

Refrain from feeding Generative AI tools your source code or proprietary information to find a solution, as your or your client’s data can be at risk.

For example, if you want to create a presentation for a specific client, do not include the client’s name or other details in the prompt. Instead, query with anonymized data, such as the client’s industry domain, to get the template; client-specific information can be added manually later. In this way, we prevent data exposure.
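The anonymization step above can be automated before a prompt ever leaves your machine. Below is a minimal Java sketch; the class name, placeholder tokens such as "[CLIENT]", and the example strings are all illustrative, not part of any standard API:

```java
import java.util.Map;

// Minimal sketch: replace client-specific details with neutral placeholders
// before sending a prompt to a Generative AI tool. The placeholder names
// used here (e.g. "[CLIENT]") are illustrative choices.
public class PromptRedactor {
    // Replaces each sensitive term in the prompt with its placeholder.
    public static String redact(String prompt, Map<String, String> replacements) {
        String result = prompt;
        for (Map.Entry<String, String> e : replacements.entrySet()) {
            result = result.replace(e.getKey(), e.getValue());
        }
        return result;
    }
}
```

For instance, `redact("Create a pitch deck for Acme Corp", Map.of("Acme Corp", "[CLIENT]"))` keeps the client name out of the prompt while preserving the request.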

Always query for the situation

Always describe the generic situation rather than providing exact data in the prompt. If you are facing a specific issue with some source code, don’t paste the code, file names, or locations directly into the prompt to find the solution. Instead, describe the problem to identify the probable cause. In this way, we avoid sharing source code details with Generative AI platforms. For example, the prompt below, meant to fix an exception, is very risky because it exposes code details:

>> Fix ClassCastException, the Java code
Object i = Integer.valueOf(42);
String s = (String) i;

Instead, try the prompt below; it will help you understand the problem and possible solutions.

>> What is a ClassCastException, and how to fix it?
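For context, the snippet above fails because an `Object` holding an `Integer` cannot be cast to the unrelated type `String`. One common resolution, sketched below (class and method names are illustrative), is to convert the value explicitly instead of casting:

```java
// Illustrative fix for the ClassCastException pattern above:
// convert the value to text explicitly rather than casting
// between unrelated types.
public class CastFix {
    public static String toText(Object value) {
        // String.valueOf produces "42" for Integer.valueOf(42)
        // and never throws ClassCastException.
        return String.valueOf(value);
    }
}
```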

Create smart prompts

Always be mindful of what you are sharing while creating prompts. Double-check the information and make sure you are not including anything sensitive.

Train your employees

Training is always needed, but with AI, it's mandatory.

  • Organizations should provide appropriate training about possible attacks.
  • Educate employees on the risks associated with adopting any new technologies.
  • Help them understand the risks of sharing sensitive IP or proprietary information.
  • Share a list of dos and don’ts while using Generative AI platforms.
  • Keep employees updated on security information as Generative AI evolves.

Set rules and regulations

As with training, in this new area of AI, organizations need rules to avoid more significant problems down the line.

  • Organizations should set appropriate rules and regulations for the usage of these platforms. Organizations can survey who is using Generative AI platforms and for what purposes.
  • Clearly communicate what employees can and cannot share with Generative AI apps.
  • Determine how much access employees should have to such platforms on organization-owned machines.

Beware of privacy and security policies

As noted above, AI tools may collect or mishandle your data, so precautions are needed.

  • Before using any Generative AI tools, read the privacy and security policies carefully.
  • Check whether opt-out options are available. If a platform collects your personal information, consider deferring its use to avoid loss of data confidentiality.
  • Never ignore the privacy and security policy updates of these platforms.

Generative AI tools to boost application security

Generative AI tools can play a strong assistive role in many industries. Can these tools help us improve software security? The answer is yes! Generative AI platforms can be used to improve application security while following the best practices mentioned above.

Suggesting and guiding on best security practices

As software developers, it is essential to consider various security concepts to enhance application security. Generative AI platforms can help us understand those concepts and suggest multiple ways to implement them. The instructions provided by such tools are typically self-explanatory and easy to follow.

For example, suppose we need to know the security best practices to follow while developing an Android application. ChatGPT can provide accurate points from the developer’s perspective. Below is a sample response on Android security best practices.

ChatGPT response on the best practices for Android application development
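As an illustration of one widely recommended practice that such a response typically includes (never store plaintext credentials), the sketch below derives a salted password hash with PBKDF2 from the standard `javax.crypto` API. The class name and parameter choices (iteration count, key length) are example values, not prescriptions:

```java
import java.security.GeneralSecurityException;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Sketch of a common mobile/backend security practice: store a salted
// PBKDF2 hash of a password rather than the password itself.
// Iteration count and key length below are illustrative values.
public class PasswordHasher {
    public static byte[] hash(char[] password, byte[] salt) {
        try {
            // 100,000 iterations, 256-bit derived key
            PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
            SecretKeyFactory factory =
                    SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            return factory.generateSecret(spec).getEncoded();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("PBKDF2 unavailable", e);
        }
    }
}
```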

Dealing with application vulnerabilities

Application vulnerabilities are the most common problem in software development. We recommend using security audit tools to scan applications and identify vulnerabilities. Some audit tools provide detailed reports; most are easy to understand and suggest solutions to apply. But in many scenarios, it is difficult to understand the problem and how to fix it. In such situations, Generative AI platforms are beneficial: we can ask about a given vulnerability, and they can suggest different solutions.

For example, in the image below, we asked ChatGPT how to fix Cross-Site Request Forgery (CSRF). It provided accurate and detailed information about the fix.

ChatGPT Response on Cross-Site Request Forgery fixes
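The classic CSRF mitigation that such guidance typically describes is the synchronizer-token pattern: issue an unpredictable per-session token, embed it in each form, and reject requests whose token does not match. A minimal Java sketch (class and method names are illustrative):

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Sketch of the synchronizer-token defense against CSRF:
// a random per-session token that each state-changing request
// must echo back, compared in constant time.
public class CsrfToken {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generates a fresh unpredictable token to store in the user's session.
    public static String issue() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Constant-time comparison of the submitted token with the session token.
    public static boolean isValid(String sessionToken, String submittedToken) {
        if (sessionToken == null || submittedToken == null) {
            return false;
        }
        return MessageDigest.isEqual(
                sessionToken.getBytes(), submittedToken.getBytes());
    }
}
```

In a real web framework, the token would be stored server-side in the session and rendered into a hidden form field; the constant-time comparison avoids leaking token contents through timing differences.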

Conclusion

As promised at the beginning of the article, we explained the common security problems with Generative AI and showed that, by following simple practices, we can mitigate those issues. We also learned how these platforms can guide us in improving application security.

As we know, the more popular a technology, the greater the risk of attack. Many of us are considering Generative AI tools to boost the effectiveness of our work; we must also consider the emerging security issues. By following simple practices, we can leverage AI correctly and safely.

Happy reading!
