Addressing the Privacy Implications of Generative AI: A Questionnaire for Businesses

Jonathan Bowker
Advanced Analytica
Jun 8, 2023

The world is abuzz with news stories discussing the implications of generative artificial intelligence (AI) and large language models (LLMs). In fact, nearly two thousand academics and technology experts have recently signed an open letter calling for a six-month pause on the development of the most powerful AI systems.

LLMs, such as ChatGPT and Bard, have captured our collective imagination with their ability to write essays, power chatbots, and even create websites without human coding. However, amidst this excitement, it is crucial to step back and reflect on how personal data is being used by a technology that even made its own CEO “a bit scared.”

In a recent conversation, ChatGPT itself admitted that “generative AI, like any other technology, has the potential to pose risks to data privacy if not used responsibly.” It doesn’t take much imagination to understand how a company could quickly damage a hard-earned relationship with customers through the poor use of generative AI. While the technology may be novel, the principles of data protection law remain the same. Fortunately, there is a clear roadmap for organisations to innovate while respecting individuals’ privacy.

If your organisation is developing or using generative AI, it is essential to consider your data protection obligations from the very beginning, adopting a data protection by design and by default approach. Remember, this is not optional — it is the law when you are processing personal data.

It’s important to note that data protection law still applies even when the personal information you are processing originates from publicly accessible sources. If your organisation is involved in the development or use of generative AI that handles personal data, ask yourself the following questions:

1. What is your lawful basis for processing personal data?
Identify an appropriate lawful basis, such as consent or legitimate interests, for processing personal data.

2. Are you a controller, joint controller, or a processor?
If you are developing generative AI using personal data, you are likely to be the controller. If you are using or adapting models developed by others, you may be a controller, joint controller, or a processor, depending on who determines the purposes and means of the processing.

3. Have you prepared a Data Protection Impact Assessment (DPIA)?
You must conduct a DPIA to assess and mitigate any data protection risks before processing personal data. The DPIA must be updated as the processing and its impacts evolve.

4. How will you ensure transparency?
Make information about the processing publicly accessible, unless an exemption applies. If feasible, communicate this information directly to the individuals the data relates to.

5. How will you mitigate security risks?
Consider and mitigate risks of personal data leakage, model inversion, membership inference, data poisoning, and other adversarial attacks.

6. How will you limit unnecessary processing?
Collect only the data that is adequate and necessary to fulfil your stated purpose.
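One practical way to apply data minimisation is to strip obvious identifiers from text before it ever reaches an external generative AI service. The sketch below is illustrative only and not taken from any specific product; the regex patterns and the `minimise` function are assumptions, and a production system would more likely rely on a dedicated PII-detection tool than on hand-rolled patterns.

```python
import re

# Illustrative patterns for two common identifier types (assumed for this
# sketch); real deployments typically use a dedicated PII-detection library.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def minimise(text: str) -> str:
    """Replace detected identifiers with placeholder tokens before the
    text is sent to an external generative-AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise the complaint from jane.doe@example.com, tel +44 20 7946 0958."
print(minimise(prompt))
# → Summarise the complaint from [EMAIL], tel [PHONE].
```

Redacting before transmission keeps the personal data inside your own boundary, which also simplifies the transparency and security questions above.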

7. How will you comply with individual rights requests?
Establish procedures to respond to people’s requests for access, rectification, erasure, or other information rights.

8. Will you use generative AI to make solely automated decisions?
If you make solely automated decisions that have legal or similarly significant effects, individuals have additional rights under Article 22 of the UK GDPR.

Data protection regulators will be scrutinising organisations developing or using generative AI to ensure they comply with the law and consider the impact on individuals.

At Advanced Analytica, we are here to support organisations, helping them scale while maintaining public trust. Our Pre-diligence solutions provide a robust roadmap to data protection compliance for developers and users of generative AI. Our accompanying risk toolkit assists organisations in identifying and mitigating data protection risks.

Start your journey to compliance-enabled AI by taking a quick readiness assessment.


Jonathan Bowker is the CEO at Advanced Analytica and founder of Dataperations.cloud