Understanding Microsoft Copilot Licensing and AI Privacy

Karl O’ Doherty
Version 1
Jul 11, 2024

With the rise of Artificial Intelligence (AI), Microsoft has continually updated its Product Terms to address the processing and usage of customer information. This blog will delve into how privacy is managed at an individual service level concerning AI, particularly focusing on Microsoft Copilot licensing.

Data Processing — Microsoft’s Azure OpenAI Service processes user-provided data to deliver the service and to monitor for violations of the applicable product terms; the sketch after this list shows where these items appear in a typical call. The data processed includes:

  • Prompts and generated content
  • Augmented data included with prompts
  • Training and validation data
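
To make the list above concrete, here is a minimal sketch (in Python, using the openai package’s AzureOpenAI client) of a typical Azure OpenAI chat call, showing where the prompt, any augmented data supplied with it, and the generated content appear. The endpoint, API key, and deployment name are placeholders, not real values.

```python
# Hypothetical sketch of an Azure OpenAI chat call; endpoint, key and
# deployment names below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # your Azure OpenAI resource
    api_key="<your-api-key>",                                    # credential for that resource
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment you created, not the base model name
    messages=[
        # The prompt (input) and any augmented data included with it are the
        # user-provided data the service processes.
        {"role": "system", "content": "Answer using only the attached licensing notes."},
        {"role": "user", "content": "Summarise our Microsoft Copilot licensing position."},
    ],
)

# The completion (output) is the generated content referred to above.
print(response.choices[0].message.content)
```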

Data Security — Microsoft ensures data security through robust technical and organisational measures. These include:

  • Multiple layers of protective measures like encryption
  • Comprehensive security protocols to safeguard data in AI systems

Privacy and Confidentiality — Your data, including prompts (inputs), completions (outputs), embeddings, and training data, remains confidential: it is not accessible to other customers or to OpenAI, and it is not used to improve OpenAI models, Microsoft products, or third-party services. Despite broader concerns about how AI systems collect, process, store, and access data, Microsoft has implemented stringent measures to protect user privacy.

Transparency — Microsoft commits to transparency by providing clear information about the capabilities and limitations of its AI systems. This enables users and organisations to make informed decisions.

User Control — Microsoft offers tools and options to control personal data. Users can access, manage, and delete their data, including stored conversation history, ensuring control over their information.

Compliance — Microsoft adheres to all applicable laws, including privacy and data protection regulations, ensuring compliance across all services.

AI Security Posture Management — Microsoft provides end-to-end AI security posture management, threat protection, data security, and governance, reinforcing the security of AI implementations.

Data Governance — Microsoft Purview offers an integrated solution to help users understand, secure, and manage their data effectively.

Implications for Customers — For businesses utilising AI, Microsoft’s measures mitigate several risks, including:

  • Over-reliance on AI
  • Biases in AI algorithms
  • Security risks
  • Errors in AI querying

Microsoft’s commitment to privacy and security provides reassurance to customers about how their data is used and stored. While this blog does not take a stance on the use of AI in business, it highlights Microsoft’s efforts to safeguard customer data.

For additional information on the Product Terms and Microsoft’s publications on AI privacy, see the links below.

Enhancing trust and protecting privacy in the AI era — Microsoft On the Issues

Data, privacy, and security for Azure OpenAI Service — Azure AI services | Microsoft Learn

Code of Conduct for the Azure OpenAI Service | Microsoft Learn

By addressing these key areas, Microsoft ensures that its customers can use AI confidently, knowing that their data is secure and their privacy is protected.

About the Author:

Karl is a Principal Licensing Consultant at Version 1, providing global Microsoft license expertise to organisations and ensuring customers get the best value from their Microsoft assets.
