Creating a corporate AI policy using generative AI

Vladimir Collak
3 min read · Apr 29


Co-authored with ChatGPT. Image generated by Midjourney

In today’s rapidly evolving technological landscape, organizations must embrace artificial intelligence (AI) to remain innovative and competitive. However, as with any technology, it’s essential to put guardrails in place to ensure the responsible and ethical use of AI within your organization. This task often falls to chief legal officers (CLOs) and chief technology officers (CTOs), who must work together to strike a balance between the benefits of AI and the risks it may pose.

The benefits of AI are numerous, from streamlining operations and boosting efficiency to enhancing customer experiences, creating better products and services, and fostering innovation. However, there are also risks associated with AI usage, such as inadvertently leaking proprietary information or infringing on copyrights. In many ways, using AI responsibly is not much different from using any other online service: it requires a thoughtful approach and robust policies to guide its use.

Organizations should recognize the need to protect proprietary data and intellectual property while encouraging innovation. To achieve this goal, they should set out to craft their own policies governing the use of AI, particularly generative AI technologies like ChatGPT.

I set out to craft such a policy with a simple idea: why not leverage AI itself to draft it? I tested this approach by prompting ChatGPT to take on the role of a CLO, then supplying bullet points listing the key topics I wanted the policy to cover and asking the AI to generate a policy based on those guidelines. To make the result more comprehensive and concrete, I also requested that the AI provide examples of appropriate and inappropriate AI usage.
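The prompting approach above can be sketched in code. This is a minimal illustration, not the exact prompt I used: the topic list and helper name are hypothetical, and in practice the assembled prompt would be sent to ChatGPT (here it is simply printed).

```python
# Sketch of the prompt structure: assign the model a CLO role, then list
# the policy topics as bullet points and ask for usage examples.
# The topics and helper name below are illustrative assumptions.

def build_policy_prompt(topics: list[str]) -> str:
    """Assemble a prompt asking for a draft corporate AI usage policy."""
    role = (
        "You are the Chief Legal Officer of a technology company. "
        "Draft a corporate policy governing employee use of generative AI."
    )
    request = (
        "Cover each of the following topics, and include examples of "
        "appropriate and inappropriate AI usage:"
    )
    bullets = "\n".join(f"- {topic}" for topic in topics)
    return f"{role}\n\n{request}\n{bullets}"

prompt = build_policy_prompt([
    "Protection of proprietary data and intellectual property",
    "Copyright considerations for AI-generated content",
    "Human review of AI output before publication or release",
])
print(prompt)
```

The AI's response to a prompt like this becomes the first draft, which a human then refines, as described below.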

Within minutes, I received a detailed response from ChatGPT that addressed all the points I had outlined. This initial draft was an excellent starting point, but it still required some human intervention to tailor the policy to our organization’s specific requirements.

The entire process took less than 30 minutes, a fraction of the time it would have taken to draft the policy from scratch without AI assistance. Moreover, the AI-generated draft helped identify important points that I might have overlooked, ensuring a more robust and comprehensive policy.

This experience reinforced the potential of AI as a valuable tool for organizations when used responsibly. By leveraging AI-generated content as a starting point and then applying human expertise to refine and customize the output, we can strike a balance between innovation and risk.

As AI continues to advance, organizations must remain vigilant in ensuring the responsible use of this powerful technology. By working closely with CLOs, CTOs, and other key stakeholders, companies can create policies that foster innovation, protect valuable assets, and minimize potential risks.

In conclusion, the successful integration of AI into an organization’s operations hinges on a delicate balance between embracing its potential and mitigating its risks. With a thoughtful approach and well-crafted policies, organizations can harness the power of AI to drive innovation and maintain a competitive edge while safeguarding their proprietary information and intellectual property.



Vladimir Collak

Technology entrepreneur who loves both technology and startups. You can find me at