Building An AI Compliance Strategy

Adrian Faulkner
7 min read · May 24, 2023

I’m old enough to remember when the web first saw public adoption. I bought my first 28k modem out of university, despite having told my university friends a year earlier that I couldn’t see the World Wide Web ever catching on.

I soon realized I was wrong and that the internet would change everything. It was a crazy time, full of infinite possibilities and danger. Many thought the technology was a threat to society, and others raised legitimate concerns, but one thing was sure… the internet was here to stay.

We like to think that web adoption was rapid, but for many companies, concerns kept them from forming an e-commerce strategy for years. Many even held off creating a website until some of the open questions had been resolved. If you bought something in the UK from a retailer in the US, would US or UK tax apply? Was the internet just an expensive fad? And where did you find the expertise to build websites, which at the time were complex and expensive to produce?

By the time the ’90s ended, everyone and their mother was on the web. The question wasn’t whether your business should be on it; it was, why weren’t you already? Those early pioneers of e-commerce went on to become the giants of today.

The web went through its own booms and busts, but even during the dotcom bubble burst, there was always a sense that the technology was here to stay. Other technologies, such as VR or 3D, seemed to pop up with new technological advancements every decade or so, only to die back down as they failed to gain mass adoption.

But whilst blockchain has yet to become the tool for digital assurance that it has the potential to be, AI is already here. AI provides new functionality to your existing apps and opens up possibilities for services previously thought impossible.

And unlike the lifecycles of Web 1.0, Web 2.0, and 3.0, which lasted years, AI’s lifecycles are measured in months and even days.

AI is here to stay

Old compliance thinking would be to wait six months, let the risks of this new technology become known, understand the legislation, and devise a risk-averse plan to capitalize on the advance. It’s the approach many companies took in the latter half of the 1990s, and they were still considered at the forefront of e-commerce.

But that’s no longer possible when the rate of progress is measured in days rather than years. Companies that wait six months to devise an AI strategy will be left behind, even as some tech leaders have urged a months-long pause in development to assess AI’s impact.

But how do you even begin to address compliance when there aren’t laws to comply with? That’s the challenge we faced at Techspert. How do we, as an AI company, ensure that we keep up with developments while keeping our Expert Network customers safe? The dichotomy of being risk-tolerant as a company in the space while simultaneously being risk-averse for our customers proved challenging, especially when countries such as Italy introduced a temporary ban on ChatGPT.

Legal experts are still debating the concept of AI learning. Is it just pattern matching, or is it more akin to copying? I won’t pretend to know the answer, as it will get legally defined and refined in the months and years ahead. As much as we want our AI services to learn and process data in new and exciting ways, we must prioritize the privacy protections we are obligated to provide our customers.

But in that dichotomy, the embryo of our compliance strategy was born. We needed to give our tech teams the ability to innovate while protecting our customers’ data.

Innovating whilst protecting customer data

We looked at our risks, and for us at Techspert, the big one was the potential for customer data to be learned by an AI model. If we started putting our customers’ data into ChatGPT, there was an enormous risk that the model could learn confidential information and later disclose it.

As a result, we issued a blanket requirement not to use tools that would train public models on our data. I have no doubt that tools like ChatGPT will introduce better privacy features in the future — and given the speed of the industry right now, that could be before this article is published — but we can update and amend our policy accordingly, depending on the level of risk mitigation.

The trouble then becomes that we, as a compliance department, become reactionary. Suppose the tech teams come to us asking about some new feature in a future version of ChatGPT. In that case, we’ll have to go through the same process of looking at the legislative landscape and assessing new risks, or reassessing old ones, all while having very little guidance on what we should do or on the direction of the industry.

For that reason, our AI Policy quickly grew from a single rule (use the OpenAI API rather than ChatGPT) into a set of ethical principles. At Techspert, our values are to be disruptive, compassionate, collaborative, and transparent. Those values capture exactly how we want to operate: risk-tolerant in developing our disruptive technology, while empathetic to our customers and risk-averse with their data.

We ended up with five key ethical principles that form the foundation of our AI Compliance Strategy. These will be our North Star as we navigate these uncharted waters.

Privacy: However much we want to disrupt, that cannot come at the expense of our customers. Aside from the obvious regulatory and contractual needs for confidentiality, we must ensure that our AI solutions do not put that confidentiality at risk.

Fairness: We must understand that our systems may be biased when we train them on historical data. We have to look for that bias and actively correct it where possible. We also need a way for our experts and customers to raise issues with us, so we can take reasonable measures to ensure that our systems are as fair as possible.

Transparency: People’s opinions on AI are very binary at the moment. Some see it as the most significant technology disruption since the widespread use of the internet. Some see it as a danger that will threaten jobs and spread misinformation. We cannot force people to change their opinions, nor would we want to. Instead, we need to be transparent about what we are doing, how we are doing it, and any associated risks. There are both positives and negatives when it comes to AI solutions, and we need to be transparent about how we are embracing the positives whilst simultaneously guarding against the negatives.

Accountability: We need to have clear policies in place. We need to follow them. We can’t (for example) just abandon our ethics on customer privacy to be disruptive. It means that difficult conversations need to be had and innovative solutions sought. By having a set of ethical principles, which may one day develop into a set of legal obligations, we can ensure that we have something against which we can hold ourselves accountable.

Human Oversight: Our project teams have always added the human touch to our interactions with our customers, and AI will not replace that. Where there is a risk of inaccuracy, we need to ensure our customers understand that risk and do further human due diligence before trusting any results. We must remember that AI is a tool rather than a solution and not rely upon it over human expertise.

AI Compliance Principles allow us to navigate an uncertain future

These principles give us the foundations of a framework for discussions about AI: are we meeting these principles in everything we do? That question allows the compliance team to make quicker, more effective judgments and to identify potential issues early. It also gives Techspert a foundation on which future versions of our AI Policy can grow as the industry matures and regulation arrives.

We’re excited about the future at Techspert. AI certainly presents more than its fair share of challenges. Still, we’re already facing them to ensure that we can continue to be disruptive in a lightning-fast marketplace while remaining risk-averse for our customers.

Like the internet, AI is here to stay, and much like the early days of the web, there’s a lot of innovation and uncertainty. No doubt, there will be new innovations that improve the lives of everyday people, as well as new dangers and threats that the technology presents. It’s both an exciting and scary time when compliance will perform an ever-greater role. Building the foundations now will give companies the framework they need to deal with the risks of the future.
