Should you need a licence to use AI?

Daniel Ward
Mesh-AI Technology & Engineering
8 min read · Jun 21, 2024

As AI’s accessibility increases, both in terms of the range and variety of services on offer and their ease of use, so does the number and severity of potential risks. Increased regulation is a certainty, as the concerns of companies and of society around AI are not in alignment.

‘Deep Fake’, or ‘Deep Misunderstanding’?

Let’s take the use of deep fakes as an example.

I recently read this BBC article about the use of deep fakes and AI in TikTok videos to promote or critique politics. Whilst interesting in its own right (particularly that much of the created content was not intended to mislead but rather to be satirical), one of the questions that crossed my mind was: how can you stop this from being used for bad ends?

Here we are in 2024, where you can watch a video and, in the process of absorbing its content, have to stop and ask "Is this real?". If I see the Prime Minister of the United Kingdom say the phrase "We would be proper gutted", I can take a stab that this is likely a deep fake. However, if it wasn't a phrase so out of character, would I notice?

Image from the BBC article linked above.

I’ve worked on how organisations can manage their own use of AI, and it is a challenge in itself. Common rules like "Tell people you're using AI" and "Ensure you try to account for biases in the training data" are all well and good when you have control over the application. In many ways, if an organisation is taking the time to consider "How do we use AI appropriately?", it is unlikely to be problematic (though it should still consider how to use AI in ways that avoid unintended consequences).

The bigger concern is how we remedy the bigger risks AI poses to society. It can make challenging decisions quickly and en masse, with reasoning that may be difficult to understand. It can create visually and audibly false media of key influential figures. It can create new works of art with questionable ownership. These are all problems with a raft of considerations and challenges below the surface, highlighting the real problem with AI: it's just a tool.

It’s largely not even a hard tool to use. Virtually everyone with a computer and the internet has access to services like ChatGPT and Midjourney. The more technically savvy may have access to the raw models that run behind these, or to other machine-learning libraries. Using them well may be difficult, but actually picking up and using the tool is simple. And that makes the user of the tool the risk, not the tool.

AI doesn’t kill people, people do

I’m going to make a small leap here to what sticks out in my mind as an apt comparison: guns. "Guns don't kill people, people do" is a phrase many people will be familiar with. A gun by itself is (largely) harmless. It will sit there and, outside of a severe malfunction, it will never take any action. But as soon as it has outside input (e.g. a person), it becomes part of a problem, because the motives and skills of that person are what drive the application of the tool.

An Olympic rifle shooter can pick up a gun, place 10 shots safely down towards a target, and reliably operate and clear the weapon, and nobody would have reason to be concerned: the operator is well trained and knows the risks involved. Meanwhile, somebody who has never used a gun may immediately be seen as a risk to others (and themselves) the instant they pick up a weapon. Even further, what if a toddler was to pick up the gun?

The gun is not what causes the risk; the operator is. Nonetheless, we can't assume that skilled individuals will always be the ones to pick up the gun. If an operator (skilled or otherwise) shoots someone, that person is harmed regardless. Intentionally shooting another person is certainly more dastardly than harming them through careless use, but regardless of the motive, the impact is the same.

Which brings us back to AI. I would assume most people do intend to use AI for positive reasons, but the breadth of applications is huge. Teenagers are using deepfakes to make satirical political content, but does everyone realise when the content is satire? Meanwhile, nefarious actors are using deepfakes to intentionally influence election outcomes. On the other hand, artists and media editors are using deepfakes to enhance their productivity. Some companies are using the technology to put on entire theatre productions using holographic deepfakes, with appropriate licensing and approvals.

ABBA Voyage is an entirely virtual stage show using the likeness of the band ABBA, projected onstage as AI holograms, charmingly called ABBA-tars.

All ABBAout The People

So if the problem isn’t the tool, but the user, how can we minimise possible harm done?

Let’s consider that there are three different groups, each with factors for consideration:

1 — The Impacted

That is to say, people consuming AI-generated content or outputs (or those around somebody holding a gun). Being aware of what AI is, how it works, and what to look for helps them understand the situation and take the best action. Awareness is the main factor here: identifying where and how AI is being used. There may be recommended best actions to take (like caution around the motives behind an AI output), but ultimately this comes down to a person's experience and common sense to make a judgement once they are aware of the AI, and therefore the processes, involved.

2 — The Tool (and its Makers)

The tool itself has a role to play, but the makers (or developers) of the tool are more important. For example, they can ensure the tool has the right controls baked in so that it can be used safely and appropriately. For a gun, this means building in a safety mechanism, the ability to remove bullets without firing them, or even disassembly and the ability to replace parts. For AI, this means the ability to control how the AI operates, being open about the processes, training, and biases inherent in the model, or stating the intended applications of the model. The main factors here are Transparency and Controllability. You should know what the model does, what it's for, and have the ability to control and use it appropriately and safely.

3 — The User

The greatest overall influence on an outcome, the user ultimately decides how a tool is used. A bank robber with a gun will rarely have a societal-positive outcome, just as a hostile state actor with a deepfake and the likeness of a country's leader will not necessarily have a societal-positive outcome*. The key factors at play here are Intention, Awareness, and Training. Ultimately, if somebody has a negative intention, the outcome will be negative. However, by ensuring people have the right awareness of what AI can do and the training to use it appropriately (that is to say, to use it how they intend to, and not towards unintended outcomes), we can help move towards the right outcomes.

* I feel it necessary to state that if you are the actor and your motive is met through the use of the tool, you will probably feel happy with the outcome. I intentionally use "societal-positive" here to suggest that the Impacted people around you may disagree with your motive and outcomes.

We can see there is little that the Impacted can do, whilst Tool providers have a fair degree of influence. The Users have the most ability to impact possible outcomes.

As mentioned earlier in this article, many organisations are trying to get to grips with how best to use AI appropriately. This covers the User group, with awareness and training being developed to support their overall intention. Likewise, those building AI Tools are becoming more aware of the importance of being open about how their tool operates and is built, and of adding ways to control it (although the pace of improvement here is slow, and the fact that most tools are proprietary makes transparency challenging). Finally, overall awareness of AI is growing, improving the situation for those who are Impacted.

Legislation and Regulation

So those using tools are self-governing, those making them are slowly becoming more open, and those impacted are becoming more aware. Does more need to be done?

There are lots of options here, but ultimately this is now for governments and regulators to decide. AI can be used for good outcomes, it can equally be used for bad outcomes, and the direction it takes largely comes down to the 5 factors discussed above. Currently, well-intentioned Users and Tool developers are trying to self-govern to avoid regulation, though this cannot stop those with bad intentions.

To draw it back to the gun analogy a final time, gun controls vary in scale and scope across the world. For example, in the UK, guns are tightly controlled, requiring a licence to own and operate, plus additional controls, to ensure that only people with good cause and training own and operate firearms in an appropriate manner. This minimises the opportunity for guns to end up in the hands of those with poor intentions and training, improving the situation for society overall.

Some regulatory bodies are already moving to implement controls, such as the EU AI Act or the US Blueprint for an AI Bill of Rights. These have focused on the way AI is used, rather than who should be using AI (influencing the 5 factors discussed, rather than directly controlling any of the 3 groups). This is only the start, however, and as the negative effects of widespread AI adoption are observed and the need to control the 5 factors is realised, we will see the amount of legislation around AI increase.

Closing Thoughts

Increased AI regulation is virtually guaranteed. The potential harms to society are huge, even with companies self-governing somewhat effectively. The concerns of companies do not reflect the full reality of AI's risks to society, such as the steering of public discourse and widespread misinformation, not to mention redundancies through the replacement of people. Inevitably, this will mean regulations are introduced to ensure AI is used appropriately, potentially limiting who is allowed to use AI at all.

However, I think the biggest concern that follows from this discussion (and it is entirely deserving of its own article) is a review of the term “Artificial Intelligence” and what it means. We’ve spoken about deep fakes in this article, one component of Generative AI, but not touched on other parts like Machine Learning Models, Deep Learning, Recommender Systems, Natural Language Processing, Computer Vision, or so on. Many of these terms may be compounds of other terms (for example, Language Models make strong use of Natural Language Processing), so is it really effective to legislate against “AI” as a whole? What do we even mean when we say AI?

That topic will need its own discussion. Watch this space.
