No To Backdoors In Crypto, But Yes To Backdoors In AI?

The Rise of the AI Regulator and The AI Pen Tester

--

AI has largely spent the last few decades in the lab, mainly as a pursuit of academics, who have fallen in and out of love with neural networks, and who have optimised and tinkered. But now it is out of the lab and growing fast in its impact. There's money to be made, and made quickly!

While still focused mainly on the deep learning of specific tasks, AI will, at some time in the future, move towards AGI (Artificial General Intelligence), where machines will transfer their intelligence from specific tasks, such as beating us at Go, to other areas, such as washing the dishes or feeding the cat. This could lead to the singularity, where machines become more intelligent than the combined intelligence of every human brain.

Many call for no backdoors in cryptography, and the main argument is that it is difficult to suppress knowledge of the backdoor, and that bad people are just as likely to use it as good people. But what about AI? Should we have a cryptographic backdoor in our AI engines? The mathematical principles of cryptography are widely discussed and reviewed. In crypto, we often use the term "nothing-up-my-sleeve", which shows that we have created something in a way that everyone can see how we created it. But the learning methods used by AI are generally not open for all to review, and the model can keep learning as it goes. Basically, we are its teachers, and it learns from us, for all the good and the bad. And so, when the CTO of OpenAI doesn't know the data that their AI engine is trained on, we should all be worried.
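To make the "nothing-up-my-sleeve" idea concrete: the designers of SHA-256 took its initial hash values from the fractional parts of the square roots of the first eight primes, so anyone can recompute the constants and check that nothing is hidden in them. A minimal Python sketch:

```python
# Reconstruct the SHA-256 initial hash values from first principles:
# each is the first 32 bits of the fractional part of the square root
# of one of the first eight primes. Anyone can verify the constants
# hide nothing.
from decimal import Decimal, getcontext

getcontext().prec = 50  # enough precision for the fractional bits

primes = [2, 3, 5, 7, 11, 13, 17, 19]

for p in primes:
    frac = Decimal(p).sqrt() % 1   # fractional part of sqrt(p)
    word = int(frac * (1 << 32))   # first 32 bits of that fraction
    print(f"sqrt({p:2}) -> 0x{word:08x}")
```

Running this prints 0x6a09e667, 0xbb67ae85, and so on: exactly the constants listed in the SHA-256 specification. An AI model offers nothing comparable to check.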

One way to open AI up for review is to have a cryptographic backdoor that could only be used by AI regulators (yes, these roles will become important in the next few years), and which would allow them to examine the learning methods and the weightings used. All of this would be cryptographically secured, and a ledger could keep track of any changes to the model. Along with the rise of AI regulators, there needs to be a rise in AI pen testers: the same type of tester we have seen in cybersecurity, and who will probe the AI engine for weaknesses in privacy, ethics and general security.
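As a rough sketch of what that ledger might look like, the Python below hash-chains each model update so that any later tampering is detectable. The ModelLedger class and its fields are hypothetical; a real scheme would add digital signatures and regulator-only access control.

```python
# A minimal, tamper-evident ledger for model updates (illustrative only).
import hashlib
import json
import time

class ModelLedger:
    def __init__(self):
        self.entries = []

    def record(self, weights: bytes, note: str) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": time.time(),
            "weights_hash": hashlib.sha256(weights).hexdigest(),
            "note": note,
            "prev_hash": prev,
        }
        # Chain each entry to the last, so any later edit breaks the chain
        entry["entry_hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev_hash"] != prev or digest != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

ledger = ModelLedger()
ledger.record(b"fake-model-weights-v1", "initial training run")
ledger.record(b"fake-model-weights-v2", "fine-tune on new data")
print(ledger.verify())  # True; altering any recorded field makes this False
```

A regulator holding such a ledger could at least answer the basic question of when a model changed, and whether the weights in production match the weights that were audited.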

While the backdoor in AI may be up for debate, the addition of a kill switch in AI should not be. We thus need methods that would allow us to "kill" our AI if it starts to threaten us. ChatGPT might seem quite passive just now, and could not fire a gun if we were aggressive towards it, but there is a rise in other AI agents which could do us harm, whether physical or verbal. What's to stop an AI from kicking off if we ask too many probing questions about it?
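As a minimal sketch of the kill-switch idea (all names here are hypothetical), the Python below keeps the switch outside the agent's own code path, so the agent cannot ignore it. In practice, the switch would have to live in infrastructure the model cannot reach, such as the serving layer.

```python
# A toy kill switch wrapped around an agent loop. The "agent" is a
# stand-in; the point is that the switch is checked before every
# action, and is controlled from outside the agent.
import threading

class KillSwitch:
    def __init__(self):
        self._tripped = threading.Event()

    def trip(self):
        self._tripped.set()

    @property
    def tripped(self) -> bool:
        return self._tripped.is_set()

def run_agent(kill: KillSwitch, tasks):
    for task in tasks:
        if kill.tripped:
            print("Kill switch tripped: halting before", task)
            return
        print("Agent handling:", task)

kill = KillSwitch()
kill.trip()  # an operator (or regulator) flips the switch
run_agent(kill, ["answer a question", "control a robot arm"])
```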

If we do not invest in AI regulators and testers, we will hand a blank cheque to some large and faceless companies, who might see the dollar value as more important than our privacy, our right to be protected against bad people, and our control over the risks for future generations.

--


Prof Bill Buchanan OBE FRSE
ASecuritySite: When Bob Met Alice

Professor of Cryptography. Serial innovator. Believer in fairness, justice & freedom. Based in Edinburgh. Old World Breaker. New World Creator. Building trust.