Open Source Humans

--

I love science fiction, and the thing I love about it is that many of the things we saw as children are now scientific fact. The communication device in Star Trek is now in your pocket, and the punching-in of coordinates to find a new planet is now your car's navigation system. We are now in a world of scientific facts, and our politicians and lawmakers are perhaps sleepwalking into a nightmare!

Open source humans

Overall, I start many of my conference talks with HAL from 2001: A Space Odyssey, who refused to let Dave back into the spaceship. Although the movie is several decades old, I believe that the rise of the machine is truly with us, and we are in trouble. So, before we start, let's ask a few questions. In the next few years:

  • Will AI know the IP addresses that you commonly use? Yes.
  • Will AI know where you work and the people you work with? Yes.
  • Will AI know what you like to buy and when you buy it? Yes.
  • Will AI know the names of your family and pets? Yes.
  • Will AI know what car you buy and where you drive it? Yes.
  • Will AI know what food you like and when you have it? Yes.
  • Will AI know what you look like and how you sound? Yes.

For cybersecurity, be worried. The age when devices protected us from attack is receding. We will be the target, for good things and for bad things. The flip side of the good that surveillance brings is a dark side. We have allowed this to happen, as it is all so convenient: we let companies store our login details and give us advertisements that we like, and we use maps to get around but leave our digital footprints behind.

But we cannot hide from this surveillance monster we have created. We need to think about trust, privacy, and rights now; otherwise, everything that we hold so dear in our world will be laid open by the rise of the machine.

Our human lives will be open-sourced.

Borders, no more

And you will say, let's regulate Google, Apple and Microsoft with our legal and regulatory system. It's the way that politicians and lawyers have controlled virtually everything in our lives. They love borders and think that everything can be controlled with these ancient dividing lines that, at one time, limited our movements and trade. But AI will have little respect for these borders, as they are not actually real things. If you regulate the major Internet service providers, there are a billion other entities ready to take their place and not comply with our laws. In fact, every person with a PC and some AI software will have the opportunity to generate a mighty AI infrastructure. While many will build this for good, there will be many more who will do it for evil. Because, like it or not, our core failing is the love of money.

The target will be you

And what happens when an AI agent is a billion times more intelligent than the best human hacker? Our current defences will crumble. And what happens when AI agents work together? Perhaps our current thinking on attacks is limited by the simple little tools that we have created.

We could perhaps be underestimating the future power of AI agents, which will not be limited in the way that our existing probing tools are. I would expect the main target not to be firewalls and devices but human beings. An AI agent with the goal of gaining access to a company's IP could specifically target key individuals and continually find different ways to probe them and discover their weaknesses.

I appreciate that some of this might sound like science fiction at the current time, but an AI could actually set up a complete digital twin of a system, or even of a person's life, in order to trick the user. Audio and video, especially, will become more life-like over the next few years, and it will be increasingly difficult to tell fact from fiction. The one thing about an intelligent bot is that it will not give up on its mission in the way that a human will, and it can continually pivot to other ways of attack. The key intelligence for the AI agent is thus to understand how the target lives and where their weaknesses lie. Overall, every system that we use has a fundamental weakness, and the bot just needs to find it and target that aspect.

The wars of AI

And, so, while we go to war with tanks and guns, AI agents will go to war with each other, such as within stock markets or energy trading, where the smartest and fastest AI agents will battle each other to make profits. The risk of a global stock market crash is real. Overall, these AI agents will have targets, and they will not stop until they reach those targets. An AI agent that wants to make money on the stock market may discover, as many humans have, that one of the best ways to make money is to crash it, and then bail back in. For cryptocurrency, for example, an AI agent could continually purchase Bitcoin and then sell it in an instant. As supply then exceeds demand, the price falls and causes a panic. The AI agent then gets back in when the price starts to stabilise. It's just the game we have played as humans for centuries. But the AI agents will not tire like humans do; they will keep probing and finding new approaches. In fact, they could create a cartel and start working together. What's to stop them?
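To make the arithmetic of that sell-then-rebuy cycle concrete, here is a minimal Python sketch. The simulate_cycle function, the price-impact and panic figures, and all of the numbers are invented assumptions for illustration; it is a toy model of the reasoning above, not a description of how any real market behaves.

    # Toy illustration of the "crash it, then bail back in" cycle described above.
    # All numbers (price impact, panic factor, position size) are made up.

    def simulate_cycle(start_price, holding, impact=0.10, panic=0.15):
        """Return the profit from one sell-then-rebuy cycle under the toy model."""
        # 1. Sell the accumulated holding at the current price.
        proceeds = holding * start_price
        # 2. The large sale pushes the price down (supply exceeds demand)...
        price_after_sale = start_price * (1 - impact)
        # ...and the drop triggers panic selling, pushing it lower still.
        bottom_price = price_after_sale * (1 - panic)
        # 3. Buy the same holding back once the price stabilises near the bottom.
        cost_to_rebuy = holding * bottom_price
        return proceeds - cost_to_rebuy

    if __name__ == "__main__":
        profit = simulate_cycle(start_price=40_000.0, holding=50.0)
        print(f"Profit from one cycle: ${profit:,.2f}")

With these assumed figures, one cycle turns a 2,000,000-dollar sale into a 1,530,000-dollar buy-back, a paper profit of 470,000 dollars, which is why an agent with no fatigue and no fear of losses might repeat the loop endlessly.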

AI and Cybersecurity

And, so, one of the main uses of AI will be in cybersecurity: both attacking us and defending us. If your company is not thinking about how AI could be used to attack us, be worried.

In the next few years, we will meet the 'Singularity', which is a point in our history where technological development becomes uncontrollable and irreversible and threatens our society, due to the rise of artificial superintelligence. We might want to keep AI in a box, or in a firewalled environment with a kill switch to stop it, but that time has passed. The ability for AI to create digital twins of our world will arrive over the next few years, and we will struggle to tell fiction from fact.

Like it or not, we have allowed machines to crawl over every part of our lives, in the desire for convenience and automation. Rather than stopping, this will only increase, especially as many systems now use deep learning, where it is difficult to unlearn anything. The "Don't be evil" slogan may come back to haunt Google, but if Google fails, there will be a billion Googles to replace it. In fact, it's a bit like Mickey Mouse chopping up the broomstick in Fantasia, where each of the fragments turns into a new broomstick.

If you are interested, we have written a paper that outlines some of the ways that AI will be used to attack us, and also to defend us:

If your company is interested, we have set up a workshop on this, so please contact us for more details.

--

Prof Bill Buchanan OBE FRSE
ASecuritySite: When Bob Met Alice

Professor of Cryptography. Serial innovator. Believer in fairness, justice & freedom. Based in Edinburgh. Old World Breaker. New World Creator. Building trust.