How to prepare for the AI future
When I talk to people who don’t work in tech, I keep getting asked the same questions: “I’ve seen ChatGPT and it’s mind-boggling and clever, but what does it mean for me? What should I be doing now to prepare for this rather scary AI future?”
First caveat: I’m excited about AI — there’s a possible future of “AI abundance”, where we are all smarter, healthier, more creative, and live longer, happier, more fulfilling lives surrounded by more beauty and less drudgery. But even Sam Altman (OpenAI co-founder) admits there’s a reasonable probability of very bad AI outcomes (and has built a nuclear-proof bunker on his ranch in Big Sur with something like two years’ supply of food, ammunition and so on, just in case). So we can be AI optimists and still acknowledge that the world is about to go through a really extreme shift, and that we should prepare for bad people using AI to do bad things.
Second caveat: This is not a debate or prediction about AGI (Artificial General Intelligence), or about what AI can and can’t do. But given the speed at which AI is accelerating and self-improving right now, this moment already feels a lot like the “Singularity” described by Kurzweil and others. So this blog is about how we prepare and adjust for the AI that is already here.
OK. Here goes…
1. Prepare your family for a whole new level of Phishing/Fraud.
If there is any footage of you on the internet [3 seconds of you talking is enough] then criminals can use AI to create audio or video of you that even your own mother won’t know isn’t you.
So if your mum gets a call from “you” saying you’re in an emergency [and the AI might look up your latest location on Instagram, or knowledge of your favourite dangerous sport, to make this a really likely emergency; and criminals could operate millions of these calls at once using databases from the dark web], she needs to be very suspicious. However urgently “you” might need a money transfer or some details from your passport, your mum needs to fight every maternal instinct to help because she’s likely being duped by an AI version of you.
Suggested defence: tell her to ask you something an AI couldn’t know (NOT your pet-name or house name or her maiden name, which you’ve probably filled in before and which is now available on the dark web) — how about “remind me when you were last home, what did we eat?” or “who was my headmistress at junior school?”
Or perhaps even agree a two-way Passphrase with your immediate family that you never post electronically and that AI couldn’t guess. e.g.,
[Mum]“Remember that time we were climbing Mt Snowdon”
[You]“You mean with Gina and Jamie?”
[Mum]“Yes, and Adam fell in the stream”
[You] “And lost the watch grandpa gave him”.
You need to tell your mum that, however much “you” might argue, plead or make [AI-generated] excuses, if the caller can’t answer the passphrase or question, she should put the phone down and call the police. Yes, it might sound a bit John le Carré, but why not be prepared?
2. Assume anyone you haven’t yet met in “real life” may be fake. Dates…
Even colleagues…
There’s a new phenomenon — interview fraud — where people create fake social media profiles, with a fake profile picture, and do fake job interviews with a fake AI avatar. (Creating fake documents is also a piece of cake with Generative AI.) AI-people will be engaging, amusing, flattering: already Snapchat users are chatting with their (always available) AI chatbot more than with their friends.
And that “perfect date” on Hinge or Tinder: ask yourself whether they could be fake. The podcast Sweet Bobby tells the story of Kirat Assi, who wasted her best years in a virtual relationship with a man who turned out to be fake. AI could do that to millions of people at once.
Defence: Meet people in real life before you trust them with anything. If they can’t / won’t, delete them — don’t waste your life with fake people.
3. Prepare for AI-optimised elections and populist government.
Historically, at times of major societal change, populations turn to populists with simplistic answers to complex problems. We seek certainty, security, a common enemy, a “golden era” revival.
AI is going to make it easier than ever to rally vast numbers of people behind a new movement. Dominic Cummings (of Brexit fame) is probably already working on applying AI to hone the perfect message for each individual voter in the next election (he’s been doing vast amounts of polling in the US). Rhetoric can be very persuasive; Large Language Models have been trained on the most persuasive speeches and essays in history, and could use adversarial reinforcement learning to test and iterate towards the most persuasive messages. Combined with convincing deep-fakes of opponents, it’s going to be hard to know what we’re voting for. So we can assume that the winner of the next election will be the side with the best AI.
The trouble is, a populist government with AI at its disposal could easily become totalitarian and dangerous. We’ve already seen how China has used surveillance cameras, social media and the “social credit system” to assert unprecedented control. An AI-enhanced government could quash dissent and entrench single-party rule; and once there’s no check on power, there’s also no protection for unpopular minorities, academic freedom or freedom of thought.
Defence: Beware the untested politician who comes from nowhere and claims all existing politicians are “useless” or “corrupt”: chances are, they’ll be worse. Beware the “emergency” that requires an erosion of core liberties “for our protection”, particularly freedom of the press. Beware attempts to blame all our problems on one simple common enemy [immigrants / the EU / the blob / benefits-seekers]. Challenge excessive surveillance. Follow trustworthy sources, even if they say things you don’t like. Truth is always shades of grey.
4. Be more human / messy / contradictory / sexy.
What makes us human? We’re flawed, we do dumb things, we screw up. But we also have wild moments of discovery, inspiration, joy. We suffer, we mourn, we learn, we grow.
Totalitarian regimes try to make us all the same — because that allows total control. They try to fix our flaws, to remove dangers, to unite against a common enemy. The correct opinion, the correct behaviour, the appropriate dress, the right media and stories. Safety first. Social media, with AI, has the potential to reinforce homogeneity in the same way.
Defence: Reject it. Be naughty. Be dangerous. Challenge everything. Say the wrong thing. Spend time with people who allow you to be completely you. Love people’s flaws and failures. Forgive. Dance naked under the stars. Break the rules. Be present. Be human.
Embracing what makes us human inoculates us against becoming programmable and controllable by a future AI-assisted totalitarian regime.
5. Reject dehumanising language and behaviour.
We all have a primitive brain and a civilised brain. The primitive brain is easily triggered — and AI will find ever more effective ways to trigger it — in particular by tapping into latent fears and tribalism.
Hitler and Goebbels used language to dehumanise Jews and gay people — they were vermin, rats, parasites. Black victims of 19th-century American lynchings were referred to as “monsters”. Why? Because dehumanising people stops us feeling empathy towards them and justifies atrocities. We like feeling united as a tribe against an evil enemy — but that instinct can be used to divide us and control us.
Defence: when you see or hear language like “scum”, “trash”, “parasite”, “waste of space”, “animal” or “monster” used to describe a human being, that is the moment to stop trusting that source — even if you share their dislike of whoever is being described. If it’s on social media, just block people who use dehumanising language. Pause. Breathe. Take a walk. We all have some good and bad in us. And that’s OK.
The value of human life is a core tenet of our liberty and civilisation. And in a world where AI can do more and more, we need to guard ourselves against any temptation to de-value it.
6. Beware addictions.
The primitive parts of our brains can easily get hacked. We know how heroin or fentanyl “hack” people’s reward centres, making them lose interest in everything in life except the drug: addicts become entirely selfish, losing the capacity for empathy, love and curiosity, because all those human faculties are erased by the overwhelming need for the next high.
A world of addicts would be a grim and hateful world.
The trouble is, companies will use AI to get us more “hooked” on their products. Just as casinos have spent years perfecting slot machines with random rewards and “near-misses” to hook people until they “zero out” (spend all their savings), so AI will be used to find ever more ways to get people “hooked”, whether it’s social media, pornography, dating apps, games, VR, etc.
Defence: Read Nir Eyal’s Indistractable; set strict screen-time limits on addictive apps like TikTok, Instagram and games; never open an online gambling account; and if you start getting addicted, talk to someone. As animal studies show, isolation makes us more susceptible to addiction.
So…
7. Get to know your neighbours
There’s lots of evidence that neighbourly bonds make communities more resilient in a crisis and more resistant to extremism.
Places where neighbours gather — churches, village fetes, local markets, bonfire-nights, yoga-classes, street parties — give us a sense of belonging and make us better able to look out for each other in a disaster. And less susceptible to wacky ideas propagated by persuasive AI.
8. See your family and close friends more
Similarly, if there’s weird stuff happening in the world, you need to spend time with the people you really trust — the people who’ve known you through good times and bad. You can see if someone’s being sucked in by some QAnon silliness or a hyper-addictive game — and they can hopefully help you stay sane and humane.
9. Pandemic-readiness
We’ve probably already experienced the first global man-made pandemic, yet few of the lax lab controls that allowed scientists to experiment with “gain of function” in viruses have been addressed — and with AI it could become trivially easy to engineer new lethal viruses and pathogens. Yes, AI should help us combat them speedily too, but probably not before they’ve wreaked social, economic and life-threatening havoc.
So all those things you wished you’d been prepared for in the last pandemic — have them ready. And, assuming no more ill-thought-out lockdown rules, refer to (7) & (8) above.
10. Up your password game. Two-factor. Biometric.
Passwords should have gone extinct a decade ago. The CAPTCHA is dead: there’s no longer any way to prove you’re a human online.
With AI, criminals are going to find it much easier to test millions of passwords and, once they’re in, to run automated routines that lock you out.
Defence: Turn on two-factor or biometric (Face ID) authentication for everything that matters: particularly your email password, Apple ID, and anything that’s used to reset other passwords. Because if those get hacked you’re f**ked. Use Apple Passwords, 1Password or Dashlane to set your passwords to long, unguessable sequences. And don’t use any bank account that doesn’t have two-factor authentication (i.e. one that requires your phone or a text message to log in); it will not survive.
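If you want a feel for what “long, unguessable” means in practice, here is a minimal sketch using Python’s standard-library secrets module (the short word list is purely illustrative; a real passphrase generator would draw from a list of thousands of words, such as the EFF diceware list):

```python
# A rough sketch of generating strong credentials with Python's built-in
# "secrets" module (cryptographically secure randomness, unlike "random").
import secrets
import string

def random_password(length: int = 24) -> str:
    """A long random password of the kind a password manager remembers for you."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_passphrase(wordlist: list[str], n_words: int = 5) -> str:
    """A memorable passphrase built from randomly chosen words."""
    return "-".join(secrets.choice(wordlist) for _ in range(n_words))

# Illustrative only: a real word list should contain thousands of words.
demo_words = ["otter", "granite", "violin", "harbour", "meadow", "lantern",
              "paddle", "ember", "quarry", "tundra", "saffron", "kestrel"]

print(random_password())              # e.g. K#9vQ...
print(random_passphrase(demo_words))  # e.g. violin-meadow-otter-quarry-ember
```

Five words picked at random from the 7,776-word EFF list give roughly 2.8 × 10¹⁹ possible passphrases, which is why length beats cleverness.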
11. Learn manual skills
There’s a dystopian AI future where the AI does all the work, and we’re like residents in a care home, unable to do anything for ourselves — or like the space-travellers in WALL-E, sitting in armchairs, consuming ads and milkshakes and getting fat.
In a world of AI, we should double down on the things computers can’t do. Gardening. Carpentry. Dance. Fishing. Radio hacking. Surfing. Bee-keeping. Wine-making. Cooking. Sailing. Log-splitting. Bivouacking. Brick-laying. Guitar-playing. Pottery. Flying. Singing. Archery. Cheese-making. First Aid. Raft-building. Kung Fu. Car maintenance. Knot-tying. Map-reading without Sat-Nav.
What are the skills that have proven useful for many generations?
Feeling proficient at manual skills gives us a satisfaction that we don’t get from computers, and probably makes us more resilient in a crisis.
12. Assume major institutions will fail
Large Language Models are opening up huge new attack vectors against big corporations. Many aren’t ready. Huge databases will be compromised. Some banks will likely fail. Governments too perhaps.
Defence: Have a back-up plan. Spread your money between a few banks, probably a bit in Bitcoin (use cold storage, like a Trezor or Ledger hardware wallet), and perhaps some gold stashed for the worst case. Keep an offline list of your accounts in case a login stops working.
Keep some physical photo albums rather than relying on Apple or Google. Kindles can be wiped remotely — so keep some physical books. If GPS goes down some day, do you still have a printed roadmap? Does your car work without the internet, and does it have enough fuel to get you home? What’s your backup power source? What means of communication do you have if you lose mobile signal? Buy solar roof panels and backup batteries.
13. Loss of Privacy
Think about who has data on you that you wouldn’t want to be public. Ask them to destroy it (they have to under GDPR). Use Jumbo to help reduce your data exposure.
Everything you send over the internet could one day be decrypted, so just think about that when using any app. Keep your secrets offline. If you write a journal, do it longhand.
14. But… Seize this moment of opportunity
That’s a lot of precautions — sorry.
But this is still an AMAZING time to be learning and building: perhaps the best ever, funding climate aside. So be curious, learn to use it better, learn to build with LLMs, keep up, be creative…
— Educate yourself on, frankly, anything — or build a next-wave education app — there’s never been a better time to learn
— Build a solution for a problem you know best — it’s never been easier to code, build an app, or help others build theirs
— Revolutionise healthcare — with AI-doctors, personalised medicine, new cures, new cells, mental health companions, elderly care…
— Solve some of the dangers listed above — cyber-defences, fraud-prevention, content verification, biometric identity…
— Re-invent government and find ways to protect democracy
And above all, use AI to spend less time in front of screens and more time being human
To quote Bertrand Russell:
The good life is one inspired by love and guided by knowledge.
Although both love and knowledge are necessary, love is in a sense more fundamental, since it will lead intelligent people to seek knowledge, in order to find out how to benefit those whom they love. But if people are not intelligent, they will be content to believe what they have been told, and may do harm in spite of the most genuine benevolence.
Bertrand Russell, 1925
And thank you, Amanda, for your wonderful AI-assisted images.