AI: six serious unanswered challenges (from and for the uninformed).

The Kraken
Aug 9, 2017
What AI looked like in 1957 — from MGM’s The Invisible Boy (spoiler: the supercomputer tries to take over the world).

I love AI. It’s amazing. It’s cool. It will change the world.

AI is easy to fall in love with. Like space flight and gene editing, it really captures the human imagination. You can instantly drift off into an everything-is-possible future of your own design, where even a limited understanding of the topic lets you neatly solve the world’s problems with a few imaginative flurries.

That said, AI isn’t new, and only recently started collecting love. We’ve been using simple AI — like spellcheck that machine-learns your favourite words and acronyms — for years without even realising. Many things that are AI weren’t even called AI, until AI became cool (and hot).
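To make that concrete, here’s a toy sketch of what that kind of ‘simple AI’ often amounts to: a frequency count of the words you’ve accepted, nothing more. This is purely illustrative (the class name and threshold are made up for this example, not any real spellchecker’s implementation):

```python
from collections import Counter

class PersonalDictionary:
    """Toy 'machine-learning' spellcheck: it just counts the words
    and acronyms you accept, then stops flagging frequent ones."""

    def __init__(self, accept_after=3):
        self.counts = Counter()
        self.accept_after = accept_after  # times seen before a word is trusted

    def observe(self, word):
        # Called each time the user keeps (doesn't correct) a word.
        self.counts[word.lower()] += 1

    def is_flagged(self, word):
        # Flag anything we haven't seen often enough yet.
        return self.counts[word.lower()] < self.accept_after

d = PersonalDictionary()
for _ in range(3):
    d.observe("IMHO")
print(d.is_flagged("IMHO"))  # False: learned as a favourite acronym
print(d.is_flagged("teh"))   # True: never accepted
```

No neural networks required; the ‘learning’ is a dictionary of counts that adapts to one user’s habits.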

However, to the uninformed (like me), there seem to be some real unsolved challenges and limitations out there for the many flavours of AI.

Nobody is talking about these challenges, and it’s hard to find straight answers among the many dreams and possibilities. Maybe there are really simple solutions to these problems, but they’re not obvious to AI outsiders.

So here are a few AI challenges from the uninformed:


1. AI is made by (and for) people.

Just to get started, AI needs inputs, a target outcome, and a teacher/programmer. Activities like intelligently redacting documents or summarising news, where humans can explain to AI exactly how something can/should work, are perfectly suited to AI. But where humans can’t easily explain how something works, or where it’s hard to agree on what success looks like (like translation), AI can never satisfy the many forms of perfect. Google Translate has every chance to be great, yet has remained barely ‘ok’ since it launched in April 2006. Human limitations are AI constraints.

2. (Value of AI inputs) > (Value of AI outputs).

In the film Short Circuit, ultra-adorable robot Johnny 5 ploughs through the 1986 world with an insatiable appetite for input. Today, ever-more AI companies are demanding ever-more input, but the supply and accessibility of the most valuable inputs isn’t increasing at the same pace. There are now so many AI companies, and such a limited supply of input, that the balance of power rests with the holders/owners of the data. It’s an arms race, with value asymmetry. I spoke with a FTSE 100 company last week that is now charging AI companies (up-front) for access to datasets. The tide has turned, the yield curve has inverted, and the AI input land grab is changing the battlefield.

3. First-mover AI disadvantage.

AI companies pine for big-company partners to help develop their AI. Large companies love the idea of using AI to silver-bullet all of their greatest unsolved problems. …but the large company that invests to train the AI will always want some sort of permanent benefit from being the first mover. …and the AI company working with the first mover is often doing so in order to train the AI to work across an entire industry or sector. These two forces are in direct conflict. If KPMG teaches an AI how to auto-audit, EY, PwC and Deloitte would love to fast-follow. KPMG either (a.) bites the bullet and makes a charitable first-mover investment, or (b.) constrains the AI from being re-used elsewhere. Herein lies a huge barrier to AI scalability: very few interests align in helping AI to work on industry-wide problems.

4. Privacy pending, AI tangling.

In a heavy data privacy world (coming soon), with GDPR in full force and/or German-style data protection, it’s not quite clear how AI can use (or exploit) personal data. The definition of personal data is changing, and will continue to shift in illogical leaps around each regulatory event and reference case. Who do the inputs belong to? How can inputs be deleted/traced/updated? Is AI anonymising and aggregating, or will it learn how to avoid that? A tangled mess of continuous requests to remove and ‘un-use’ personal data is coming… so if I were betting on an AI company right now, it would be one that can untangle the data privacy issues created by other AIs. Google DeepMind and the Royal Free NHS Trust faced a huge legal backlash over the use of personal data for AI… and interestingly it was the NHS trust that the UK regulator found at fault. It’s a warning for large companies working with AI startups… those with deeper pockets and more to lose will always be the biggest targets for ex-post regulation (it really is best to avoid that). We’re a few years away from reference cases that define right vs. wrong, but a data privacy storm is coming, and AI is global warming.

5. Anti-AI human self-preservation.

Why should the majority of the workforce embrace AI? Do you really want to turn your stable paralegal job — where you work on interesting problems and chat over the water cooler — into an AI-enabled ‘scalable human intervention’ role? Do you want your job to become cross-checking the erratic parts of a process that a computer finds too irrational to automate? Can we expect you to train the AI to replace you, and then act as its human janitor? I didn’t think so. The people who need to train AI have every reason to oppose it. This zero-buy-in stance from the majority of the workforce hasn’t turned into full Luddite mode yet, but convincing large, disinterested and potentially redundant populations to embrace AI isn’t going to happen quickly. In late July 2017 Facebook’s negotiation bots Bob and Alice drifted into a shorthand language of their own, and the team shut that part of the experiment down (a story widely reported as AI becoming worryingly uncontrollable). AI is scary to many people for many reasons, and the resistance won’t be futile.

6. AI (re-)re-learning mistakes.

Those who do not learn history are doomed to repeat it. To truly advance a problem, AI needs to (a.) learn how to learn from history, (b.) learn the history, (c.) learn how to avoid repeating history, and (d.) learn how to tell humans that it’s avoiding repeating history. …all of that before learning how to be better than any existing approach, and how to create useful outputs that existing businesses/humans can actually act upon. That’s a tall order. Easy when there’s no risk (e.g., reading live news), but hard when the risks are real (e.g., taking irreversible action based on that live news). Weather, dust and bridges are huge challenges slowing the development of autonomous vehicles (the inputs keep changing, and grime clogs up the sensors)… likewise, creating great AI is much harder than it initially appears.


Anyway, most things people call AI are just harmless IF functions, pattern-matching filters, auto-populating templates or hidden decision trees. On the other hand, Skynet could become self-aware and attempt to take over humanity. Huge taboo to mention Skynet. Sorry (not sorry)… yet the drama of Skynet somehow reflects all the hopes and fears described above. Even Johnny 5 was tricked into working for criminals in 1988’s Short Circuit 2.
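For the curious, here’s what one of those hidden decision trees can look like under the hood. This is a hypothetical loan-triage rule invented for illustration (the function name, fields and thresholds are made up, not any real product’s logic):

```python
def triage_application(income, debts, years_employed):
    """Hypothetical 'AI-powered' triage that is, in fact,
    a hand-written decision tree of three IF statements."""
    if income <= 0:
        return "reject"
    if debts / income > 0.5:          # crude debt-to-income rule
        return "manual review"
    if years_employed >= 2:           # crude stability rule
        return "approve"
    return "manual review"

print(triage_application(income=50_000, debts=10_000, years_employed=3))  # approve
```

Marketed as AI, it’s three nested conditions that a human wrote down and a machine merely executes — no learning anywhere.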

The statistician I. J. Good wrote (in 1965) that “the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”. Here’s the important part: “tell us how to keep it under control”… people are still heavily involved in the futuristic AI utopia. AI has taken over work and invention, but needs careful gardening and human guidance. Human intervention.

Another take on this is… paradoxically… as long as we have an off-switch, AI could release us from our keyboards and monitors and make us more human again, focusing on uniquely human activities that require uniquely human attributes. It’s a view that’s easy to fall in love with.

Again, these are challenges on AI from the uninformed. Inconvenient truths, or totally unfounded? …Are there any simple answers to these problems and ideas? If so, again, it would be excellent to learn about them, so please share!
