Why is AI so complicated?

Joehewett
Warwick Artificial Intelligence
4 min read · Oct 21, 2021


Stop for a moment, and consider what happens in your mind when you read “Artificial Intelligence”.

If you’re anything like most of us, a somewhat blurry image arises. It’s hard to articulate crystallised ideas about the broader concepts within AI, or to say anything about the field with complete clarity.

This isn’t just a problem you face; it’s one that AI researchers the world over face too. Why is this? Why does AI seem so fiendishly difficult?

Depth

It doesn’t help that only one of the words in “Artificial Intelligence” even has a definition that people agree on, and even then only just. A clear and concrete definition of “intelligence” has, as yet, evaded us.

The most infamous example of this lack of clarity regarding our concept of intelligence came with the conquest of chess in 1997, with the fall of Garry Kasparov to IBM’s Deep Blue. Until that point, it was a widely* held belief that chess was one of the purest demonstrations of intellect — after all, how do you handle a board game with roughly 69 trillion ways for the first five moves by each side to unfold, with anything other than raw, unadulterated intelligence?
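
If you want a feel for where numbers like that come from, you can count the game tree yourself. Here’s a minimal sketch using the python-chess library (an assumed dependency, installed with pip install chess, not something from the original article); the depth-10 count is what lands in the tens of trillions, though pure Python will only manage a few plies in reasonable time.

```python
# A rough sketch of counting chess game-tree nodes ("perft") with
# the python-chess library (assumed installed: pip install chess).
# Depth is measured in plies, i.e. single moves by one side.
import chess

def perft(board: chess.Board, depth: int) -> int:
    """Count leaf nodes of the game tree `depth` plies deep."""
    if depth == 0:
        return 1
    count = 0
    for move in board.legal_moves:
        board.push(move)
        count += perft(board, depth - 1)
        board.pop()
    return count

board = chess.Board()
for depth in range(1, 5):
    print(depth, perft(board, depth))
# Known values from the start position: 20, 400, 8902, 197281;
# continuing to depth 10 (five moves by each side) reaches ~69 trillion.
```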

Deep Blue dispelled that myth rather quickly, and chess awkwardly changed teams in the intellectual Venn diagram from “Intelligence Necessary” to “Just Another Solved Computing Problem”. This characteristically human habit of moving the goalposts whenever a seemingly intractable problem of intelligence turns out to be solvable by a straightforward algorithm is called the “AI Effect”, and it points towards instability in the foundations of our understanding of what “intelligence” really is.

More evidence of the absurd depths to which the problems in Artificial Intelligence descend comes in the form of problems like alignment. Those engaged in AI Alignment concern themselves with trying to figure out how to outsmart systems more intelligent than themselves**, such that the more intelligent agents we create end up with a faithful replica of our own human values.

If that wasn’t enough, consider that at least since the time of Plato we’ve been actively trying to decipher our value systems, and after 2,500 years, what our values and morals actually are is still a subject of intense debate. If we want to continue as the dominant species on Planet Earth, however, we may need to not only agree on what our morals are but also figure out a way of converting those values into binary so that the machine can understand them too. Not only that, but there’s a good chance we only get one shot at getting this value specification right. Oh, and there’s a chance we only have about 50 years. Great.

Breadth

Another thing raising the barrier to entry in AI is the breadth of knowledge the field covers.

Other subjects are hard. Computer Science is hard. Probability Theory is hard. Neuroscience is hard. Economics, Statistics, Game Theory, Decision Theory, Ethics, Philosophy and Linguistics are all complex topics with frontiers of knowledge well beyond where many will ever venture. But the pursuit of Artificial Intelligence has to draw from the frontiers of all of these fields and more. The further we strive forwards towards the recreation of intelligence in silicon, the further we dig downwards into the depths of what it is that our intelligence has created. Perhaps within the fruits of intellectual endeavour lie the secrets of their creator: intelligence itself. Only time will tell.

None of this is to say that Artificial Intelligence is the “most complex” pursuit of all — if you indulge in any topic at a deep enough level, you’ll most likely reach the saturation point of your brain long before you reach the frontier of knowledge. The problem with AI is the exceptional amount of context switching required to think about it holistically.

Just trying to figure out what “rational agent” actually means will land you four Wikipedia links past von Neumann and Morgenstern (1944), trying to find out which Economics modules you need to take to decipher Expected Utility Theory. Take the “Attention Is All You Need” paper too seriously and you’ll be studying Cognitive Psychology before you get past the first word in the title.
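
To make that one rabbit hole concrete: in Expected Utility Theory, an agent facing a lottery of outcomes x_i with probabilities p_i ranks options by the sum of p_i · u(x_i) for some utility function u. Here’s a minimal sketch (the log utility is an illustrative assumption of mine, not something the theory prescribes):

```python
# A minimal sketch of Von Neumann-Morgenstern expected utility.
# The log utility function is a made-up, risk-averse example;
# the theory itself doesn't prescribe any particular u.
import math

def expected_utility(lottery, utility):
    """lottery: list of (probability, outcome) pairs summing to 1."""
    return sum(p * utility(x) for p, x in lottery)

u = lambda wealth: math.log(wealth)   # illustrative risk-averse utility

safe   = [(1.0, 100)]                 # 100 for certain
gamble = [(0.5, 150), (0.5, 50)]      # coin flip, same expected money (100)

print(expected_utility(safe, u))      # ~4.605
print(expected_utility(gamble, u))    # ~4.461 -> the agent prefers the safe option
```

Equal expected money, different expected utility: that gap between raw payoff and value is exactly the sort of thing you end up four Wikipedia links deep trying to pin down.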

Conclusion

Getting involved in AI is daunting. There’s a lot of jargon, a lot of misinformation, and much of the important work seems to be scattered randomly in the comment sections of obscure Google Docs. But that’s a struggle everyone comes up against when they take their first steps — you’re not alone. The trick is to pick a part of AI you’re interested in and run with it. It could be anything…

  • How are governments reacting to AI?
  • How does a neural net actually work at the nuts and bolts level?
  • What actually is Deep Reinforcement Learning?
  • How do we ensure AGI remains aligned with our values?
  • What does a world with advanced machine intelligence look like?
  • How do computers actually see?
  • What are the economic implications of AI?
  • How will AI affect income inequality?
  • What on earth is a Q-Learner? (there’s a tiny sketch after this list)
  • How can I take advantage of AI to solve previously intractable problems?
  • What is a superintelligence explosion going to look like in practice?
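
On that Q-Learner question, here’s a toy sketch of the idea under a pile of assumptions: a made-up five-state corridor environment, tabular Q-values, and hand-picked hyperparameters. The one line that matters is the update rule, which nudges Q(s, a) towards r + gamma · max Q(s′, ·).

```python
# A toy tabular Q-learner on a made-up five-state corridor:
# start at state 0, reach state 4 for a reward of 1. The
# environment and hyperparameters are illustrative assumptions.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                     # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # the Q-learning update: nudge Q(s, a) towards r + gamma * max_a' Q(s', a')
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# the learned greedy policy: should say "step right" (+1) in every state
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```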

Pick one, or one of a million other potential questions that the advent of AI has thrown at us, and start digging.

If you need help on your journey, be sure to come and ask — we’re always in the Discord and would love to discuss.

https://discord.gg/XYSxrms

Warwick Artificial Intelligence Society

https://warwick.ai

* Widely, although groundbreaking research had existed for at least 48 years prior (Claude Shannon’s “Programming a Computer for Playing Chess”: https://www.pi.infn.it/~carosi/chess/shannon.txt).

** This is a vast and perhaps tongue-in-cheek oversimplification of a very tricky topic. You can read more about the fascinating work done by AI alignment researchers at https://www.alignmentforum.org/
