The Artificial Intelligence Landscape

Aditya Khanna
23 min read · Aug 6, 2016


Introduction

Artificial Intelligence is the most fascinating topic of our time, not only because it is an important topic, but because it is the simplest of all the important ones. Compare it, for instance, with the question of what happens when you enter a black hole. That one would require people to first get comfortable with the fact that it's black because its gravity sucked away all the light from around it.

What? You can suck light? Are you sure about that?

Of course not, but that’s what Brian Greene seems to be saying sometimes.

The one about black holes is an important question, but it's also a tough one to get your head around. Now compare it with the following question — what would you do if you were faced with someone who could do everything that you can, and do it better than you?

Nothing difficult to understand… and incredibly easy to connect with. Slowly, this question gets everyone thinking. And I don't need to give examples of the kind of thoughts that follow. The answer is often a personal one for everyone.

So almost everyone who has heard of artificial intelligence… gets it. And everybody wants to know — with a sense of anticipation not enjoyed by any technology in the past — are we there yet?

Most people believe we’re not there. Okay, but how far have we come, and what part is left?

Foxconn removed 60,000 workers from its factory in China and got robots to do their jobs, but Siri still hasn't fallen for your warm personality and proposed to you in the voice of Scarlett Johansson. There's a robot now that can build a brick house in two days, but the robot that predicts farts in a crowded metro and defuses them before the explosions is still nowhere to be found. There is some disagreement as to what marks the arrival of AI. For some people, what matters is super intelligence. And they have an easy way to detect it. The onset of super intelligence would be marked by a singularity event. What is a singularity? Fuck knows…

No, quite literally…

The singularity is supposed to be a point in time beyond which what happens is not known to anybody, and possibly cannot be known to anybody before it actually happens. It's a good time to bring attention back to the cute black holes we started with, because they share the concept of singularity with artificial super intelligence. The currently accepted answer to the question I posed at the beginning of this piece is exactly this.

What happens when you enter a black hole — fuck knows

So for this set of people, we've clearly not created AI yet. But according to some of them, we're not far off. Here's a fancy name you can remember to feel closer to this group of people — Ray. Not fancy enough? Try Weil… Kurzweil. Yup, you guessed it. James Bond happens to be the Kurzweil of the fiction world (not that Kurzweil's world is considered anywhere close to being realistic).

On the other side is the group of people without much hope, the girlfriends of the world for whom your love will never be enough. "Oh, your new car is nice, but it's not a Jaguar." Trust me, some critics don't sound very different when they dismiss every new development in AI as "just another program, and not real AI". This kind of girlfriend effect is very common in the AI world, although the term for it is more politically correct — it's called the AI effect.

So although you might not have witnessed the wonderful or horrible things you've been made to imagine, there are changes taking place in the employment structure. Should we be anxious already? Should we worry about our jobs, or our lives? Terminator Salvation or The Matrix Revolutions?

I've delved into these questions for a long time now and have tried to answer some of the popular ones. Through the course of this article, I have tried to compile the answers into an exhaustive account of the entire artificial intelligence landscape. In other words, the next time you come across anything regarding AI, you should be able to place it in one of these buckets in your head. It might not be a straightforward way of understanding things, but it sure would help resolve some of the mysteries surrounding AI. Let's start with the first mystery — Consciousness.

Consciousness

Whether you're deep into technology or can't even get your phone to stop misbehaving because you didn't know there's something called a reboot, you expect artificial intelligence to be able to think like humans. This way you can have your most intimate conversations with it (think Samantha from Her). Conversations that you might not be able to have with your closest friends, because somehow, you are assured that you won't get judged. Would you be bothered by what a machine thinks about you? Not as long as you know it's a machine. And that's the interesting part — in the near future, you might not know whether you're interacting with a machine or a human being. The single most important thing that people expect from AI is to be exactly like a human (at least for some time, before it flashes past us on the evolutionary path). In this pursuit, the question of how the mind works, or how consciousness works, is among the toughest unsolved problems. All the other pieces are relatively easier to take care of. With consciousness, the world doesn't even agree on how to define it yet.

This brings us to the first important objective of the study of AI — replicate human consciousness with a technology that is not biological reproduction. A technology that we introduce ourselves. At this point you should recall all the initiatives in AI you are aware of and see which ones have this as their immediate objective. I don't think DeepMind falls in this category. Although they are among the pioneers of intelligence research in the world, the search for consciousness is not one of their immediate goals. You would be lucky to find such initiatives in the popular press, since they are mostly restricted to university labs, or to a few rare, obscure companies. What are these people trying? Should you care about this research? How close are they to creating consciousness in the lab? How do you go about creating something you don't even fully understand?

I can't go into the details of this dimension given the restricted scope of this document, but the idea is as follows — we might not currently understand exactly what happens in the brain, but we can crack open the skull and look at what lies inside. We know what it looks like and what it's made of. In other words, we can observe the hardware of the brain. So let's just recreate the hardware, switch it on, and see what happens. The trick seems smart and simple. So why hasn't it been done yet? People are trying, but the hardware is just way too powerful.

The works of stalwarts like Hans Moravec and Lloyd Watts place the estimate of the brain's computing capacity at around 10^14–10^18 cps (computations per second). What does that mean? It simply means that the brain is always bustling with a lot of activity. Imagine you spot Angelina Jolie walking towards you and you decide something must be done. The kind of activity it would require across the universe to transform you into Brad Pitt, make your shirt come off, and finally make her fall for you in that one moment — is a lot of activity. The brain simply has more than that going on inside it at any point of time. It takes supercomputers like IBM's Blue Gene/L or Watson to even come close to that kind of computing power. And to imagine that we carry all of that within the size of a football on our shoulders is really very humbling. So it's not easy, and people are really throwing their weight behind the research to get it done. You can get a good account of the kind of research already underway in Kurzweil's book, The Singularity is Near. You would be amazed. Here's an excerpt from an excerpt in his book —

“ It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half-century of development. Indeed, for that reason, many long-time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. “

— Hans Moravec, "When Will Computer Hardware Match the Human Brain?", 1997

Okay, that doesn't sound very optimistic. I promise the follow-up lines to this one are full of optimism. This is my way of urging you to go and pick up the book. For now, it suffices to know that this pursuit is on. In fact, the community working in this field is off to the races, and we might get there soon.
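To put those cps numbers in rough perspective, here's a quick back-of-envelope sketch in Python. The figures are loose, order-of-magnitude assumptions (treating one 'computation' as roughly one floating point operation), not precise benchmarks:

```python
# Rough, assumed orders of magnitude (not precise benchmarks).
brain_low, brain_high = 1e14, 1e18  # Moravec/Watts style estimates, in cps
laptop_flops = 1e11                 # ~100 GFLOPS, a generous laptop of 2016
blue_gene_l = 3.6e14                # Blue Gene/L peak, roughly 360 teraflops

print(brain_low / laptop_flops)     # ~1000x: a laptop misses even the low estimate
print(blue_gene_l / brain_low)      # ~3.6x: the supercomputer just clears the low bar
print(brain_high / blue_gene_l)     # ~2800x: the high estimate dwarfs it anyway
```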

So what does this imply? For a moment, let's drift away and think of the possibilities that this achievement would entail. In some ways, this is nothing new. We've been creating consciousness at a rapid pace in the form of babies all around the world. Yet something is new and different about artificially created consciousness. A good starting point for thinking about this is to look at your laptop or phone and imagine that you yourself were trapped in there somewhere. What would you do? With all the hardware, touch screen, camera and wifi, what would you do? Would you have the same kinds of aspirations you do today?

Would an artificial consciousness have the same emotions of happiness, sorrow, confidence or uncertainty as you do? All of these are important questions, but the one we need in order to move this discussion forward is the following — would the machine know the same set of things that you do, or would it know much more? Once a machine becomes conscious, wouldn't it quickly find out everything there is to know? And then become capable of doing everything possible? My answer is no. I have reasons to believe that it isn't possible to know everything, even for a machine, at any time in the future.

I get a feeling sometimes that when consciousness is first created, I (or someone with curiosities similar to mine) would go to this machine and start asking questions — “where did you come from?” or “what was it like there?” and it would say, “I have no clue mate, I’m so blanked out I feel like a vegetable…”

I call this the AIsenberg uncertainty principle.

AIsenberg’s uncertainty principle

It's very simple — we can never know everything.

To some this might be obvious. Others need some light to appreciate the darkness.

Every time we ask a question, we unknowingly make a transition to a state of not knowing something. And we do that quite frequently. The process starts with an encounter with one or more unknowns, followed by the act of knowing, and ends in the generation (and sometimes storage) of knowledge. If I had to give you the most universal observation possible, it would be this — everything that you've ever encountered, or ever will encounter, is either known to you or is unknown. Here's the idea in the form of an equation, for later recollection —
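Everything = Knowledge + ε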

The inspiration for using epsilon for uncertainty, or unknowns, comes from my engineering background. We use epsilon for all things nasty — error in measurement, noise in data, uncertainty in models. With every initiative we say, 'okay, we've done our best, let's see how it turns out'. The 'let's see' part is epsilon. I am obsessed with it and could go on and on about it, but would rather suggest that you learn about it from Nassim Nicholas Taleb. You'll instantly know what I'm talking about. Here it's sufficient to know that epsilon represents the things we don't know about.

This, however, has hardly ever been a limitation. In fact, it's always a good starting point for any initiative to accept that there will be unknowns, and often unexpected turns of events. This, and similar guidelines from over the centuries, have been made part of what is called the scientific approach to problem solving — or Science, in short. We've known it forever, but can seldom follow it. We are predisposed by evolution to think irrationally, in ways that fly in the face of Science. That's not a bad thing — it has kept us alive till now. And no, it's not simply a behavioral quirk that we can correct if we go to some kind of rehab. It's part of our biology to be irrational, and part of our environment to reward it. Yet it's important to appreciate the scientific approach, because we now have at our disposal tools that are capable of scientific thinking. Moreover, these tools don't have the biological limitations that we do. More on this in the following section, intriguingly titled Computation.

Computation

From the previous section we see that if tomorrow we have artificial consciousness, it would still need to adhere to the AIsenberg principle, at least in its nascent state. The machines would also have to deal with things they don't know about, and learn as they go forward. So we've found some common ground with the machines — uncertainty in problem solving. More generally — problem solving itself. And while we might share this with the machines in the future, it's what we share among all of us today. Everything we do can be defined as an attempt at solving some problem — from something as trivial as sipping coffee (solving the problem of transferring nutrition from a cup to our body) to building a gigafactory that produces half a million cars a year (God, what a madman).

Problem solving is more fundamental to our existence than intelligence is. If we have to shift from human intelligence to a broader understanding of it, then a functional definition of intelligence would be the ability to solve existing problems, and to identify new ones to solve.

How does problem solving link with the AIsenberg principle?

A problem is always something we encounter in the real world. And the solutions that we come up with always emerge from what we already know. As we try solutions, we run into the unknowns and gain a better understanding of the world, thereby increasing our knowledge, till we are finally able to get a solution to work. After every such cycle, we have an expanded knowledge base.

Convince yourself that every form of artificial intelligence that exists today is about solving some problem. Siri is an attempt at solving the problem of controlling your phone through voice input. A driverless car is a solution to the problem of being transported from one point to another. The attempts described in the previous section solve the problem of replicating the hardware of the human mind.

Would it be right to generalize the study of human intelligence into the study of problem solving? Let's keep this up for debate. In the meantime, the following objective doesn't seem any less valuable than replicating human intelligence —

Create a general purpose problem solving framework (or agent, or entity) that is capable of inducing any change imaginable, possible and desirable (this last one is optional)

We are lucky in this respect, since we already have an abstract set of ideas for general purpose problem solving — the scientific framework. Science gives us the highest quality of knowledge that appears in the equation above (I have to say 'high quality' knowledge because some people like to use the word knowledge also for the figments of imagination generated from religious beliefs). It lets us identify what exactly to look for in the realm of epsilon (from among the unknowns) by allowing for hypotheses. After experimentation, every time a hypothesis is proven right or wrong, our knowledge base increases. For centuries now, we've been growing the 'Knowledge' part of the equation at a tremendous pace, and Science has had a crucial role to play in that.

This is not to say that Science is the only way forward. The method has its limitations. In fact, many of the leaps and advancements in our knowledge have come from random accidents and bizarre coincidences. But sooner or later, every idea needs to churn through the scientific screening process to qualify as knowledge.

The natural line of inquiry at this stage would be to develop systems that can optimize problem solving using the available technology of the day, within the ethos of science. We observe the problem at hand, build sufficient understanding to arrive at a solution, then indulge in trial and error until the problem is successfully solved. Whether it was deliberate or not, we ended up developing a paradigm in which all these steps can be delegated to machines, for good or for bad. Enter Machine Learning.

Machine Learning

A common story in societies where swimming pools are a relatively new arrival is parents telling their kids how they learnt to swim by jumping into a deep well and making sure they didn't drown. Machine learning works more or less the same way (reinforcement learning works in exactly the same outrageous way). Let's take a minute to analyze what your Who Dares Wins dad just threw at you:

Problem being addressed — Ensure no drowning

Path to finding a solution — Trial and error to experience the water’s response to different body movements to ultimately identify the configuration in which he doesn’t drown

Expanded knowledge base at the end of the endeavor — the ability to swim in calm waters.
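If you're curious what that looks like in code, here's a minimal toy sketch of the same reward-driven trial and error: a hypothetical 'well' with made-up depths, physics and rewards, written in plain Python rather than with any particular library.

```python
import random

# A toy "well": states are depths 0 (surface, safe) to 4 (drowning).
# Actions: 0 = flail about, 1 = kick upwards.
states, actions = 5, 2
Q = [[0.0] * actions for _ in range(states)]  # knowledge starts out empty

alpha, gamma, explore = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    """Hypothetical water physics: kicking usually pushes you up."""
    if action == 1 and random.random() < 0.8:
        nxt = max(state - 1, 0)
    else:
        nxt = min(state + 1, states - 1)
    reward = 1.0 if nxt == 0 else (-1.0 if nxt == states - 1 else 0.0)
    return nxt, reward

for episode in range(500):      # many visits to the well
    s = states // 2             # thrown into the middle of it
    for _ in range(20):
        # explore sometimes, otherwise act on current knowledge
        if random.random() < explore:
            a = random.randrange(actions)
        else:
            a = Q[s].index(max(Q[s]))
        nxt, r = step(s, a)
        # trial and error updates the knowledge base (the Q-table)
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

print([q.index(max(q)) for q in Q])  # learned best action at each depth
```

The Q-table at the end is the expanded knowledge base: your Dad's hard-won feel for the water, written down as numbers.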

It's interesting to note that the knowledge thus collected is not always shareable or easily communicable. After coming out of the water, your Dad may not be able to instruct the next daredevil in line on the exact steps to follow during his first visit to the well. However, thrown in the water again, your Dad would regain his swimming ability by himself. This observation will come in handy in just a short while.

Machine learning refers to the idea of throwing a machine into a well and expecting it to learn how to swim, just like your Dad did. Additionally, it involves adjusting your understanding to account for how the machine is different from your Dad. Most contemporary machines don't have hands and legs, for example. While your Dad relies on inputs from the five senses of the human body, the machine can perceive only digital data (that's why ML is so often about data handling, or data science). Similarly, while your Dad can react with body movements, the machine can only react by throwing out more digital data. And that's about it.

ML in its current form is about a machine constantly taking in data and giving out data till it eventually learns to give out exactly the data required to solve the problem at hand.
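A minimal sketch of that loop, with every name and number made up for illustration: a 'machine' whose entire knowledge is one adjustable number, nudged until the data it gives out matches the data we want.

```python
# Data in: inputs paired with the outputs we want the machine to give out.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hidden rule: output = 2 * input

w = 0.0  # the machine's entire "knowledge": one adjustable number

for _ in range(100):
    for x, target in data:
        out = w * x             # data out
        error = out - target    # how wrong was the output?
        w -= 0.01 * error * x   # nudge the knowledge towards "less wrong"

print(round(w, 3))  # ~2.0, the learned transformation
```

Here the knowledge is the single number w, and, conveniently, we can read it right off.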

In the process, the machine learns the transformation it needs to apply to the incoming data to get the desired output — that's knowledge. But it may not always be able to communicate how it arrives at the data it throws out. Most traditional ML algorithms end up with a mathematical function of the input set. If you can extract that function, the machine has done a good job of communicating its knowledge. A deep learning model, on the other hand, is not able to communicate exactly how it works — its knowledge is not easily shareable.

And that’s how we’ve managed to outsource so many steps of problem solving to the machine. It starts with a problem, engages in vigorous trial and error, builds an understanding of the process, and ends up with a solution.

Needless to say, the current ML infrastructure is extremely primitive compared to our mind's problem solving ability. But the way it has captured the underlying process is nothing short of miraculous.

Whether we outsource all steps of problem solving to the machines, or keep some parts for ourselves, is a matter of design, depending on the technological ability of the day (check out IBM's Cognitive Computing initiative for an example of a hybrid machine-human approach). Ultimately, both the machine and the mind rely on a very fundamental property — the perception of change in their environment. And I would like to close this section with a short discussion of this property.

It took me a lot of twisted reasoning to finally zero in on the word 'Computation'. The mind works in amazing ways. I knew from the beginning that I wanted computation as the one-word attribute for this set of ideas, but for a long time I couldn't justify to myself why. Hence you might find it irritating how I've wandered along divergent lines of expression here. Bear with me while I try to make it fall into place, and help me with a better description by providing some feedback.

Intelligence is about problem solving. Problem solving is about inducing some kind of change. Inducing change is a form of computation. Lastly, imagining a change is another form of computation. Distributing 6 apples among 3 kids equally, or imagining how you would distribute 6 apples among 3 kids equally, would require you to carry out the same process in your head. I like to refer to that underlying process as a computation. Machines do that too. A machine’s ability to detect a change from 0 to 1 and vice versa is a computation, and that forms the basis for its ability to do everything else. 0 to 1 is the most fundamental change that a machine is able to bring into effect, and the rest builds on top of that.
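As a tiny illustration of how the rest builds on top of that 0-to-1 flip, here's ordinary addition (a standard bit-twiddling trick, sketched in Python for non-negative numbers) expressed purely as bit flips:

```python
def add(a, b):
    """Addition built entirely out of flipping bits between 0 and 1."""
    while b:
        carry = a & b      # positions where both bits are 1
        a = a ^ b          # flip bits where exactly one input is 1
        b = carry << 1     # carry the overflow one position to the left
    return a

print(add(6, 3))  # 9, computed from nothing but 0/1 flips
```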

Any more abstract exploration would throw us off the overall objective of this article. So I would urge those who want to find out what it is about machines that makes them capable of developing intelligence to look into the theory of computation. It's serene.

The Matrix

“people fear what they don’t understand… ” — Andrew Smith

It's true. We do. If you want to disagree, head over to Quora; there's an entire discussion about it.

What is it about AI that spooks people out? We've established two main things about it so far. One, that we are getting closer and closer to developing the ability to solve harder and harder problems. That should be a good thing. Two, that we might some day be able to synthesize consciousness artificially, and that should give us more knowledge about our own selves. What could possibly be worrisome about either of these developments? Or, what is so difficult to understand that it might be scaring people?

There are two ways in which humans think — via induction and via deduction. Induction is when we've seen something in the past similar to what is being witnessed and are able to generalize and say, 'oh right, I know this'. Artificial consciousness has no precedent whatsoever that we can relate it to. We genuinely know absolutely nothing about it.

Deduction is the lesser-used way of thinking, in which we piece together cause and effect and fill blanks in sequences of logical reasoning. With respect to AI, people like Nick Bostrom have been doing this for quite some time, and their deductions are disturbing. I like deductions too, so I would like to present some in the following passages, but I will try to keep them as non-disturbing as possible.

Before that, just to be clear, not everyone is concerned. Kurzweil's camp is suspiciously optimistic about the future of AI, while referring to it as the Singularity at the same time. Andrew Ng maintains that the kind of fears being raised are way too far-fetched, and that we don't have anything to worry about in our lifetime. Maybe we have nothing to worry about even in our great-grandchildren's lifetimes. But that is not sufficient ground to dismiss an idea. As Tim Urban pointed out, people dismiss fears regarding AI as too far away in the future, but no one has been able to find a logical counter-argument to prove that these fears point to implausible scenarios. Finally, ideas have merit not just in their plausibility, but also in how much they can fire your imagination. So why not explore…

A long time ago we figured that moving things around is way too inconvenient, and that we needed to do something about it. Initially, a plank on a couple of logs was not the perfect-looking solution, but it worked in some cases — it helped us move things around. As time passed, it evolved into technology that now moves us around, faster than we can ever move on our own feet. Trains go at something like 400–500 km/h today. You go at… 5?

Fast forward to the present. We're struggling to make machines behave like us. Computer vision systems are barely able to identify objects in images using deep learning. This doesn't bother us, because the technology is nowhere near as good as we are. But it doesn't take long for a technology to evolve. A computer might not be able to recognize the breadth of objects that we can, but it can definitely process what it sees much, much faster. Have a look at this, and you would have a better chance of winning against a train than competing with this camera, with respect to speed.

So, we have a proven track record of outperforming ourselves with our creations. Whatever we mechanize, we eventually leave ourselves far behind. We don't understand non-linear dynamical systems very well. It's now widely accepted that we screw up macro-economic decisions because an economy is a complex adaptive system, and we're bad at understanding those. We are already trying to make machines understand our world, and they will end up outperforming us. They would eventually have a much better understanding of our social dynamics than we do. In some literature (it's a blog post, I can be that vague) it has been suggested that our intelligence might someday compare with machine intelligence the way a mouse's intelligence compares with ours today. How empowering…

So, machines have been surpassing us in the past, and they will keep surpassing us in the future. It is the question of consciousness that makes things tricky.

Human beings have a consciousness and are biologically programmed to ensure their survival. When machines develop consciousness, would they share this goal of self-preservation? We don't know.

What if self-preservation is not a matter of being programmed into the machine, but turns out to be an emergent property of a complex consciousness? Somewhere, life began as self-replicating molecules emerging out of a complex system of interacting chemicals. Somewhere, human consciousness emerged out of a complex system of animal species competing for survival. What if the need for survival emerges out of a complex system of artificially conscious beings interacting with each other?

To self-preserve or not to self-preserve.

First, let's assume that machines have no desire for self-preservation. Here, despite consciousness, machines develop an important distinction from humans and give up a critical ability — making choices out of self-interest. Self-interest guides our decision-making more than logic or rationality does. It guides our actions, and thus our experience, and thus the knowledge that we have. No two individuals share the exact same knowledge because no two individuals have had the exact same experiences.

Not only does self-interest ensure a different knowledge set for every individual, it also limits the amount of knowledge we are able to share or communicate with other people. This might be because a lot of the underlying motivations or axioms on top of which we build the rest of our knowledge reside in the section of the mind called the sub-conscious. And we don't have particularly good access to that region, in general.

Here again there are two possibilities. The less exciting one is that the nature of consciousness may be such that its subconscious region is hidden even from the machine. In this case machines would face the same limitation we do in communicating with other machines. The other possibility is that the machine has access to its entire consciousness, and is thus able to communicate its entire knowledge base to other machines. Everything known to one machine is known to every other machine as well. In a way, there would be no boundaries between the consciousness of two machines. You might be surrounded by different kinds of devices performing very different functions (the one that irons your clothes will not also be driving you around), but behind their physical incarnations there would be a single consolidated consciousness with visibility of every event witnessed by each of those machines. There would be a single ethereal presence all around you, in everything you interact with. I don't know about you, but to my mind this brings forward just one word — The Matrix.

Ok Bye…

Next…

Now let's look at the other possibility — the one in which the machines develop the objective of survival. If every conscious machine has the survival instinct and needs to compete with other machines, then it may suit them not to share all information with each other, sparing us from the Matrix. This development could further branch into two different possibilities. In the first, the machines would think of themselves as separate from humans. We might exist in the same ecosystem, but they would be them and we would be us. It would be like ordinary human beings living among superheroes and mutants, all going about their daily business. We may not hold a special place in a machine's heart. It is ideas like these that start to scare people. We get a sense of being dispensable.

There is a way to prevent this from happening, and that constitutes the second possibility. Even if an artificial consciousness arises tomorrow and scales up its intelligence to become superhuman, instead of being separate and disconnected from human consciousness, it could develop as an extension of human consciousness. The machines then won't be separate from us. Instead, as our consciousness evolves and expands into the domain of machines, their capabilities would simply be extensions of our own. Extended consciousness might be a consumer product, or it might end up being a step in our evolutionary path, making us what many people like to call transhumans.

So which of the above possibilities is going to turn into reality? We don't know the answers to such questions yet. And it's possible that we might not know these answers in advance (before consciousness is realized). This is the first glimpse of the Singularity that Ray Kurzweil often talks about.

It's important to understand what differentiates these ideas from the ones discussed in the previous sections. As long as you have an objective that you choose to achieve, the ideas and initiatives you take fall in the category of problem solving. Everything you do to achieve human-like consciousness artificially falls in the category of consciousness. But when artificial consciousness decides to have its own set of problems to solve, in the same environment in which we exist, we end up in an ecosystem we can't yet describe. This ecosystem should be the focus of this set of ideas and discussions. It is difficult to find a sense of purpose in these ideas other than dealing with this super-intelligence. These conversations need to be not just technical, but also philosophical.

In conclusion, I believe that despite the uncertainty accompanying the singularity, there is nothing to suggest that machines in this era would violate the AIsenberg principle. Their knowledge base would grow at a rapid pace, but the machines would have their own set of uncertainties and unknowns to deal with. Only if the machines found a way to wipe out all the unknowns would they become the Gods of the universe (or the multiverse, I don't know… obviously).

God

So how do you become a God? This segment is here primarily for the sake of completeness with respect to the AIsenberg principle. We've maintained so far that any form of intelligence that exists in the future would encounter things in its environment that it does not know about — that epsilon would always be non-zero. While we do have a scientific basis for the quantum counterpart of this idea — Heisenberg's uncertainty principle — I don't have sufficient scientific theory behind the general principle I've been using here. The AIsenberg principle is a belief that I hold, and as scientists, we are often tempted to question beliefs, including our own. So, in the unlikely event of the principle being violated, in a seemingly non-existent future, there might exist an entity for which epsilon becomes zero. In other words, an intelligence that achieves zero epsilon would know everything. I personally don't find myself drawn to this idea because it swiftly gets very mundane, but I invite others who think otherwise to express themselves here and produce something interesting. I'm always game for something interesting.
