I’ve always found analyzing data to be very fun (yes, I know). It’s what I do now. There’s just something so exciting about sifting through massive amounts of numbers to discover a trend or uncover a pattern; to make order out of the seeming chaos of the universe, represented in a quantifiable, universal, and, most importantly, unbiased manner. Now, wouldn’t it be that much more exciting if we got computers (machines we’ve built for this exact reason: to perform calculations) to do this automatically? This is, in essence, the power of artificial intelligence (AI) available to us right now, and it is why I’m such a geek for it.
But in order to really appreciate where we are and where we’re going with AI, we need to first understand where we came from. The modern concept of intelligent machines dates to 1950 and the seminal paper “Computing Machinery and Intelligence” by the English mathematician and computer scientist (among many other titles) Alan Turing. In it he discussed how to build thinking machines, and how a human could test the intelligence of such a machine (the “Turing Test”).
The term “artificial intelligence” itself was first coined in 1955 in a proposal for the “Dartmouth Summer Research Project on Artificial Intelligence” (DSRPAI), written by professors John McCarthy and Marvin Minsky, two of the founding fathers of AI. The Dartmouth Workshop that resulted from the proposal was held in the summer of 1956; it was the first workshop of its kind, and its date is generally considered the birth of modern AI. In the same year Allen Newell, Herbert Simon and Cliff Shaw (the former two were attendees at DSRPAI) developed the Logic Theorist, considered the first AI program for being the first to emulate human problem-solving skills.
The subsequent 40 years saw surges and declines in both the advancement of and enthusiasm for AI, including: the first “chatbot”, named ELIZA; the largest pan-European driverless car project; and the two “AI Winters” of reduced funding and interest, each spanning approximately 10 years. Then, in the late 1990s, excitement for AI picked back up. In 1997 IBM’s Deep Blue defeated Garry Kasparov, the reigning world chess champion at the time. In 2000 Honda unveiled ASIMO, the first AI humanoid robot able to walk upright on two legs. In 2009 Google started to develop the next generation of driverless cars in secret. In 2011 IBM’s Watson, a natural-language question-answering machine, competed in the popular TV game show Jeopardy! and defeated two former champions. And most recently, in 2016 and 2017, Google’s AlphaGo algorithm defeated the world’s top champions in the ancient board game of Go, a game long thought to be unsolvable by AI.
APPROACHES TO AI
Many techniques and methods have been developed for AI over the decades: expert systems, artificial neural networks, deep learning, etc. However, most can be categorized within one of two approaches: Symbolic and Connectionist.
The Symbolic approach, sometimes called GOFAI (“Good Old-Fashioned Artificial Intelligence”), explores the idea of designing intelligence in a machine from a high-level, top-down perspective. It bestows logic, reasoning, and symbolic representations onto the machine to enable it to understand and interact with the world, including the application of rules (“stop when the light turns red”) and bodies of knowledge (“a monkey is a mammal”). It seeks to mimic the general, human-like intelligence of problem-solving and decision-making. As an example, it would mean programming into a car all the traffic rules and possible road scenarios in order to make it capable of being driverless. This approach was dominant from AI’s inception in the 1950s, and while it flourished in the 1980s in the form of expert systems (computer systems that reasoned and solved problems using human logic and knowledge as input), it has since fallen into disfavour for its inability to tackle problems beyond experiments and simulations, owing to the complexity and sheer number of possibilities in the real world. Using the same car example, it is very difficult to anticipate and program for every possibility on the road, like a raccoon being chased across a busy street by a toddler.
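To make the Symbolic idea concrete, here is a minimal sketch of rule-based driving decisions. The rules and function names are hypothetical, purely for illustration; the point is that every situation must be anticipated and hand-coded in advance, and anything outside the rules falls through to a generic default.

```python
def decide(light: str, obstacle_ahead: bool) -> str:
    """Return a driving action from explicit, hand-written traffic rules."""
    if light == "red":
        return "stop"        # rule: stop when the light turns red
    if obstacle_ahead:
        return "brake"       # rule: brake for any obstacle ahead
    if light == "green":
        return "go"
    return "slow down"       # fallback for any case the rules didn't anticipate

print(decide("red", False))   # stop
print(decide("green", True))  # brake
```

Note the fallback line: it is the programmer, not the machine, doing all the reasoning, which is exactly why unanticipated real-world scenarios are this approach’s weakness.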
The Connectionist approach seeks to build intelligence from the ground up using many simple and uniform processing units (which could be hardware- or software-based), such as artificial neurons (modeled after how real, biological neurons work), that, when connected together, form a system such as an artificial neural network (or simply neural network). The neural network then becomes “a machine that learns” by sensing and perceiving the world, interacting with it directly or indirectly through its data, and improving its intelligence through pattern recognition and trial and error. Extending the car example: rather than programming in all the rules and anticipated scenarios, the car is equipped with many software nodes that connect together to form a neural network. The network begins as a blank slate, with no inherent intelligence or knowledge. As the car is driven around by a human driver, the network “learns” from the driver how to react in different situations (slowing down in front of a pedestrian, stopping at a red light, speeding up on an open road, etc.). Much like a child learning to ride a bicycle, the car gets better at driving through repetition, with more driving serving as training data.
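The blank-slate, trial-and-error learning described above can be sketched with a single artificial neuron (a perceptron), the simplest building block of a neural network. The “driving” examples below are made up for illustration: each is a pair of observations (pedestrian ahead? light red?) and the correct action (1 = brake). No rule is programmed in; the neuron starts with zero weights and nudges them whenever it gets an example wrong.

```python
# Each example: ((pedestrian_ahead, light_is_red), correct_action)
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]  # blank slate: no inherent knowledge
bias = 0.0
lr = 0.1              # learning rate: size of each corrective nudge

def predict(x):
    """Fire (1) if the weighted sum of inputs crosses the threshold."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# Trial and error: repeat over the data, adjusting weights on each mistake,
# much like practice runs with a human driver demonstrating the right action.
for _ in range(10):
    for x, target in examples:
        error = target - predict(x)
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([predict(x) for x, _ in examples])  # [0, 1, 1, 1]: all examples learned
```

After a few passes the neuron reproduces the braking behaviour it was shown, without anyone ever writing a braking rule; real neural networks connect many such units and learn far richer behaviour the same way.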
Currently the most widely known method within the Connectionist approach is machine learning (ML), under which reinforcement learning and deep learning (the technique used by AlphaGo) fall. Machine learning and other Connectionist methods excel at identifying patterns (finding cat videos) and making predictions (forecasting movie ticket sales) based on past information. They also have wide applications in areas such as computer vision, speech recognition, and natural language processing, many of which we see and use in our everyday lives.
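The “predicting from past information” idea can be shown with one of the simplest predictive models there is: fitting a straight line (least squares) to past data and extrapolating. The weekly ticket-sales numbers below are invented for illustration; real ML systems fit far more complex models to far more data, but the principle is the same.

```python
weeks = [1, 2, 3, 4, 5]
sales = [110, 125, 142, 158, 171]  # hypothetical past weekly ticket sales

# Ordinary least-squares fit of sales = slope * week + intercept
n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(sales) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, sales))
         / sum((x - mean_x) ** 2 for x in weeks))
intercept = mean_y - slope * mean_x

# Extrapolate the learned trend one week into the future
next_week = 6
prediction = slope * next_week + intercept
print(round(prediction))  # about 188
```

The model “learns” the trend (here, roughly 15.5 extra tickets per week) purely from the historical numbers, which is the essence of prediction from past information.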
The reason ML has become so prominent so quickly in just the last few years is the emergence of two key factors. The first is the abundance and availability of data, the raw material needed to train and improve machine learning algorithms; in other words, the emergence of Big Data. It has been estimated that 90% of the data in the world today was generated in the last two years alone, and the rate of growth is ever increasing. This provides a near-infinite supply of raw material with which to make ML algorithms better and more applicable across many different industries. The second key factor, one that can sometimes be overlooked, is the raw hardware computing power now available, the engine needed to process all of that Big Data. Processing that once took hours, days, or even weeks now takes minutes or even seconds, if not less. This has enabled ML models to train much more quickly and become much more practical to use.
PRESENT & FUTURE
Everything discussed up until now falls under the domain of artificial narrow intelligence (ANI), the first of three general stages of AI progression. ANI, sometimes referred to as Weak AI, specializes in one area of focus or one specific task, such as playing chess, making song recommendations, or translating between English and Chinese. It is available to us right now thanks to the umbrella of ML techniques. However, because it is so goal-oriented and task-specific, critics of the Connectionist approach believe it will only ever produce ANI, and that the next stage of AI progression, artificial general intelligence (AGI), will require a different approach to AI research, namely the Symbolic approach.
AGI, sometimes referred to as Strong AI, is in essence a machine or system of machines that exhibits the intellectual capacity of an adult human. It would be capable of the wide array of activities we humans take for granted, from holding a conversation with a stranger or making an omelet, to reasoning, problem solving, abstract thinking, and perhaps even creativity. My uncertainty about creativity comes from the fact that no one yet knows what AGI will look like or how “human-like” it will be in terms of intelligence and abilities. The important thing to keep in mind is that AGI, as far as experts are concerned, will only ever exhibit human-like characteristics. So even if such machines seem capable of creativity, morality, and even emotions, it is only because they were programmed to demonstrate these qualities, or learned to demonstrate them (from data and other inputs); the qualities are not inherent in them. This, as you can imagine, treads into the realm of philosophy and what it means to be “intelligent”.
The next, and ultimate, stage of AI is artificial super intelligence (ASI). As for what it will be like, we can only conjecture. The comparison I’ve seen made is that its intelligence relative to ours would be like human intelligence relative to a monkey’s. Just as a monkey cannot begin to comprehend many human concepts like economics and rocketry, we (as many speculate) would not be able to comprehend many concepts espoused by an ASI. This has many implications, both good and bad. And if you’re like me, this can be both super exciting and super scary to think about…
So this is my brief, brief look at the topic of artificial intelligence. As no more than an enthusiast of AI, I do not claim to know everything (or anything) about the topic. So please do reach out with your comments and feedback; they will be very much appreciated by someone extremely fascinated by the subject matter.