The Past, Present and Future of A.I.

Zacharias Voulgaris
Published in aipinion
Sep 12, 2018 · 7 min read

Introduction

A.I. has gained a great deal of publicity over the past few years and has even become a household name, at least in certain countries. But it wasn’t always like that. Even in times when some people would talk about A.I. (usually in a sci-fi setting), it was quite niche and perhaps even laughable. After all, people back then drew clear boundaries between what was science and what was science fiction. So, what changed, and what milestones did we pass along the way? Also, what does the future hold for us when it comes to A.I. and its impact on the world we live in?

Reminiscing the Past of A.I.

Back in the 1950s (yes, over 60 years ago!), the perceptron was born. Many A.I. researchers consider it the first A.I. model, and some scientists back then speculated about the potential of this magnificent mathematical construct, which so closely mimicked the neuron, our brain’s most fundamental element. Yet, even though a single neuron is an astounding kind of cell (lasting far longer than many other types of cells, for example), its artificial counterpart was very limited. In fact, those who studied the perceptron quickly figured out that although it could approximate many a function and solve a variety of problems, it failed miserably when it came to non-linear scenarios. As the majority of data analytics problems involve non-linear data landscapes, the perceptron didn’t look all that promising.
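
To make that limitation concrete, here is a minimal sketch (my own illustration, not something from the original research) of the classic counter-example: a single perceptron learns the linearly separable AND function easily, but it can never get the non-linear XOR function fully right.

```python
# A minimal perceptron sketch showing why a single linear unit cannot learn XOR.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable

def train_perceptron(X, y, epochs=50, lr=0.1):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(np.dot(w, xi) + b > 0)   # step activation
            w += lr * (yi - pred) * xi          # classic perceptron update rule
            b += lr * (yi - pred)
    return w, b

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b = train_perceptron(X, y)
    preds = (X @ w + b > 0).astype(int)
    # AND reaches 1.0; XOR never does, since a linear boundary
    # can get at most 3 of its 4 points right.
    print(name, "accuracy:", (preds == y).mean())
```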

However, the ambitious A.I. researchers of the time didn’t give up. They quickly discovered that by combining many such artificial neurons they could create a very powerful system, which they called the artificial neural network (ANN), and this system had no trouble dealing with non-linear problems. However, the computing technology of the time was quite limited, so it was rare to see ANNs that were large (i.e. deep) enough to do anything truly amazing, while the variety of other options available for data analytics made ANNs a somewhat niche alternative.
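
As a quick illustration (again my own sketch, with hand-picked weights rather than any historical training method), just two layers of the same threshold neurons are enough to represent XOR, the very function a single perceptron cannot handle.

```python
# Two stacked layers of threshold neurons representing XOR exactly.
import numpy as np

step = lambda z: (z > 0).astype(int)      # same threshold activation as before

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: unit 1 fires for OR(x1, x2), unit 2 fires for AND(x1, x2)
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])

# Output layer: fires when OR is on and AND is off, i.e. XOR
w_out = np.array([1.0, -2.0])
b_out = -0.5

h = step(X @ W_hidden.T + b_hidden)
print(step(h @ w_out + b_out))            # -> [0 1 1 0], the XOR truth table
```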

Not too long after ANNs made their debut, another landmark discovery was made, arguably the most innovative one in A.I. since the ANN itself. This time a different approach was taken, as the innovator of this new kind of A.I. was a rather pragmatic scientist specializing in control systems, the famous Prof. Zadeh. His A.I. framework, which he called Fuzzy Logic, enabled computers to “think” with ambiguity as part of their process, through the ingenious concept of fuzzy sets: mathematical entities that model uncertainty through custom-made functions he called “membership” functions. Applying this strategy to predictive analytics gave rise to what are known as Fuzzy Inference Systems. The Fuzzy Logic approach, also referred to as the possibilistic approach, offered a viable alternative to the probabilistic models that statisticians employed. The best part of this approach was that it worked really well without much computational cost, making it easy to implement even on machines that didn’t have the computing power of a PC.
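
For a flavour of what a membership function looks like, here is a small sketch (my own illustrative example, not one of Zadeh’s, with the “warm” temperature range chosen arbitrarily): instead of a temperature being strictly “warm” or “not warm”, a fuzzy set assigns it a degree of membership between 0 and 1.

```python
# An illustrative fuzzy membership function for the fuzzy set "warm".
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

temps = np.array([10.0, 20.0, 25.0, 30.0, 40.0])
warm = triangular(temps, 15.0, 25.0, 35.0)   # "warm" peaks at 25 degrees

for t, m in zip(temps, warm):
    print(f"{t:.0f} degrees -> membership in 'warm': {m:.2f}")
# e.g. 20 degrees -> 0.50, 25 degrees -> 1.00, 40 degrees -> 0.00
```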

Beyond these two A.I. frameworks, there were others, such as hybrids of the two (a family of models called neuro-fuzzy systems, the best-known of which is ANFIS), each with its own set of benefits. Yet computing power was still very limited, so all these fancy constructs couldn’t deliver enough value to make A.I. research seem worthwhile.

That’s what caused the so-called A.I. winter, a time when A.I. research couldn’t easily attract grant funding, so fewer and fewer people pursued it. One person who continued working on A.I. systems despite these overly adverse circumstances was Prof. Hinton. Other A.I. winters followed, yet A.I. kept evolving as a field, making more A.I. systems viable options to consider, should computers become powerful enough to implement these models efficiently.

Observing the Present of A.I.

Currently we live in an A.I. golden age, not because some super-genius came up with a fancy A.I. algorithm all of a sudden, but because computers grew more powerful, while the hardware that makes A.I. systems easy to implement got significantly cheaper. The fact that A.I. enthusiasts transcended academia (e.g. through the R&D departments of certain tech companies and a few start-ups) may have helped too. Moreover, more and more people were affected by A.I. systems, mainly through electronics that made use of them (e.g. smartphones), bringing monetary value to a field that soon became independent of computer science and electronic engineering.

So, today we are exploring various applications of A.I. through various data-science-related systems (e.g. virtual assistants like Amazon’s Alexa), as well as novel hardware (e.g. humanoid robots like Honda’s ASIMO and the metal “dogs” of Boston Dynamics). The A.I. software is improving too, but only because people like Prof. Hinton and his students have stuck around. The most interesting artifact of our era, however, is that it now makes business sense to have a company that offers A.I. as its main product or service. Companies like DeepMind (acquired by Google in 2014) can focus primarily on A.I. applications, while even more conservative companies, such as IBM, don’t hesitate to branch out and develop advanced (yet still narrow) A.I. systems like Watson, which can stun people with an intelligence that feels more relatable than the mathematical models of the previous eras.

Speculating about the Future of A.I.

So, what’s next? Where will all this A.I. craze take us? Will the machines rise and take over the world? As much as that possibility makes for some great sci-fi films, the truth is that it’s in no one’s interest to let A.I. evolve unchecked. Surely there is merit in the idea of a general A.I. (aka AGI or Strong A.I.) that can perform a variety of tasks as well as a human, if not better. Yet it’s unlikely to become something everyone has access to, like a calculator, even if the technology behind A.I. systems is bound to drop in price and production cost. Also, it’s probably not going to come about in the next decade, leaving us sufficient time to prepare for it and research the matter of A.I. safety more thoroughly (this is already an active field of research).

We’d also expect to see more and more autonomous A.I. systems that can learn more efficiently than conventional deep learning networks. Yet fail-safes are bound to be in place to ensure that they don’t become too full of themselves, so to speak. After all, just because it is intriguing to see A.I. systems in charge of their own training and “reproduction”, it doesn’t mean that anything good can come of such an initiative if there is no human in the loop.

Moreover, the applicability of A.I. is bound to increase, offering more and more value to various industries (e.g. the medical sector, through not just diagnostics but also treatment, such as robot-assisted surgery). This will happen mainly through automation, though additional processes may also become feasible through A.I. (e.g. new materials or new designs that open up a new series of possibilities for consumer products and for B2B or B2C services).

All this may have a disruptive effect on our society, creating the need for new socioeconomic structures, such as the granting of a basic income to every citizen over a certain age. New kinds of jobs are also quite likely, though these occupations may be beyond what we can currently fathom, so flexibility and adaptability are likely to become the most sought-after qualities in an employee.

Final Thoughts

A.I. was, is, and is quite likely to remain a fascinating field of science and technology. Even if it is quite “math-y” at its core, its applications make it rather appealing to both the A.I. professional and the enthusiast. So, it’s bound to remain a relevant factor in our world, even if certain people choose to stay more traditional, for their own reasons.

There is no doubt that it has its dangers, much like every other technology out there, but if we handle it with care and foresight, it’s bound to be an asset for us. Perhaps it will even become ubiquitous, much like electricity and internet connectivity, offering its benefits without drawing much attention to itself and without giving its users much cause for concern. Maybe it will change our world fundamentally, but maybe it won’t, just like blockchain tech hasn’t changed the use of currency in most countries, regardless of what that technology’s visionaries had predicted.

Whatever the case, if enough care is taken so that it co-evolves with us instead of spinning out of control, it will be a new kind of resource that can better our lives in ways that help us evolve too, into a more intelligent and more creative species.

Dr. Zacharias Voulgaris was born in Athens, Greece. He studied Production Engineering and Management at the Technical University of Crete, shifted to Computer Science through a Masters in Information Systems & Technology (City University of London), and then to Data Science through a PhD on Machine Learning (University of London). He has worked at Georgia Tech as a Research Fellow, at an e-marketing startup in Cyprus as an SEO manager, and as a Data Scientist at both Elavon (GA) and G2 Web Services (WA). He was also a Program Manager at Microsoft, working on a data analytics pipeline for Bing. Currently he is the CTO of Data Science Partnership, in London, UK.

Zacharias has authored three other books on data science: Data Scientist — The Definitive Guide to Becoming a Data Scientist, Julia for Data Science, and Data Science Mindset, Methodologies, and Misconceptions, and has co-authored a book on A.I. for data science (currently in production) with Yunus E. Bulut. He also mentors aspiring data scientists through Thinkful and maintains a data science / A.I. blog.
