The role of Artificial Intelligence in future technology

Adel Fatkhutdinov
Nov 4

What is intelligence in terms of an electronic machine? How can that intelligence be measured? Almost seventy years ago, Alan Turing published his work (Turing, 1950), fundamental for the entire field of computer science, discussing exactly these questions. He laid the foundation for artificial intelligence, even though the term did not yet exist. The field of artificial intelligence (AI for short) was established at the Dartmouth Conference in 1956 (McClymont, n.d.). It was described by Professor John McCarthy, who coined the term, as ‘the science and engineering of making intelligent machines, especially intelligent computer programs’, with ‘intelligence’ defined as ‘the computational part of the ability to achieve goals in the world’ (McCarthy, 2007). Nowadays, the definition has been slightly complicated by the phenomenon of the AI effect (McCorduck, 2004, p. 204) — as soon as something is done by machines, it is no longer considered AI. In fact, as Russell and Norvig (2003) note, even things like time-sharing, graphical user interfaces and the computer mouse, interactive interpreters, the linked list data structure and many others originally arose in AI laboratories. Today, these technologies can barely be called artificial intelligence.

I would like to look at some crucial milestones of the field before discussing its future. Games where, at first sight, only humans can succeed have always been a good way of demonstrating AI progress. The first breakthrough happened when IBM’s Deep Blue, a chess-playing computer, defeated the reigning world chess champion Garry Kasparov (McCorduck, 2004, pp. 480–483). This victory showed that computers were entering activities in which humans had previously been unchallenged. However, Deep Blue used ‘brute force’ methods, considering every possible option at an extremely high rate, and won the game because of its computational capabilities rather than any ability to understand the gameplay itself. In 2011, artificial intelligence showed that it could beat humans not only at games where every possible move can be described mathematically, but even at ones involving cognitive skills and creative thinking. Again an IBM computer, now called Watson, beat two legendary players of the Jeopardy! quiz show by a wide margin, claiming a $1 million prize (Markoff, 2011). With improved algorithms, big data and data-hungry deep learning approaches, a digital computer reached a major milestone in natural language processing and cognitive computing as a whole. The last, but probably most important, AI breakthrough came onto the scene in 2016. The AlphaGo computer by DeepMind (a startup acquired by Google) beat the world Go champion Lee Sedol in a five-game match. This had been publicly considered impossible due to the game’s complexity: Go has ~250^150 possible sequences of moves, while chess has only ~35^80 (Silver et al., 2016). That is why brute forcing with exhaustive search trees is infeasible in the game of Go; the machine needed to understand the gameplay itself and to make intelligent decisions — at times more intelligent than its human opponent’s.
With the help of artificial neural networks and deep learning technologies, a machine succeeded even in this task. Chess, Jeopardy!, Go — the best performer is now artificial intelligence. What will be next?
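To see why exhaustive search fails in Go, it helps to put the two game-tree sizes cited above (Silver et al., 2016) on the same scale. A minimal back-of-the-envelope calculation in Python:

```python
import math

# Approximate game-tree sizes from Silver et al. (2016):
# chess ~ 35^80 legal move sequences, Go ~ 250^150.
chess_exp = 80 * math.log10(35)    # exponent of 10 for chess
go_exp = 150 * math.log10(250)     # exponent of 10 for Go

print(f"chess ~ 10^{chess_exp:.0f} sequences")  # ~10^124
print(f"go    ~ 10^{go_exp:.0f} sequences")     # ~10^360
```

Even if a computer evaluated a trillion positions per second, enumerating either tree is hopeless; Go’s tree is roughly 10^236 times larger still, which is why AlphaGo had to rely on learned evaluation rather than brute force.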

Apart from playing games, these algorithms are becoming the basis for more serious work. The same techniques behind the AlphaGo computer can help devise recipes for the complex chemical compounds required to tackle diseases (Segler, Preuss & Waller, 2018). Watson has found many real-life applications, such as supporting utilization management decisions in lung cancer treatment (Upbin, 2013) or acting as a teaching assistant that provides natural-language, one-on-one tutoring to students on their reading material (Penty, 2016). But the more important part of these achievements is that they are dramatically advancing the technologies and algorithms behind artificial intelligence. Because of machine learning and deep learning approaches, we now have virtual assistants like Siri and Alexa, can drive our cars with less effort and see relevant recommendations on Netflix. With these technologies improving and computational power increasing (Moore’s Law), we can make reasonably optimistic predictions about our future. Self-driving cars, today primarily of the “hands-off” type (Level 2), will reach the “steering wheel optional” level (Level 5) (“J3016B: Taxonomy and…”, 2018), leading us to a world with significantly fewer car accidents and traffic jams. When cognitive and social computing reach the human level, we will have conversational assistants all around us; graphical user interfaces within software will be replaced by conversational user interfaces, and computers like Watson will become our beloved psychologists. By analyzing the enormous data from patient histories as well as all possible cases of disease manifestation, algorithms will be able to prevent many diseases, including cancer and AIDS. With the same approaches, we could prevent environmental disasters and cataclysms. Artificial intelligence could help humanity reduce its impact on the environment and avoid the terrible consequences of climate change and global warming.
Researchers at DeepMind have already shown good progress with algorithms for reducing energy consumption (O’Donnell, 2017). These are just a few possible predictions for the role of AI in future technology, and the full list could certainly be much longer.

However, when discussing artificial intelligence, we cannot guarantee that the same approaches will not be put to malicious use. For instance, projects like Face2face (Thies et al., 2016) and Lyrebird (Vincent, 2017) could revolutionize the film industry, but at the same time they could be used for fake news production and political manipulation. The same algorithms, the same technology — but very different goals and possible consequences. To avoid this, we urgently need to establish ground rules and ethics for this field, so that AI benefits humanity rather than leading it to extinction.

To sum up, the role of artificial intelligence in future technology is very promising. With appropriate safeguards, rules and limitations, we can be optimistic about the advantages the world of AI can bring us. Fortunately for us, projects like “OpenAI” and “AI for Good”, and even the European Union (“Draft Ethics guidelines…”, 2018), are drafting the ground rules for a bright future together with artificial intelligence. We should always remember that letting it slip out of control could lead to irreversible consequences.

References

Draft Ethics guidelines for trustworthy AI — Digital Single Market — European Commission. (2018). Retrieved from https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai

J3016B: Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles — SAE International. (2018). Retrieved from https://www.sae.org/standards/content/j3016_201806/

Markoff, J. (2011). Computer Wins on ‘Jeopardy!’: Trivial, It’s Not. Retrieved from https://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html

McCarthy, J. (2007). What is Artificial Intelligence? [Ebook] (p. 2). Stanford: Computer Science Department, Stanford University. Retrieved from http://www-formal.stanford.edu/jmc/whatisai.pdf

McClymont, K. (n.d.). AISB — The Society for the Study of Artificial Intelligence and Simulation of Behaviour — What is AI?. Retrieved from https://www.aisb.org.uk/public-engagement/what-is-ai

McCorduck, P. (2004). Machines who think (2nd ed.). Natick, Mass.: A.K. Peters.

O’Donnell, J. (2017). Artificial Intelligence Climate Change — Improving Energy Consumption. Retrieved from http://www.aroundtheworldineightyyears.com/artificial-intelligence-climate-change/

Penty, R. (2016). Pearson Taps IBM’s Watson as a Virtual Tutor for College Students — Bloomberg. Retrieved from https://www.bloomberg.com/news/articles/2016-10-25/e-learning-enters-bot-era-as-pearson-taps-ibm-s-watson-as-tutor

Russell, S., & Norvig, P. (2003). Artificial intelligence (2nd ed., p. 15). Upper Saddle River, N.J.: Prentice Hall.

Segler, M., Preuss, M., & Waller, M. (2018). Planning chemical syntheses with deep neural networks and symbolic AI. Nature, 555(7698), 604–610. doi: 10.1038/nature25978

Silver, D., Huang, A., Maddison, C., Guez, A., Sifre, L., & van den Driessche, G. et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484–489. doi: 10.1038/nature16961

Thies, J., Zollhofer, M., Stamminger, M., Theobalt, C., & Nießner, M. (2016). Face2face: Real-time face capture and reenactment of rgb videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2387–2395).

Turing, A. (1950). I. — Computing Machinery and Intelligence. Mind, LIX(236), 433–460. doi: 10.1093/mind/lix.236.433

Upbin, B. (2013). IBM’s Watson Gets Its First Piece Of Business In Healthcare. Retrieved from https://www.forbes.com/sites/bruceupbin/2013/02/08/ibms-watson-gets-its-first-piece-of-business-in-healthcare/

Vincent, J. (2017). Lyrebird claims it can recreate any voice using just one minute of sample audio. Retrieved from http://www.theverge.com/2017/4/24/15406882/ai-voice-synthesis-copy-human-speech-lyrebird
