13 AI systems that show how far artificial intelligence has come

Michael Gramlich
9 min read · Aug 9, 2020

This blog article is extracted and adapted from “The Ultimate Data and AI Guide”, which I co-authored with Alexander Thamm and Dr. Alexander Borek.

One of the questions I get most frequently is: "When will AI be here?" The answer is that it is already here. In fact, it has been for a long time. The thing is that, due to the hype around AI in the media, the public has developed a distorted perception of what AI actually is.

In this article I want to share 13 impressive (but sometimes also scary) AI systems that have already been built and that should help you demarcate fiction from reality. They show what AI systems are already capable of today and what is still out of our reach.

Note: This list is subjective and obviously far from complete. I have tried to include AI use cases from various verticals. If you know of AI systems that should be added to the list, I look forward to your thoughts in the comments section.

What is Artificial Intelligence?

But first let’s talk about what artificial intelligence (AI) actually is. Instead of drowning in an academic discussion about how AI can be defined, let’s keep it simple:

“AI is computer systems or machines that display intelligent, human-like behaviour and capabilities that allow them to perform tasks that would usually require human intelligence.”

Weak vs strong AI

Since the notion of intelligence is complex, it helps to place AI systems on a continuous scale from weak to strong. Weak (or narrow) AI refers to systems that can perform one narrow, specific task, e.g. classifying an email as spam; a more sophisticated example would be a chatbot that can understand and react to human language. These tasks might seem trivial to us, but they actually require a good deal of intelligence to perform. Such weak AI systems have been around for years (some even for decades). Hence the answer: AI is already here.
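To make the spam-classification example concrete, here is a minimal, purely illustrative word-counting classifier. The training sentences and the scoring rule are invented for this sketch; real spam filters use probabilistic models (such as naive Bayes) trained on large corpora.

```python
from collections import Counter

# Toy word-count "spam filter" (hypothetical training sentences, invented scoring).
SPAM = ["win a free prize now", "free money click now"]
HAM = ["meeting at noon tomorrow", "project update attached"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

SPAM_COUNTS = word_counts(SPAM)
HAM_COUNTS = word_counts(HAM)

def is_spam(text):
    # Positive score: words seen more often in spam than in ham training text.
    score = sum(SPAM_COUNTS[w] - HAM_COUNTS[w] for w in text.split())
    return score > 0
```

Even this crude counting scheme captures the core idea of narrow AI: a system that does one specific task by learning statistical patterns from examples.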

The thing is that when people hear AI, what they typically think of are futuristic, human-like androids like the Terminator, R2-D2 or Sonny from "I, Robot". That's somewhat of a misconception. It's a bit like asking someone in 1920 how they imagine a car and getting a description of the newest Mercedes-Benz S-Class from 2020 rather than of what cars looked like in 1920. What people think of when hearing AI is what we call strong (or general) AI: systems that are able to perform an entire spectrum of tasks that require human intelligence, including visual perception, verbal communication, emotional interpretation and empathy.

Weak (narrow) vs strong (general) AI

What is currently possible with AI? These 13 AI systems will give you an idea

So if AI systems should be placed on a strength scale rather than conceived of as one futuristic thing, where are we at? On the scale from a very weak AI system (e.g. email classification) to a strong AI system (the Terminator), what are AI systems currently capable of?

The following 13 AI systems should give you a sense of what is currently (not) possible.

1. Detecting mental health from Instagram images

Field: Medical/psychological research

Use Case: Use images from people's Instagram accounts to detect whether they are (inclined to be) depressed. Relevant features for the ML model included, among others, the colours in photos (grey, blue and darker images were more likely to be posted by depressed people) and the number of people in a picture (healthy people tended to have more people in their pictures). The trained model was able to identify patients who had depression 70% of the time; a review of studies has shown that the corresponding rate for doctors is 42%.[i]

Data used: 43,950 Instagram pictures of 166 individuals.
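To give a flavour of what colour features like these might look like, here is a hedged sketch that computes simple colour statistics from raw pixels. The feature names and the "blueness" score are my own illustration, not the study's actual feature set.

```python
# Illustrative colour statistics from raw (r, g, b) pixels, values 0-255 each.
# "blueness" is a toy stand-in for the bluer/darker tendency the study found.
def colour_features(pixels):
    n = len(pixels)
    mean_r = sum(p[0] for p in pixels) / n
    mean_g = sum(p[1] for p in pixels) / n
    mean_b = sum(p[2] for p in pixels) / n
    return {
        "brightness": (mean_r + mean_g + mean_b) / 3,  # darker images -> lower value
        "blueness": mean_b - (mean_r + mean_g) / 2,    # bluer images -> higher value
    }
```

Features like these would then be fed, together with labels, into a standard classifier; the intelligence is in the learned mapping from features to diagnosis, not in the feature extraction itself.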

2. Composing music

Field: Creativity/art/music

Use Case: The Luxembourg-based start-up AIVA has created a neural network able to compose entire rock songs or symphonies by itself. AIVA trained the model on the works of Bach, Beethoven and other composers (in the machine-readable MIDI format) so that it learns common patterns and predicts the next tone or chord in a piece. The symphonies and songs created by the algorithm are of such quality that you would not be able to tell they were composed by a machine.[ii]

Data used: Works and songs from a number of composers and artists such as Mozart, Bach and Beethoven stored in MIDI format.
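The "predict the next tone" idea can be illustrated with a deliberately simple bigram model over note names. The toy melodies and the counting approach below are only a sketch of the principle; AIVA uses deep neural networks trained on large MIDI corpora.

```python
from collections import Counter, defaultdict

# Toy "predict the next note" model: count which note tends to follow which.
def train_bigrams(sequences):
    model = defaultdict(Counter)
    for seq in sequences:
        for current, following in zip(seq, seq[1:]):
            model[current][following] += 1
    return model

def predict_next(model, note):
    # Return the most frequently observed follower, or None for unseen notes.
    followers = model.get(note)
    return followers.most_common(1)[0][0] if followers else None

# Invented training melodies (real systems learn from thousands of MIDI files).
melodies = [["C", "E", "G", "C"], ["C", "E", "G", "E"]]
model = train_bigrams(melodies)
```

Sampling repeatedly from such a next-token model generates new sequences in the style of the training data, which is the same principle a neural composer scales up.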

3. Recognizing chimpanzees’ faces

Field: Wildlife research

Use Case: Scientists from the University of Oxford have created a deep neural network that is able to recognize and track the faces of chimpanzees in the wild. This can significantly decrease the time wildlife researchers spend analysing video footage.[iii]

Data used: Ten million images of wild chimpanzees.

4. Diagnosing breast cancer

Field: Medical research

Use Case: Some breast biopsy images are extremely difficult to interpret when checking for possible cancer (doctors sometimes even disagree with their own previous diagnosis when shown the same image again).

Researchers from the University of California, Los Angeles, have developed an algorithm to analyse and detect breast cancer from biopsies that promises to be more accurate than doctors’ diagnoses.[iv]

Data used: 240 breast biopsy images.

5. Detecting banana diseases and pests

Field: Agriculture

Use Case: The app Tumaini is able to use a picture of a banana to detect banana diseases and pests. The app was designed to be used by farmers to detect potential threats to their banana plantations.[v]

Data used: 20,000 images of bananas with visible signs of disease or pests.

6. Detecting PTSD based on voice recordings

Field: Medical/psychological research

Use Case: Researchers from the NYU School of Medicine have created an algorithm that is able to detect whether a person is suffering from post-traumatic stress disorder (PTSD) based on voice recordings of them speaking. It is 89% accurate in distinguishing between those with PTSD and those without.[vi]

Data used: Hour-long interviews with 131 Iraq and Afghanistan War veterans (53 suffered from PTSD, 78 did not).
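As an illustration of what a single voice feature might look like, here is a toy function that measures the fraction of near-silent samples in a recording, a crude proxy for pauses and hesitation. This specific feature and its threshold are my invention; the actual study extracted a large set of speech features with specialised audio tooling.

```python
# Toy voice feature: fraction of near-silent samples in a normalised waveform.
# Feature definition and threshold are invented for illustration only.
def silence_fraction(samples, threshold=0.05):
    # samples: amplitude values in [-1.0, 1.0]
    quiet = sum(1 for s in samples if abs(s) < threshold)
    return quiet / len(samples)
```

A real system computes hundreds or thousands of such features per recording and lets a classifier learn which combinations distinguish PTSD from non-PTSD speech.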

7. Diagnosing Alzheimer’s disease earlier

Field: Medical/psychological research

Use Case: A research group has created an algorithm that can diagnose Alzheimer’s disease earlier than was previously possible. The algorithm detects subtle changes in a patient’s metabolism that can be difficult for humans to recognize; these changes are early signs of Alzheimer’s disease.[vii]

Data used: 2,100 tomography brain images from 1,002 patients.

8. Drawing Flintstones images based on scripts and descriptions

Field: Art

Use Case: A group of researchers has created an algorithm that is able to create images of The Flintstones based on scripts and descriptions. So, for example, it is able to paint a Flintstones scene from the description “Fred, wearing a red hat, is walking in the living room”.[viii]

Data used: 25,000 videos of The Flintstones with descriptions.

9. Determining sexual orientation from an image of your face

Field: Medical/psychological research

Use Case: Researchers from Stanford University have created an algorithm that is more accurate than humans at detecting a person’s sexual orientation from images of their face. Based on one facial picture, it was able to tell whether a person was gay or heterosexual in 81% of the cases for men and 74% for women. This compares to much lower human accuracy of 61% for men and 54% for women.[ix]

Data used: 35,326 facial images from online dating websites.

10. Recognizing the sentiment in a sentence

Field: Natural language processing

Use Case: Deep learning specialists from OpenAI have created an algorithm that is able to detect the sentiment of a word, phrase or even sentence.[x] So, for example, it would classify the sentence “This development is amazing” as having a positive sentiment.

Data used: 82 million Amazon reviews.
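A lexicon-based scorer illustrates the sentiment task in its simplest form. The word lists below are invented for this sketch; OpenAI's system instead learned sentiment as a by-product of training a character-level language model on reviews.

```python
# Minimal lexicon-based sentiment scorer (word lists invented for this sketch).
POSITIVE = {"amazing", "great", "good", "excellent", "love"}
NEGATIVE = {"terrible", "bad", "awful", "poor", "hate"}

def sentiment(sentence):
    words = sentence.lower().replace(".", "").replace("!", "").split()
    # Count positive hits minus negative hits across the sentence.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

The learned approach outperforms such hand-written lexicons because it picks up negation, context and domain-specific wording from the data itself.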

11. Reading thoughts

Field: Medical/psychological research

Use Case: Researchers from Columbia University have created an algorithm that is able to “read thoughts”. By examining brain signals measured with electrodes placed directly onto patients’ brains, they were able to create an algorithm that translates these signals into words. However, this experiment is still in its infancy and only single-digit numbers could be deciphered.[xi]

Data used: Five patients undergoing open brain surgery.

12. Beating humans in (computer) games

Field: (Computer) games

Use Case: ML algorithms have been trained to play various games and have been beating humans in more and more complex games. Today, algorithms play at a super-human level in the following board/video games:

- Go (board game)[xii]

- Quake III Arena [xiii]

- StarCraft II[xiv]

- Dota 2[xv]

It is likely that we will see many more AI-powered bots perform at super-human level in other games in the very near future.

Data used: Self-play games; OpenAI’s Dota bots, for example, played the equivalent of 180 years of games per day against themselves.[xv]
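Game-playing agents like these are typically trained with reinforcement learning. The toy example below shows the principle with tabular Q-learning on a five-cell corridor; the environment and all hyperparameters are invented for illustration, and systems like AlphaGo combine deep networks, tree search and self-play at vastly larger scale.

```python
import random

random.seed(0)

# Tabular Q-learning on a toy 5-cell corridor: start in cell 0, reward in cell 4.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS, EPISODES = 0.5, 0.9, 0.3, 500
ACTIONS = (-1, 1)  # step left or right (clamped at the corridor ends)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0)

for _ in range(EPISODES):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, reward = step(s, a)
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        # Standard Q-learning update toward reward plus discounted future value.
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = nxt

# Greedy policy after training: the agent walks right toward the reward.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

The same learn-from-reward loop, scaled up with neural networks as function approximators, is what lets agents reach super-human play in Go, StarCraft II and Dota 2.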

13. Showing robots how to perform a task

Field: Robotics

Use Case: Researchers from NVIDIA have developed a deep-learning-based system that enables a robot to complete a task simply by observing a human doing it. This could ultimately be used to teach robots to perform a task alongside humans, e.g. in a production hall.[xvi]

Data used: Video demonstrations of a human performing the task.

Machine learning as the engine to create such AI systems

What all of the use cases above have in common is that they were built with machine learning methods. If you want to create an AI system with machine learning, you need data, ideally lots of it. The more (high-quality) data we have, the more and better AI systems we can build. And guess what: the amount of available data is increasing exponentially. So we can expect to move to the right on the weak-vs-strong scale faster than you might think.

How is machine learning used to create AI systems? Why is the growth of available data following an exponential path? And most importantly: when will general AI be here? These are all questions for another blog article. If you want answers right now, check out “The Ultimate Data and AI Guide”, where these questions are all answered.

Do you have other cool machine learning use cases, data products or AI systems that should be added to this list? I look forward to your thoughts in the comment section below! :)

Also check out my website and other blog articles here https://www.michael-gramlich.com

[i]Reece, A. G., & Danforth, C. M. (2017). Instagram photos reveal predictive markers of depression. EPJ Data Science, 6(1), 15.

[ii] AIVA. (n.d.). Retrieved 2020–01–01 from https://www.aiva.ai/

[iii] Schofield, D., Nagrani, A., Zisserman, A., Hayashi, M., Matsuzawa, T., Biro, D., & Carvalho, S. (2019). Chimpanzee face recognition from videos in the wild using deep learning. Science Advances, 5(9), eaaw0736.

[iv] Mercan, E., Mehta, S., Bartlett, J., Shapiro, L. G., Weaver, D. L., & Elmore, J. G. (2019). Assessment of Machine Learning of Breast Pathology Structures for Automated Differentiation of Breast Cancer and High-Risk Proliferative Lesions. JAMA network open, 2(8), e198777-e198777.

[v] Selvaraj, M. G., Vergara, A., Ruiz, H., Safari, N., Elayabalan, S., Ocimati, W., & Blomme, G. (2019). AI-powered banana diseases and pest detection. Plant Methods, 15(1), 92

[vi] NYU Langone Health / NYU School of Medicine. (2019, April 22). Artificial intelligence can diagnose PTSD by analyzing voices: Study tests potential telemedicine approach. Science Daily. Retrieved 2019–12–15 from www.sciencedaily.com/releases/2019/04/190422082232.htm

[vii] Radiological Society of North America. (2018, November 6). Artificial intelligence predicts Alzheimer’s years before diagnosis. Science Daily. Retrieved 2019–12–15 from www.sciencedaily.com/releases/2018/11/181106104249.htm

[viii] Gupta, T., Schwenk, D., Farhadi, A., Hoiem, D., & Kembhavi, A. (2018). Imagine this! scripts to compositions to videos. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 598–613).

[ix] Wang, Y., & Kosinski, M. (2018). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of personality and social psychology, 114(2), 246.

[x] Radford, A., Jozefowicz, R., & Sutskever, I. (2017). Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444.

[xi] Quach, K. (2019, January 30). Say what?! An AI system can decode brain signals into speech. The Register. https://www.theregister.co.uk/2019/01/30/ai_brain_reader/

[xii] Byford, S. (2017, May 25). AlphaGo beats Ke Jie again to wrap up three-part match. The Verge. Retrieved 2020–01–01 from https://www.theverge.com/2017/5/25/15689462/alphago-ke-jie-game-2-result-google-deepmind-china

[xiii] Timmer, J. (2019, May 30). Quake III Arena is the latest game to see AI top humans. ARS Technica. Retrieved 2020–01–01 from https://arstechnica.com/science/2019/05/googles-ai-group-moves-on-from-go-tackles-quake-iii-arena/

[xiv] AlphaStar team. (2019, January 24). AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. Retrieved 2020–01–01 from https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii

[xv] Vincent, J. (2018, June 25). AI bots trained for 180 years a day to beat humans at Dota 2. The Verge. Retrieved 2020–01–01 from https://www.theverge.com/2018/6/25/17492918/openai-dota-2-bot-ai-five-5v5-matches

[xvi] N.a. (2018, May 20). New AI Technique Helps Robots Work Alongside Humans. Retrieved 2020–01–01 from https://news.developer.nvidia.com/new-ai-technique-helps-robots-work-alongside-humans/
