Header image: Brain by Gordon Johnson

Looking back at eXplainable Artificial Intelligence

Hands-on collection and analysis of Twitter and Google Trends data with twint and Python

Adrianna Janik
Jan 14, 2020 · 5 min read


Among the first photos of a black hole, gene editing, quantum computers outperforming classical machines, and the ongoing debate on climate change, the past year also brought well-deserved recognition to deep learning, the technology behind so many digital solutions that it is hard to keep track. The ACM Turing Award for 2018, also known as the Nobel Prize of Computing, was awarded to Yann LeCun, Yoshua Bengio, and Geoffrey Hinton for breakthroughs that made deep neural networks a critical component of computing. Machine learning is inescapable and augments our everyday lives, from machine translation to a plethora of smart assistants. Predictive keyboards in our mobile phones, for example, suggest words based on our writing style.

Technology affects the everyday lives of many people, but are we ready? Our readiness can be gauged by the delayed reaction of legislative bodies, which have only just started to introduce laws regulating the use of AI. In response, over the last few years two relevant research areas have emerged: eXplainable Artificial Intelligence (XAI) and Fairness, Accountability, and Transparency in Machine Learning (FATML). The focus of XAI is to provide an understanding of trained models, often referred to as black boxes, and to assess their trustworthiness.

A crucial part of XAI is visualization: the user of a black-box model is given one or more visual representations to better understand how a certain task is solved. There is ongoing research into methods for explaining deep neural networks. Some works rely on feature visualization; others include the building blocks of interpretability, exploring neural networks with activation atlases, LIME, TCAV, and more. The list does not end here, as new methods are published and old ones rediscovered. A useful overview of common XAI techniques can be found in Christoph Molnar's book Interpretable Machine Learning.

Channel attribution can reveal which channels contributed to the final output classification, and to what extent. (The Building Blocks of Interpretability)
Activation atlas of Inception V1, layer Mixed5B. An atlas of features shows how the network typically represents certain concepts.

The topic of XAI is hot, and demand keeps growing as more methods are published. Yet is there a way to get a global view of this trend? Let's look at some clues together and try to answer a few relevant questions: How much interest is there in AI? How much in XAI? Was this interest always present?

To answer those questions, I collected data from two sources:

  • Google Trends for topics: Explainable Artificial Intelligence, Artificial Intelligence, Deep Learning, and Machine Learning
  • Twitter for hashtags: #AI and #XAI.

These should show us how popular the subjects of AI and XAI are, under the assumption that the chosen keywords and hashtags, as well as the platforms, are representative of the subject and of Internet users.
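The full collection pipeline is linked at the end of this article; for a flavour of the Twitter side, here is a minimal twint sketch. The output file name and the exact date range are my own illustrative choices, not necessarily those used for the charts.

```python
import twint

# Configure a scrape of the #xai hashtag over the study period
c = twint.Config()
c.Search = "#xai"
c.Since = "2015-01-01"
c.Until = "2019-12-31"
c.Hide_output = True         # don't echo every tweet to the console
c.Store_csv = True           # persist results for later analysis
c.Output = "xai_tweets.csv"  # hypothetical output file name

twint.run.Search(c)
```

Re-running with c.Search = "#ai" collects the #AI side. Note that twint scrapes Twitter's web interface rather than using the official API, so no credentials are needed, but long historical scrapes can be slow and occasionally flaky.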

Of the 4.48 billion Internet users worldwide, roughly 330 million use Twitter, about 7%. Despite being a niche, it is a popular medium for breaking news. Google Search, on the other hand, accounts for 76% of desktop and 86% of mobile search traffic globally. Twitter data alone would not be representative of the whole Internet community, but that is exactly why it is worth examining: it is specific to a micro-blogging platform, it limits messages to 280 characters, and hashtags make tweets easy to classify. Google Trends data is presented for context.

During just the last three weeks of 2019, there were more than 7,000 tweets per day with the hashtag #AI, which means that

Every 12 seconds, someone tweets about #AI. Exaggerating a little, we can say that every 12 seconds there is breaking news involving AI.
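Turning a daily volume into a "one tweet every N seconds" figure is a small computation. Assuming the tweets were exported by twint to a CSV with a date column (the file name here is hypothetical), a sketch with pandas:

```python
import pandas as pd

# Load tweets collected with twint; its CSV export includes a "date" column
tweets = pd.read_csv("ai_tweets.csv", parse_dates=["date"])

# Average daily volume over the last three weeks of 2019
daily = (tweets.set_index("date")
               .sort_index()
               .loc["2019-12-11":"2019-12-31"]
               .resample("D")
               .size())

seconds_per_tweet = 24 * 60 * 60 / daily.mean()
print(f"One #AI tweet every {seconds_per_tweet:.0f} seconds")  # ~12 s at ~7,000/day
```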

This is all great, but who can keep up? Who can understand the impact of AI and mitigate the risks?

We can see how the field of artificial intelligence became more and more popular by looking at the rising interest across the years 2015–2019, starting roughly with the publication of the Deep Learning article in Nature in spring 2015, authored by the three Turing Award recipients. In the chart below we can see similarly rising interest in artificial intelligence (AI), machine learning (ML, a subcategory of AI), and deep learning (DL, a subcategory of ML).

Explainable Artificial Intelligence is getting more popular both in Google Trends and on Twitter, as the red trend lines show in the charts below, where data downloaded from Google Trends is compared with data collected from Twitter using twint.
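The Google Trends series can be exported as CSV from the website; the unofficial pytrends package is one way to script that step instead. Here is a rough sketch of fetching the series and fitting the red trend line, read here as a simple least-squares line, which is an assumption on my part:

```python
import numpy as np
import matplotlib.pyplot as plt
from pytrends.request import TrendReq

# Fetch interest over time for the search term (pytrends is an unofficial client)
pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["Explainable Artificial Intelligence"],
                       timeframe="2015-01-01 2019-12-31")
interest = pytrends.interest_over_time()["Explainable Artificial Intelligence"]

# Fit a straight line through the series: the "trend line" in the charts
x = np.arange(len(interest))
slope, intercept = np.polyfit(x, interest.values, 1)

plt.plot(interest.index, interest.values, label="Google Trends interest")
plt.plot(interest.index, intercept + slope * x, "r--", label="linear trend")
plt.legend()
plt.show()
```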

Google Trends for Explainable Artificial Intelligence, annotated with the publication date of the Deep Learning article in Nature by LeCun, Bengio & Hinton and the introduction and implementation dates of the General Data Protection Regulation (GDPR).
Tweets with the hashtag #xai, annotated with the publication date of the Deep Learning article in Nature by LeCun, Bengio & Hinton and the introduction and implementation dates of the General Data Protection Regulation (GDPR).

Every 20 minutes, someone tweets about #XAI. So for every tweet about #XAI there are roughly 100 tweets about #AI.

XAI is getting more and more popular, and we can clearly see this in the increasing number of tweets and Google searches.

Back in 2017, before the implementation of GDPR, Google's research director Peter Norvig questioned the value of eXplainable AI, recalling how poorly humans perform at explaining their own decisions. Although there may be other ways of checking an algorithm's correctness, such as robustness against adversarial attacks or detection of bias, eXplainable AI keeps attracting enthusiasts and growing in popularity. If the trend continues, next year we could observe a further rise of interest in XAI, with average interest per month climbing from the current 47 points to 55 or more.
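That projection is simply the fitted trend line continued twelve months ahead. A self-contained sketch with made-up monthly scores (illustrative numbers only; substitute the real Google Trends series):

```python
import numpy as np

# Hypothetical monthly interest scores averaging about 47, as in the article
monthly = np.array([43, 44, 44, 45, 46, 46, 47, 48, 48, 49, 50, 51])

# Fit a line and read it off one year past the last observation
x = np.arange(len(monthly))
slope, intercept = np.polyfit(x, monthly, 1)
projected = intercept + slope * (len(monthly) - 1 + 12)
print(f"average now: {monthly.mean():.0f}, projected next year: {projected:.0f}")
```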

In summary: every 12 seconds someone tweets about #AI, and every 20 minutes someone tweets about eXplainable AI. Google Trends for the topic of eXplainable AI and tweet counts show similarly rising trends over the last 5 years. The source code and the data used to create these charts are available here. Go check it out for yourself, and maybe let's visualize those trends again next year!
