We’ve analysed 6,500 images that appeared on our homepage and here is what we’ve learned

AJ Labs
Jan 7, 2018 · 7 min read


A collage of all the outside images our journalists used in news articles in 2017.

Choosing the right image to tell your story is just as important as writing a good news headline. As 2017 came to a close, we decided to collect all the images we had published on our homepage throughout the year and ask ourselves: what types of images did our readers see when they came to our website?

Asking the right questions

To analyse both the contents and context of each image we used Google’s Vision API. This machine-learning service draws on Google’s massive database of images to detect faces, landmarks, and everyday objects within a photo. You upload an image to the service and it returns the image’s characteristics in the form of weighted scores.

Uploading an image of Trump, for instance, returns a ranked list of labels, web entities and face attributes.
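To give a sense of what comes back, here is a trimmed sketch of the shape of such a response. The field names follow the Vision API’s JSON format, but the entities and scores below are invented for illustration, not real output:

```python
# Illustrative shape of a Vision API annotation response.
# Field names follow the API's JSON representation; the entities
# and scores are made up for this example.
sample_response = {
    "webDetection": {
        "webEntities": [
            {"description": "Donald Trump", "score": 1.25},
            {"description": "President of the United States", "score": 0.71},
        ]
    },
    "labelAnnotations": [
        {"description": "official", "score": 0.92},
        {"description": "spokesperson", "score": 0.78},
    ],
    "faceAnnotations": [
        {"joyLikelihood": "VERY_LIKELY", "angerLikelihood": "VERY_UNLIKELY"}
    ],
}

# Each characteristic carries a weighted score, so the top web entity
# is usually the best guess for who or what is in the image.
top_entity = max(
    sample_response["webDetection"]["webEntities"], key=lambda e: e["score"]
)
print(top_entity["description"])  # -> Donald Trump
```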

To capitalise on this technology, two main questions guided us:

1. What would we want to learn from our 6,500 images?

2. How could visual machine-learning techniques such as this one be used in the newsroom?

We started our exploration by asking ourselves the following questions:

  1. Which president or public figure appeared the most in our images in 2017?
  2. How many times did we use people’s faces and who were they?
  3. How many photos were of women and how many of men?
  4. How many times did we use photos of protesters?
  5. How often did we reuse the same photograph for another story?
  6. How many times did we use maps as our main image?
  7. What everyday items appeared most in our images?

Of course, we weren’t sure how accurate or granular Google’s Vision API would be on our dataset, so we started with a small sample of images and worked our way up until we were querying and intersecting more than 25,000 records of data.

Technicalities

We used Python for scripting and querying the data and MySQL for storing and sorting the data.

It took around 8 hours to run the script and another 4 hours to perform the SQL queries and analysis.
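The core of that pipeline can be sketched in a few lines: build an `images:annotate` request body for each image, then flatten the response into rows ready for a MySQL insert. The function names and the four-column row shape below are our sketch for this post, not the exact script we ran:

```python
def build_annotate_request(image_uri, max_results=10):
    """Build the JSON body for a Vision API images:annotate call."""
    return {
        "requests": [{
            "image": {"source": {"imageUri": image_uri}},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": max_results},
                {"type": "WEB_DETECTION", "maxResults": max_results},
                {"type": "FACE_DETECTION", "maxResults": max_results},
            ],
        }]
    }


def response_to_rows(image_uri, annotation):
    """Flatten one annotation response into (image, kind, label, score)
    tuples, ready for an executemany() INSERT into a MySQL table."""
    rows = []
    for label in annotation.get("labelAnnotations", []):
        rows.append((image_uri, "label", label["description"], label["score"]))
    for entity in annotation.get("webDetection", {}).get("webEntities", []):
        rows.append(
            (image_uri, "web_entity", entity.get("description", ""), entity["score"])
        )
    return rows
```

In the real pipeline each payload is POSTed to the Vision endpoint with an API key, and the flattened rows are bulk-inserted into MySQL for the later SQL analysis.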

Preliminary findings

While Google’s Vision API is regarded as one of the most advanced image-detection platforms, it too has its shortcomings. As is to be expected, it doesn’t always correctly identify the objects within the frame. In some cases the margin of error is quite acceptable; in others it totally misses the mark.

Knowing this, here are some factors worth considering when using Google’s Vision API:

  • The most useful property for analysing news images was definitely the “Web entities” feature. It returns a weighted keyword list as well as contextual links to stories containing the image, and was often very accurate at detecting well-known people.
  • In cases where people were less known, combining the “Web entities” and “Label entities” yielded better results.
  • Photos with groups of people didn’t perform very well. In several instances, a large group of refugees in boats wearing life jackets was labelled “fun” with a high level of certainty.
  • Sometimes important elements in a photo were neglected. For example, a photo of fighters on top of a pickup truck in the desert only returned “vehicle” as a keyword.
  • Hand-drawn images or illustrations performed very poorly.
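The second point above, combining “Web entities” with “Label entities”, can be as simple as merging the two keyword lists and re-ranking by score. A rough sketch of that idea (the function name and score threshold are ours, not part of the API):

```python
def combined_keywords(annotation, min_score=0.5):
    """Merge web entities and label annotations into a single
    keyword list, ranked by the highest score seen for each keyword."""
    scored = {}
    entities = annotation.get("webDetection", {}).get("webEntities", [])
    labels = annotation.get("labelAnnotations", [])
    for item in entities + labels:
        desc = item.get("description")
        if desc and item["score"] >= min_score:
            # Keep the best score when a keyword appears in both lists.
            scored[desc] = max(scored.get(desc, 0.0), item["score"])
    return sorted(scored, key=scored.get, reverse=True)


annotation = {
    "webDetection": {"webEntities": [{"description": "Protest", "score": 0.9}]},
    "labelAnnotations": [
        {"description": "crowd", "score": 0.8},
        {"description": "Protest", "score": 0.6},
        {"description": "blur", "score": 0.3},  # below threshold, dropped
    ],
}
print(combined_keywords(annotation))  # -> ['Protest', 'crowd']
```

Low-scoring noise like “blur” falls away, while a keyword confirmed by both feature types keeps its strongest score.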

Answers to our questions

  1. Which president or public figure appeared the most in our images in 2017? Trump, followed by Erdogan and Tillerson. We further drilled down to find the emotions on Trump’s face to be 20% joy, 0.6% anger, 3% sorrow, and 2% surprise.
  2. How many times did we use people’s faces? 3,726 times.
  3. Did we use more photos of men or of women? Unfortunately, we weren’t able to answer this.
  4. How many times did we use photos of protesters? 414 times.
  5. How often did we reuse the same photograph for another story? We reused 1,703 images for other stories during the year.
  6. How many times did we use maps as our main image? 143 times.
  7. What everyday items appeared most in our images?
+----------------------------------------+----------------+
| Label | Times appeared |
+----------------------------------------+----------------+
| Vehicles | 609 |
| Military, soldiers or troops | 434 |
| Protests | 414 |
| People speaking in public | 290 |
| Maps | 143 |
| Police | 99 |
| Natural disasters | 94 |
| Aircrafts | 93 |
| Plane or aircraft | 87 |
| Earthquakes | 83 |
| Children | 67 |
| News conferences | 56 |
| Imams/muftis | 54 |
| Tourist attractions | 39 |
| Missiles | 31 |
| Explosions | 28 |
| Riots | 22 |
| Monochrome photography | 20 |
| Football players | 18 |
| Wildfires | 15 |
| Musicians | 8 |
| Mosques | 4 |
| Church | 1 |
| Monk | 1 |
+----------------------------------------+----------------+
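A note on the emotion figures for Trump above: the Vision API doesn’t return percentages. Each detected face carries likelihood enums such as `VERY_LIKELY`, so a figure like “20% joy” has to be computed as a share across all faces. One way to do that (the helper below is our sketch, not the exact script we ran):

```python
# Likelihood values the Vision API can return for each emotion field.
POSITIVE = {"LIKELY", "VERY_LIKELY"}


def emotion_share(faces, emotion):
    """Fraction of detected faces where `emotion` (e.g. 'joyLikelihood')
    is rated LIKELY or VERY_LIKELY."""
    if not faces:
        return 0.0
    hits = sum(1 for face in faces if face.get(emotion) in POSITIVE)
    return hits / len(faces)


faces = [
    {"joyLikelihood": "VERY_LIKELY", "sorrowLikelihood": "VERY_UNLIKELY"},
    {"joyLikelihood": "UNLIKELY", "sorrowLikelihood": "POSSIBLE"},
]
print(emotion_share(faces, "joyLikelihood"))  # -> 0.5
```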

A list of people who appeared five times or more:

+-----------------------------------+-----------------+
| Name | Number of times |
+-----------------------------------+-----------------+
| Donald Trump | 155 |
| Recep Tayyip Erdoğan | 35 |
| Rex W. Tillerson | 28 |
| Vladimir Putin | 23 |
| Kim Jong-un | 15 |
| Barack Obama | 14 |
| Benjamin Netanyahu | 13 |
| Jim Mattis | 12 |
| Emmanuel Macron | 12 |
| Sergey Lavrov | 11 |
| Mahmoud Abbas | 10 |
| Xi Jinping | 9 |
| Haider al-Abadi | 9 |
| Abdel Fattah el-Sisi | 9 |
| Mohammed bin Abdulrahman Al Thani | 9 |
| Theresa May | 8 |
| Tamim bin Hamad Al Thani | 8 |
| Saad Hariri | 8 |
| Michel Temer | 8 |
| Shinzō Abe | 8 |
| Staffan de Mistura | 8 |
| Mevlüt Çavuşoğlu | 7 |
| Nawaz Sharif | 6 |
| Boris Johnson | 6 |
| Bashar al-Assad | 6 |
| Robert Mugabe | 6 |
| Paul Ryan | 6 |
| James Comey | 6 |
| Rodrigo Duterte | 6 |
| Jared Kushner | 5 |
| Malcolm Turnbull | 5 |
| Angela Merkel | 5 |
| Mike Pence | 5 |
| Carles Puigdemont | 5 |
+-----------------------------------+-----------------+

Final thoughts

Image-analysis tools mean little on their own without the right questions. To yield actionable results, these technologies should ideally be integrated into existing newsroom processes so they provide value for both journalists and viewers.

Next, we plan to experiment with the following integrations:

  • Tagging photo repositories inside our CMS to make it easier for our journalists to find specific images very quickly. For example, find all images of Donald Trump next to Emmanuel Macron with a smile on his face.
  • Help journalists find the best photo that matches the story. Or better yet, filter out all the images that should not go with the story.
  • Utilise Google’s Cloud Video Intelligence to analyse the contents of live video and extract newsworthy content on the fly.
  • Apply this technology to VR and 360 images where objects in a given scene can be detected.
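The first integration, querying a tagged photo repository, falls out of the stored labels almost for free. A toy demonstration using an in-memory SQLite database as a stand-in for our MySQL store (the table name and rows are invented for the example):

```python
import sqlite3

# In-memory stand-in for a MySQL table of (image, label) tags.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE image_labels (image TEXT, label TEXT)")
conn.executemany(
    "INSERT INTO image_labels VALUES (?, ?)",
    [
        ("img_001.jpg", "Donald Trump"),
        ("img_001.jpg", "Emmanuel Macron"),
        ("img_002.jpg", "Donald Trump"),
        ("img_003.jpg", "Emmanuel Macron"),
    ],
)

# Find images tagged with BOTH people: group by image and require
# two distinct matching labels.
rows = conn.execute(
    """
    SELECT image FROM image_labels
    WHERE label IN ('Donald Trump', 'Emmanuel Macron')
    GROUP BY image
    HAVING COUNT(DISTINCT label) = 2
    """
).fetchall()
print(rows)  # -> [('img_001.jpg',)]
```

The same `GROUP BY` / `HAVING` pattern extends to any number of required tags, which is what would make a CMS search like “Trump next to Macron, smiling” fast to answer.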

We believe 2018 will push machine learning forward, and we are eager to develop its applications within the newsroom.

Those were our questions. If you were to analyse the same set of photos, what questions would you ask?

Follow us on Twitter and Instagram
