View from the MozFest Fringe — Can machines see the invisible?

Stevie Benton
Published in Mozilla Festival
Oct 26, 2016 · 5 min read
Dr Miriam Redi delivers her compelling and fascinating presentation

In this compelling MozFest Fringe event, organised by BCS Women in London, Dr Miriam Redi, a research scientist at Bell Labs UK, presented on the topic: “Can machines see the invisible?”

It turns out the answer is yes, and a much more resounding ‘yes’ than we might have expected.

Dr Redi has a background in computer vision, working to discover how machines can analyse and understand images. Together with her team at Bell Labs, she is working to understand human behaviours through computational methods, with the aim of designing new services and producing long-term research. It’s an interesting combination of disciplines, including computing, urban studies, sociology and psychology.

When we look at images, we don’t always agree on what is actually there, because an image also contains invisible elements, such as aesthetic appeal or sentiment, that are subjective. So how can we make machines understand these intangible properties? The answer is to embed knowledge from different disciplines within a computer vision system. One of the ways Dr Redi does this is through a process called Computational Aesthetics.

Simply put, Computational Aesthetics is a way to teach machine vision systems how to assess image beauty.

Humans are asked to annotate large numbers of images for aesthetic appeal, creating an information base for the machine. Machines can then be taught to replicate those human judgments. One approach is to have machines evaluate the same rules that photographers follow, such as proportion and the rule of thirds. Using deep learning techniques, machines can also learn to assign intangible qualities such as beauty, by automatically discovering which relationships between pixels correspond to high aesthetic scores.
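To make the deep-learning step concrete, here is a minimal sketch of how such a beauty scorer might be trained in Python with PyTorch, assuming a dataset of images paired with crowd-sourced aesthetic ratings. The model choice, loss and hyperparameters are illustrative assumptions, not details of Dr Redi’s actual system.

```python
# Sketch only: fine-tune a pretrained CNN to predict crowd-sourced aesthetic
# scores. Dataset fields and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone; the classification head is replaced with a single
# regression output representing the predicted aesthetic score.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader):
    """One pass over (image_tensor, human_score) pairs from an annotated set."""
    model.train()
    for images, scores in loader:            # scores: crowd-sourced beauty ratings
        optimizer.zero_grad()
        predictions = model(images).squeeze(1)
        loss = criterion(predictions, scores.float())
        loss.backward()
        optimizer.step()
```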

One of the reasons for doing this is to teach machines to surface beautiful but unpopular content. Image sharing platforms such as Flickr have huge numbers of users, but not all of those users have followers. Much like an iceberg, some beautiful content sits on the surface, but much, much more lies below it, produced by users who don’t have a following on the platform. Dr Redi and her team took a sample of 10 million Flickr images with little or no social standing, ran them through the computer to identify the top images, then used crowdsourcing to compare these images with popular content. The unpopular content surfaced by computational aesthetics techniques was often judged by human viewers to be more beautiful than the popular content.
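As a rough illustration of that pipeline, the sketch below scores images from low-follower accounts and keeps only the highest-scoring ones. The record fields (follower_count, image, url) and the threshold are assumptions made for the example, not Flickr’s real schema.

```python
def surface_hidden_gems(records, score_fn, follower_threshold=10, top_k=100):
    """Return the top_k highest-scoring images from little-followed accounts.

    records:  iterable of dicts with hypothetical keys 'follower_count',
              'image' and 'url' (invented fields for this sketch).
    score_fn: any callable mapping an image to a predicted aesthetic score,
              e.g. the model sketched earlier.
    """
    scored = [
        (score_fn(rec["image"]), rec["url"])
        for rec in records
        if rec["follower_count"] <= follower_threshold   # "below the surface" content
    ]
    scored.sort(reverse=True)                            # most beautiful first
    return scored[:top_k]
```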

Portrait photography is very difficult for computational assessment because humans have very specific psychological processes related to facial recognition. For humans, each face carries layers of interpreted meaning beyond proportion and framing, such as reactions or feelings shaped by experience, socialisation and unconscious bias. By processing the images based on the techniques photographers use, such as proportion, sharpness and line, Dr Redi’s team found that the computer judged portraits to be equally beautiful regardless of age, gender, ethnicity or other factors. There’s something humans can learn from machines, perhaps.
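Measures like sharpness and line content can be computed with off-the-shelf tools. The snippet below uses OpenCV to extract a few composition proxies that are common in computational aesthetics work generally; they are not necessarily the exact features Dr Redi’s team used.

```python
# Illustrative composition measurements for a single image, using OpenCV.
import cv2
import numpy as np

def compositional_features(path):
    """Return a small dict of composition-related measurements for one image."""
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()    # crisp focus vs. blur
    contrast = float(gray.std())                         # tonal spread
    edges = cv2.Canny(gray, 100, 200)
    line_density = float(np.count_nonzero(edges)) / edges.size  # strong lines

    return {"sharpness": sharpness, "contrast": contrast, "line_density": line_density}
```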

What can machines tell us about people based on their social media profile pictures? The University of Texas at Austin sent students to different places around the city and asked them to annotate each place on various parameters to record its ambience, such as whether it was considered hipster, cool, romantic, for extroverts and so on. Researchers then selected a number of social media profile pictures from the frequent customers of these places. A computer was taught to recognise aspects of the profile pictures of people going to the same places, such as whether the subject was wearing glasses, their facial expressions and the colours they wore, to build a profile of the people who frequent each place.
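One way to picture that last step is as a simple supervised classifier: average a few image-derived features over each venue’s customers, then fit a model to the students’ ambience labels. The features, values and labels below are invented purely for illustration; the study’s real variables are not described in this post.

```python
# Toy sketch: predict a venue's ambience label from averaged profile-picture features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-venue feature vectors, each averaged over that venue's customers:
# [fraction wearing glasses, mean smile intensity, mean colour warmth]
venue_features = np.array([
    [0.60, 0.30, 0.40],
    [0.10, 0.80, 0.70],
    [0.55, 0.25, 0.35],
    [0.15, 0.75, 0.65],
])
ambience_labels = ["hipster", "romantic", "hipster", "romantic"]

classifier = LogisticRegression().fit(venue_features, ambience_labels)
print(classifier.predict([[0.5, 0.4, 0.45]]))   # predicted ambience for a new venue
```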

The team then tested the computer’s assessments by asking students to make similar assessments. They found that the students made more value judgements based on cultural stereotypes. For example, students viewed certain places as romantic and decided they were mostly aimed at women; the computer disagreed, pointing instead to objective factors such as the colours worn or the lines in the images. Another example was a place the students saw as a “pick-up place” for women, but the computer disagreed and found an equal gender split among its customers.

The venue for Dr Redi’s presentation quickly filled up.

The technology and learning behind this kind of artificial intelligence is accelerating rapidly and becoming ever more powerful and accurate.

The implications of this technology and its applications are not yet widely understood, but they could be profound.

On the one hand, there are some powerful positives. Providing an audience for genuinely wonderful images that would otherwise have gone unnoticed could launch many careers, and bring much beauty to us all. We’ve seen how this use of technology can help to dispel stereotypes. Someone in the audience pointed out that this kind of artificial intelligence could have great potential to help people on the autism spectrum who find it difficult to make the kinds of distinctions these computers can now make.

On the other hand, will we soon reach a point where computers can accurately assess who we are, our views and opinions, and the kind of company we keep, simply by looking at the photos we place online? We are already aware that some types of social media tend to create a filter bubble around us, presenting content the platform believes we will like. Could this escalate further, with machines becoming the ultimate influencers of thought, taste and trends?

The issues surrounding artificial intelligence are important because in some ways the technology is overtaking our understanding of what it can do. This is why topics such as online privacy and security take such a prominent position at MozFest. How can we make sure that the use of artificial intelligence contributes in a positive way to the web we want?

You can learn more about Dr Redi’s work here.
