Bad robots, quantum teleportation and what does AI think of the Mona Lisa?

By Imogen Malpas and Endre Szvetnik

Sparrow
sparrow.science

--

Artificial Intelligence can liberate us from mundane tasks, solve tough scientific problems and even fend off shady hackers. But what about the ethical questions and predictions of AI taking away human control? We went to #AIBE2018 to find out.

On a cold, sunny Sunday, cool wind in our hair… no, not the Eagles: Sparrho found itself at the Artificial Intelligence in Business and Entrepreneurship conference, a.k.a. #AIBE2018, right next to London’s iconic Westminster Abbey.

Having survived the frosty wind outside, we spotted a talk on the bill about ‘surviving the AI Winter’.

Was it really coming, the AI Winter, we wondered?

Well, luckily for humanity, an ‘AI Winter’ is not some doomsday scenario, where we are ruled by the machines, scraping by on our guaranteed basic income.

In fact, it’s a period when funding for AI research dries up. That’s far from the case now and, as we saw, there are exciting applications and important questions around the corner.

Could this sleek-looking humanoid robot be your future home assistant?

AIBE: a meeting place of heavyweights and agile startups

For a conference that’s only in its second year, AIBE is a big deal. Set up by London School of Economics students and later involving other top universities, it punches above its weight by securing names such as Microsoft, IBM Watson, Oxford University and Amazon.

It also attracts a number of smaller firms and startups showcasing exciting AI applications, including an AI music composer and a cute robot assistant.

Meet Pepper, the robot — she’ll set you back a cool £17,500

It’s really about machine learning

As some argue, when we talk about Artificial Intelligence, we are really talking about crafty algorithms that use statistical techniques to find patterns in large amounts of data and then make decisions based on those patterns.
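To make that concrete, here is a minimal sketch of ours (not something shown at the conference) of what ‘finding patterns and making decisions’ looks like in code, using scikit-learn and entirely made-up data.

```python
# A minimal sketch (not from the talk): "AI" as pattern-finding plus decisions.
# We fit a small classifier on made-up data, then let it decide on a new case.
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [emails per day, meetings per day] -> 1 if "overloaded"
X = [[10, 1], [80, 6], [15, 2], [120, 8], [20, 3], [95, 7]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)                      # the "statistical pattern-finding" step

print(model.predict([[70, 5]]))      # the "decision" step for an unseen case
```

No mouse-level intelligence required: the model simply encodes thresholds it found in the training data.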

Freaking out good folks with talk about a Big Bad Robot, a ‘superintelligence’ and a ‘singularity’ is misleading, “because algorithms have less intelligence than a mouse”.

This is the opinion of one of the most intriguing speakers, Luciano Floridi, holder of the lengthy title of ‘Professor of Philosophy and Ethics of Information and Director of the Digital Ethics Lab at the Oxford Internet Institute, University of Oxford’.

Prof. Floridi claims the real challenge will not come from technological innovation itself, but the governance of the digital domain.

“The real challenge [will be] the governance of the digital domain.”

The question is who will lay down and enforce the many guidelines needed to enable the use of AI, because the increasing use of algorithms raises a number of pressing ethical challenges.

Floridi also warned that, if we don’t think carefully about these challenges, we might be facing costly damage limitation, akin to a long overdue visit to the dentist. Better think about prevention.

The EU has already started to pay attention, with the European Data Protection Supervisor’s Ethics Advisory Group (where Prof. Floridi is a member) issuing a fresh report this year.

AI boosts your business, just don’t forget to feed the data

But, apart from those ethical questions, the conference was a great showcase of practical AI solutions.

Hugo Pinto, managing director at Accenture, kicked off the meeting by speaking about how AI dramatically improves efficiency — although most businesses don’t know it yet.

He detailed how to holistically boost the yield of your business — by learning from experience, monitoring outcomes, and plugging in data.

His experience includes time as a board member with CognitionX and as a business mentor at the Open Data Institute — so you should probably take his advice.

Recognising and categorising: is the Mona Lisa a selfie?

Next up was Danilo Poccia, resident Tech Evangelist at Amazon, providing a fascinating insight into the cool things AI can do to recognise and categorise things around us — ‘things’ in this case being facial expressions, intended meanings, and activities spotted in a photo.

We heard how AI-performed sentiment analysis determined that the Mona Lisa was ‘not smiling, although appearing happy’, and even suggested that the famous painting could be a selfie (there is a long-running debate among humans over whether Leonardo in fact painted a self-portrait).
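Poccia’s demo presumably ran on Amazon’s own services; purely as an illustration, here is a hedged sketch of how a smile-and-emotion check like the Mona Lisa one could be scripted against Amazon Rekognition with boto3 (the image file name is our invention).

```python
# Hedged sketch: asking Amazon Rekognition whether a face in an image is
# smiling and which emotions it appears to show (file name is hypothetical).
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")

with open("mona_lisa.jpg", "rb") as image:
    response = rekognition.detect_faces(
        Image={"Bytes": image.read()},
        Attributes=["ALL"],          # include smile, emotions, pose, etc.
    )

for face in response["FaceDetails"]:
    print("Smiling:", face["Smile"]["Value"])
    for emotion in face["Emotions"]:
        print(f'{emotion["Type"]}: {emotion["Confidence"]:.1f}%')
```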

Poccia also explained how ambiguity recognition works. If you ask a well-programmed AI ‘Who recorded Pet Sounds?’, you will get the answer that it was the Beach Boys (whose work includes the album Pet Sounds) and not your geekish neighbour picking up growls at the zoo on his phone.

In the business world, AI can do the tedious job of parsing through a photo library and autotagging the pictures, having recognised what’s in the image. It can also sentiment-analyse company mentions on Twitter and tweak a marketing campaign accordingly.
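As a rough illustration of that kind of pipeline (our sketch, nothing shown on stage), the auto-tagging step could be a call to an image-labelling API and the Twitter step a call to a text-sentiment API; here we assume Amazon Rekognition and Amazon Comprehend via boto3, with a made-up photo and tweet.

```python
# Hedged sketch of the two business uses mentioned above, using AWS services
# via boto3 (the photo path and tweet text are made up for illustration).
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")
comprehend = boto3.client("comprehend", region_name="eu-west-1")

# 1. Auto-tag a photo: Rekognition returns labels for objects it recognises.
with open("office_party.jpg", "rb") as photo:
    labels = rekognition.detect_labels(
        Image={"Bytes": photo.read()}, MaxLabels=5
    )
print("Auto-tags:", [label["Name"] for label in labels["Labels"]])

# 2. Sentiment-analyse a company mention pulled from Twitter.
tweet = "Loving the new release from @ExampleCorp, support was brilliant!"
sentiment = comprehend.detect_sentiment(Text=tweet, LanguageCode="en")
print("Tweet sentiment:", sentiment["Sentiment"])   # e.g. POSITIVE
```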

AI joined the debate about the Mona Lisa

John Williams has a machine competitor and it sounds good

Machine learning is also making inroads into music, although you’d think AI and art don’t go hand in hand.

We had the honour of meeting one of the many exceptions to this ‘rule’. (That is, if you can truly ‘meet’ an AI.)

Created in 2016 by a French team of engineers, computer scientists and musicians, AIVA — whose full name is Artificial Intelligence Virtual Artist — is a deep learning algorithm applied to composition. The team’s mission? To establish AIVA as one of the greatest composers in history, and fuel the world with personalised music. You heard it!

Having listened to a few of AIVA’s pieces, we found it quite incredible how she is able to imitate the styles of famous composers, from the classical greats to John Williams, to such powerful effect.

With technology like this, it’s easy to imagine some amazing possibilities — the imminent revival of long-dead musical greats, ‘new’ music by Bach and Beethoven filling concert halls, artists’ immortality preserved permanently.

We could not avoid speaking up on behalf of humans and pressed AIVA’s co-founder, Vincent Barreau, about what role would be left for us in all this. As it happens, some of AIVA’s oeuvre was recorded by a human orchestra, led by a human conductor, and people still have input into how the finished piece should sound. Phew, humanity survives!

So, could this be the future of music? We’re not sure, but watch this space.

Peppering speech recognition with deep learning

We also bumped into some companies that are quietly making our lives more comfortable with the help of AI.

Particularly impressive were the real-time video transcription capabilities of Speechmatics (recognising speech, accents and languages).

They utilise machine learning for automatic speech recognition in 72 languages and claim that their system can ‘learn’ a new language within a week.
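Speechmatics did not walk us through its API, so purely as a generic illustration, here is what programmatic speech-to-text looks like with the open-source SpeechRecognition package (the audio file name is made up); this is not Speechmatics’ own system.

```python
# Generic speech-to-text sketch using the open-source SpeechRecognition
# package -- an illustration only, not Speechmatics' actual API.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.AudioFile("interview_clip.wav") as source:   # hypothetical file
    audio = recognizer.record(source)                 # read the whole clip

try:
    # Send the audio to Google's free web recogniser for a transcript.
    print(recognizer.recognize_google(audio, language="en-GB"))
except sr.UnknownValueError:
    print("Speech was unintelligible")
```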

As speech input is becoming more widespread globally — remember you can send a text by talking to Siri, Google Assistant or Alexa — there is more demand for this kind of tech.

We had a more entertaining encounter with speech recognition in the form of Volume’s Pepper robot (mentioned above). The elegantly gesticulating Pepper takes things several steps further.

As Chief AI Officer Benoit Alvarez explained, Pepper is a humanoid robot that works as a receptionist.

She is a keen listener, focusing her occasionally green eyes on you, and is capable of recognising facial expressions and tone of voice, and even telling first-time visitors from returning customers. While Pepper is making small talk with you, she shoots off an email to let your host know you have arrived.

We really enjoyed talking to Pepper, although she did not manage to sell us a new Aston Martin. That was not her fault, though.

Watch our chat with Pepper

AI needs to explain its data and decisions

Back in the hall, Dr. James Luke, Chief Architect of IBM’s groundbreaking AI platform Watson, spoke about cognitive systems needing new people with new skills to build them. A key skill? Explainability.

For example, a self-driving car’s AI consists of several subsystems, and here is how it might explain the data it used before pulling over for an ambulance:

“[…] the noise classifier detected a siren approaching from behind, the vision system identified an emergency vehicle approaching from behind, the road was straight and it was a safe place to stop without causing an obstruction.”

But this is only half of the story: the system also needs to explain how it reached a decision. If there is no clear explanation, “machine learning systems should be able to support their decisions with training evidence”.
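To illustrate the idea, here is a toy sketch of ours (not IBM’s Watson architecture): the final decision is returned together with the evidence each subsystem contributed, so the system can always answer “why?”.

```python
# Toy sketch of "explainable" decision-making: each subsystem reports its
# evidence, and the decision is returned together with that evidence.
# This is our illustration, not IBM's actual Watson architecture.

def decide_pull_over(noise_classifier: dict, vision_system: dict, road_monitor: dict) -> dict:
    evidence = []
    if noise_classifier.get("siren_behind"):
        evidence.append("noise classifier detected a siren approaching from behind")
    if vision_system.get("emergency_vehicle_behind"):
        evidence.append("vision system identified an emergency vehicle behind")
    if road_monitor.get("safe_stop_available"):
        evidence.append("road monitor found a safe place to stop")

    decision = "pull over" if len(evidence) == 3 else "continue"
    return {"decision": decision, "because": evidence}

print(decide_pull_over(
    {"siren_behind": True},
    {"emergency_vehicle_behind": True},
    {"safe_stop_available": True},
))
```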

Meanwhile Nigel Willson, Microsoft’s Global Strategist, started his talk with some Deep Purple and quickly got deep into quantum computing and how it will change the world.

He told us how one day, quantum computing will be able to massively boost computational speeds to solve complex problems (‘complex’ being an understatement: these are problems that’d take ‘regular’ computers longer than the life expectancy of the universe to figure out).

The price for this awesome power? Stable, highly secure and costly environments, massive facilities and refrigeration down to around -459.65 F, a whisker above absolute zero.

Nigel treated us to a demonstration of a quantum computer simulator using the QSharp language, with a strange message flashing up on the screen — ‘Teleportation successful’ — which was just quantum talk for a successful computation. (Or so he told us…)

Are there too many AI startups?

But let’s get back to the present and address a fundamental question: what makes AI startups successful?

The answer came from Alice Bentinck MBE of Entrepreneur First: the 30-year-old co-founder of a multinational, multi-million-dollar startup accelerator and one of the most influential women in the UK IT scene.

Alice emphasised the importance of owning one’s data, arguing that startups are most successful when they maintain a tight hold on their input.

A new AI startup has been created in the UK every week for the past three years, but, she warned, they are headed for failure without proprietary access to data, unique technology and/or experimentation that allows for new discoveries.

We also heard from Ioana-Roxana Dascalu of DarkTrace about AI stepping into the trenches to offer cyber-defence and predict hacking attacks.

You guessed it: DarkTrace also uses machine learning and probability theory to channel Tom Cruise in Minority Report, predicting threats through changes in behaviour.
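DarkTrace’s actual models were not shown, but as a generic illustration, spotting ‘changes in behaviour’ can be framed as anomaly detection: learn what normal traffic looks like, then flag departures from it. Here is a minimal sketch with scikit-learn’s IsolationForest and made-up network features.

```python
# Generic anomaly-detection sketch (not DarkTrace's actual method): learn a
# model of "normal" network behaviour, then flag connections that deviate.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Made-up "normal" traffic: [bytes sent (KB), connections per minute]
normal_traffic = rng.normal(loc=[200, 5], scale=[30, 1], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A new observation: a machine suddenly sending far more data than usual.
suspicious = [[5000, 40]]
print(detector.predict(suspicious))   # -1 means "anomaly", 1 means "normal"
```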

So, is an AI winter coming?

Well, some people say yes, some people say no.

More interesting than the question of funding is that machine learning already offers plenty of solutions for taking over repetitive tasks and speeding up their execution.

At the same time — as we heard — it would be silly to overhype AI, as we are nowhere near a ‘superintelligence’ that would somehow threaten us.

Rather, we need to focus on regulating the algorithms that assist us and clearly explaining what they will do and how… and on paying attention to the real winter, by wearing a hat.

--

Steve, the sparrow, represents contributions from the Sparrow Team and our expert researchers. We accredit external contributors where appropriate.