“We’re writing the future as we go along”: talking to Nigel Willson, Microsoft’s Global Strategist

Sparrho spoke to Nigel after his appearance at the 2018 Artificial Intelligence in Business and Entrepreneurship conference — and he covered a lot of ground, from AI ethics to why we don’t have to understand quantum computers to accept them. Read our conversation here.

Sparrow
sparrow.science

--

Nigel Willson from Microsoft shows off an impromptu facial analysis of the AIBE 2018 team. Photo by Bartu Kaleagasi on Twitter

‘What were your thoughts on this year’s AIBE conference?’

I went to the inaugural AIBE event last year, and I was blown away by the fact that people turned up at the weekend — it just showed how much passion and enthusiasm there is around this topic. The level of noise in the breaks as everyone was talking showed that there was a real buzz.

It really is on a different level to other events on AI, both in terms of interaction and the quest for knowledge. This year took it further still: it was a sell-out, bigger than 2017, but it maintained the same passion. It’s not for profit, it’s not corporate, and people aren’t going because they have to but because they want to. Long may it continue!

“We all need to get back into that mindset of being continuous learners and being enthusiastic about the future.”

The enthusiasm is partly generational, because the attendees were university or post-university students: people who have a lot to learn and are still excited about learning and understanding. As a society, it’s really important, as we go forward, to recognise that we’re all going to be lifelong learners: we live longer and work longer, and our jobs will change regularly, so we all need to get back into that mindset of being continuous learners and being enthusiastic about the future.

(Read about what we’ve learnt at #AIBE2018)

‘You touched on quantum computing at AIBE. How does it work, and do we need to get it to use it?’

Let’s go back to the birth of quantum computing, which isn’t that long ago. Richard Feynman remarked that you could count on two hands the number of people who truly understand Einstein’s theory of relativity — but that, in comparison, “nobody understands quantum mechanics.”

Quantum theory is hard for people to comprehend because it’s so different from conventional science: everything happens at the atomic level, where we see things that defy our conventional, classical scientific understanding.

To understand quantum computing, start with classical computing, which is built on units of information called bits that can exist in one of two states: 0 or 1. When we need to store another piece of information, we add another bit, so the system grows linearly.

Now you know: a bit is the basic unit of computer information and has only two possible values, normally written as 0 or 1

But at the quantum level — and a lot of this goes beyond explanation, so there’s a leap of faith required for us mere mortals — quantum bits, or qubits, can exist in both states simultaneously. This phenomenon is called superposition: a qubit is both 0 and 1 at the same time. Compared with conventional bits, qubits give us an exponential increase in power with every qubit we add.
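A toy back-of-the-envelope sketch of that contrast (in plain Python, purely illustrative — the function names here are ours, not from any quantum SDK): an n-bit classical register is always in exactly one of 2ⁿ states, whereas describing an n-qubit superposition takes 2ⁿ amplitudes, so the description doubles with every qubit added.

```python
def classical_states(n_bits):
    # A register of n classical bits is always in exactly ONE of these
    # 2**n states at any moment; adding a bit adds one unit of storage.
    return 2 ** n_bits

def equal_superposition(n_qubits):
    # A toy state vector for n qubits in equal superposition: one complex
    # amplitude per basis state, 2**n in total. Each amplitude is
    # 1/sqrt(2**n), so every measurement outcome is equally likely.
    dim = 2 ** n_qubits
    amp = (1 / dim) ** 0.5
    return [amp] * dim

# One extra qubit doubles the length of the amplitude vector.
assert len(equal_superposition(3)) == 8
assert len(equal_superposition(4)) == 16

# The squared amplitudes are probabilities, so they must sum to 1.
total = sum(a * a for a in equal_superposition(4))
assert abs(total - 1.0) < 1e-9
```

This is only bookkeeping on an ordinary computer, of course — the point is the exponential growth of the state description, which is exactly what makes classical simulation of quantum systems so expensive.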

“At the quantum level it’s like one massive party” (🎉)

Classical computing is very stable (the odd crash aside), but at the quantum level it’s like one massive party going on down there, subject to interference and thermal noise. The quantum state is much harder to control, which is why supercooling is necessary to calm things down — and why it’s taken so long for people to harness quantum technology in a computing model. The key thing I wanted to highlight in my talk at AIBE was that you don’t need to be a quantum expert to accept that it works — and to use it.

‘What’s the timeline for quantum computing reaching mainstream use?’

That’s the million-dollar question! There’s no easy answer, other than to say that work on this isn’t new. Microsoft started working on quantum computing in the late nineties, as did others, and the field has seen massive financial investment in that time. Nor is quantum going to replace all traditional computers; it’ll be an addition.

IBM researcher Jerry Chow in the quantum computing lab at IBM’s T.J. Watson Research Center. (Credit: Jon Simon/Feature Photo Service for IBM)

Quantum computing will probably start to become mainstream in the next five years. If you listen to the news or read articles from Microsoft, Google or IBM, they all talk about the technology being imminent — but that doesn’t mean there’ll be some massive, fully operational quantum computer from day one. It’ll grow over a period of time, so I think a five-year time frame is realistic.

‘What facilities will quantum computing require — and where will we find them?’

Quantum computers will largely be confined to research centres. Microsoft has a research establishment, Station Q, in California, and there are a few others around the world. These computers won’t initially be in data centres — they’ll stay in research centres until they’ve proven reliable, scalable and ready for primetime. Once they’re ready to be rolled out for commercial use, they’ll be housed in massive refrigerators within big data centres, with people consuming their services through a cloud model.

‘What does AI have in store for us?’

There is an awful lot of hype around AI, and people’s perception is largely based on Hollywood movies. We go to the movies to be excited, frightened and challenged, so when you see AI in those contexts, of course that’s how it’s portrayed. The reality is that AI is like anything else: it can be very dangerous, or it can be amazing, depending on how it’s used or misused.

Take electricity — it can be life-threatening to touch a live wire, but electricity is nevertheless an amazing technology that brings so much good to the world. AI is in the same bucket — absolutely people could misuse it, so it’s up to society and our governments to make sure that it is used in the right way.

“AI is going to be in every smartphone and Christmas toy and practically everything that we do.”

I was glad that ethics was a theme at AIBE. It was great to hear people talking about AI’s ethical implications, its trustworthiness; all the things that people don’t see as exciting but that are fundamentally going to affect how AI is implemented. Microsoft advocates responsible AI — we have a book called The Future Computed, which discusses AI and its social role, as well as the need for accountability and transparency.

It’s especially important that we talk about the need for regulation, because AI is going to be in every smartphone and Christmas toy and practically everything that we do. That means that it’s got an awful lot of visibility — and that’s what will get it on people’s agendas.

We have to start somewhere: over time, the conversation will develop and mature. One of the really good examples of this was with Strava [the fitness tracking app that earlier this year was found to reveal potentially highly sensitive military location data through public disclosure of its users’ workout routes]. Probably, in fairness, nobody considered that that’d be an issue. And we will come up against things like this again: so it’s important that we immediately act on it and incorporate that thinking and knowledge into what we’re doing.

“We mustn’t become complacent, since it’s a technology that’s not going to go away.”

Provided we keep these conversations going, and engage people at all different levels, we’ll keep our progress on track. The danger always comes from people becoming complacent, and we mustn’t become complacent, since it’s a technology that’s not going to go away: it’s going to become more powerful and do bigger and better things over time, so we need to stay aware of that.

I think it’s also a hard one to predict: the future isn’t written, and it never has been. We’re writing it as we go along. And if everyone’s aware of that, that’s how we make sure things go the right way.

For more thoughts, you can find Nigel on Twitter and Medium.

--

Steve, the sparrow, represents contributions from the Sparrow Team and our expert researchers. We accredit external contributors where appropriate.