AI: Artificial Intelligence or Artificially Inflated…?

Rory Macduff-Duncan
Published in GradientHQ
Aug 21, 2018 · 14 min read
The CogX AI conference at London’s Tobacco Dock

Back in June we attended CogX, an annual festival of all things AI held in London. It was huge:

  • 6,500 attendees
  • 17 stages of content
  • 300 speakers

In this blog I’m going to write about a few of the talks that really grabbed our attention.

But first…

Why did we go?

Cutting edge technology fascinates us. As we continue to develop Gurn we're always looking to the future to see what tech we could use to improve the value we deliver to our users.

Whilst AI per se isn't something we're looking to capitalise on right this minute, we definitely want to explore how Machine Learning can improve the accuracy and relevancy of the results we return for our users. So we wanted to get out there, listen to some talks, meet some people and find out what's going on in the world of AI.

We saw lots of talks over the two days. Some amazing, some good and some way beyond our field of expertise — which was scary, but brilliant!

Why the title for the blog?

Firstly, thank you to Manoj Saxena (Chairman of Cognitive Scale) for the inspiration for the blog title. In his introduction to the session on Enterprise Augmented Intelligence in Financial Services, Manoj gave a great overview of where he saw AI today, with the proclamation that…

“AI today is both artificially inflated and also stands for amazing innovations”

We think he is spot on — recognising that there are some brilliant advancements out there, but also a lot of hot air!

Anyway, enough of the introduction.

Who did we listen to and what did they say?

Matt Hancock MP, then Secretary of State for Digital, Culture, Media and Sport

Gov & Tech — a match made in…?

Being a UK-based startup we made sure we attended this talk. It was given by Matt Hancock MP, the UK minister responsible for 'Digital'. He's also been dubbed the 'Minister for AI' — mainly because it falls under his brief, but also because not many of his colleagues are keen to roll up their sleeves and take the time to 'get to grips' with AI. He's keen, he's vocal, he gets out and meets tech entrepreneurs, and he relentlessly pushes for dialogue between tech and government. (Not an easy job!)

His talk was very much one of a politician, but what can you expect? He was out there to reassure us that the UK government 'gets it' and that it recognises it needs to do more — which was encouraging. He also called for greater investment from the Government, but the proof is in the pudding, as we Brits say! Will £1bn keep pace with China and the US…?

He spoke about how UK businesses' appetite for risk is not an adventurous one, and that he wants to "see more freedom to fail" across UK business. How, you might ask? Well, he didn't really say…

The State of AI panel discussion

This was a very lively debate between Jurgen Schmidhuber, Antoine Blondeau, Vishal Chatrath and Joanna Bryson.

Initial remarks were about how UK business is scared to fail (backing up Matt Hancock MP’s assertion) and that this holds back the Tech industry.

Leaders of companies need to take a stance, and show that failure is part of the process toward success.

The bit that really fascinated us was when the panel discussed the regulation of AI.

In life you can generally seek permission or ask for forgiveness. It's fair to say that the panel were split on how this should apply to AI. Antoine Blondeau stated "I'm fundamentally against regulation", wishing people's efforts to be focussed on innovation and empowerment.

Vishal Chatrath, on the other hand, spoke of the need for 'export controls' on AI to avoid its weaponisation, which in turn requires regulation and standards. His point was supported by Joanna Bryson, who didn't see regulation as a bad thing and saw the debate about AI as a chance for countries to cooperate.

Surely there must be a way for light-touch regulation not to stifle innovation? A way for cooperating countries to start laying down some standards, but not via the UN, where it'll take years to make any progress. The World Economic Forum perhaps? But then if a country strays out of line, what 'stick' do you have to hit them with to disincentivise deviation from the agreed standards?

Catch the video of the panel discussion here

Calum Chace, The Economic Singularity

Credit: Splento

Calum Chace’s talk was fascinating, thought-provoking and unapologetically cutting.

Best soundbite of the talk for us:

“AI is collar-blind”

There’s definitely a public perception that ‘the machines’ are coming primarily for the jobs of those working in factories, and that other industries are immune. But this is patently wrong.

One of Calum's main points was that it doesn't matter whether it's a blue collar job or a white collar job: if AI can reduce the cost of delivering the outcome, then why would people not deploy AI to do so? Especially where the cost per unit or per hour is high, as in white collar jobs such as auditing and legal services.

Calum explained that people are clinging to past performance as a guide for future success, whereas we're always taught, especially in the financial world, that past performance is not an indicator of future returns.

Politicians’ standard reaction to the idea that AI will take people’s jobs: hear no job losses, see no job losses and speak no job losses (Photo by Joao Tzanno on Unsplash)

What about his view on what politicians are saying?

He made the point that politicians can do nothing but say AI will create jobs. They have too much to lose in the short-term, i.e. their jobs via the ballot box, and will bury their heads in the sand even if it means their citizens will lose in the long-term. Politicians also know that if people thought robots were coming for their jobs then they would panic, and politicians like nation-wide panic about as much as they like losing elections.

The talk then moved onto the evolution of work, which has seen the migration of people from different ‘arenas of work’ due to technological advancements. Calum explained that the second agricultural revolution meant people left the field for the factory, the industrial revolution meant people left the factories for the office, but where do these workers go next…?

Calum's talk was one of the best of the conference. We see a lot of ostriches out there with regards to AI's possible impact on jobs, and his insights show that we need to be discussing this now. What is there to lose?

…apart from not having your workforce prepared for what’s coming down the line. Then there really will be panic.

If you want to read more of Calum’s work, then as a starter I highly recommend checking out his blog on The Reverse Luddite theory.

Baroness Onora O’Neill: Communication and Trustworthiness

Where are we with trust online? (Photo by Bernard Hermant on Unsplash)

This talk was awesome.

And it was probably the least technical talk we attended over the two days.

Although the title was Communication and Trustworthiness, the talk really centred around who is responsible for content in the modern day and whether companies will self-regulate.

To begin with, Baroness O'Neill took us on a journey from the Classical world of Socrates, through the era of the invention of the printing press, right up to the modern day, exploring how responsibility for the spoken and written word has differed throughout the ages.

Understanding who was responsible for the spoken word tended to be simple, Baroness O'Neill stated, as the orator was the sole distributor and thus responsible for their content. Orators are present at the point of authoritative delivery: they can answer questions and clarify.

With the written word, things are harder once something is published. Who truly wrote it, and who is responsible for the content? With the arrival of the printing press in the early modern period, neither question was answered… You could publish with anonymity and…

“…there was no distinction between a printer and a publisher, so who was responsible for the stuff that got produced?”

These questions, according to Baroness O'Neill, are still waiting to be answered in the age of social media. Nowadays, when the written word is consumed on social media, it is not always possible to identify the actual author to seek further clarification. So who is responsible? The potentially untraceable author, or the publisher?

Are we all just cyber romantics?

When the internet was coming into mass use, the public at large didn't really stop to think about these questions, as we were all too eager to get our hands on the technology. I remember it being almost like a 'consumer arms race': being able to go into school and say you had played with the newest tech would catapult you up into the stratosphere of 'cool'.

But were we too hasty craving this technology? Should we (by this I mean the adults in the room) have paused and tried to resolve these questions?

“10 years ago one might think of it as the age of the cyber romantics, everybody thought it was all net gain, things were getting better and better”

Here, Baroness O'Neill reminded me that until recent years I was not fully aware of the true cost of using online services — that in fact it wasn't all gain, and that 'free' didn't mean free at all.

Had people begun to discuss these issues more widely at earlier stages in the development of the modern social media age, the ethical storms that have arisen might have been seen through a different lens, one where people fully understood what they were giving up in return for using a free service.

Are the ethical storms behind us? Or are we just at the beginning? (Photo by Tom Strecker on Unsplash)

Baroness O’Neill classes these storms as private harms and public harms, which she characterises as follows:

  • Private harms are consequences of activities such as cyber bullying and financial crime.
  • Public harms are those which harm democracy and the public space.

If social media companies are to take responsibility for the content published on their platforms, then Baroness O'Neill sees private harms as the reason they will be moved to do so.

This is because there "is at least some convergence of interest between members of the public and the profits of these companies". The example cited was advertising: when companies' adverts appear alongside content that causes private harm, those companies pull their advertising spend from the social media platforms, something we have actually seen this past year. Though this needs private harm to be experienced at scale to have an effect.

But what about self-regulation? Is that possible?

A non-starter for Baroness O’Neill.

She said she had recently realised it was naive ever to have thought the tech companies would be interested in taking on the responsibilities of being publishers.

“I’ve come to think that is an illusion.”

And even if legislation were introduced to assign publishing responsibilities to tech companies, it would have limited effect. These companies operate as distributed businesses, off-shoring themselves in terms of labour and tax, for example, and would simply continue with this model so that the entity responsible for 'publishing' was resident in a jurisdiction that wouldn't burden it with said responsibilities.

"I don't think we can really expect to enter a world in which the online service providers, social media companies, data analytic companies [take on publishing responsibilities]… it is too profitable to not do it."

What next?

Baroness O'Neill didn't exactly leave us with a clear plan to solve this dilemma. Instead she mentioned some next steps we should take, including defining what types of communication are acceptable, and which ones aren't. When this is agreed we can press for redress for private harms and discuss what we, as users of the technologies, are willing to put up with.

The trouble is people are still reluctant to vote with their feet, an example being the recent #DeleteFacebook campaign; whilst some might take a stand and leave, their friends probably haven't, and so the fear of missing out starts to creep in.

Is artificially extending life intelligent?

(Left to right): Dr Jack Kreindler, Dr Gregory Bailey, Polina Mamoshina, Matt Eagles and Maxine Mackintosh

This was a panel discussion hosted by Dr Jack Kreindler.

The key takeaway for us was the question ‘are we talking about extending healthspan or lifespan?’

The example given was a family choosing to extend the life of a loved one who has dementia. What are their motivations, and whose emotional happiness do they have in mind by keeping their relative alive for, let's say, another 15 years?

As Dr Kreindler asked,

“if we can extend someone’s life so they live till they’re 150, but if 70 years of that is in poor cognitive condition, or a huge net drain on resources, then is it a worthy pursuit?”

The debate is nuanced, but the panel seemed split, with some members clearly seeing extension of life via AI as a success regardless of whether healthspan or lifespan was extended.

Maxine Mackintosh made it clear that it should be a choice for the person whose life is being extended — if they have the ability to follow a logical thought process and the outcome is extending their happiness, then why wouldn't you enable them to have their life extended?

Will AI be your next Chief Marketing Officer?

Wes Nichols (Board Partner, Upfront Ventures)

Wes Nichols took us on a whirlwind tour of the past, present and future of marketing.

He made it clear that a lot of current-day marketing methods and practices belong in the past: too many companies are focussed on analysing what has happened rather than looking forward and using data to predict and validate the outcomes they can achieve.

Specifically, he said the next level of marketing will be when someone can say to their boss, "I know within a small margin of error that if I do X then I will achieve Y. So give me £Z so I can achieve Y many times over." And the boss will trust them to deliver.

Stop looking backwards, use your data to predict what you can achieve
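
To make that concrete, here is a toy sketch of what "predict, then validate" could look like. This is our own illustration, not anything Wes presented; the figures, the simple linear model and the margin-of-error calculation are all assumptions made up for the example.

    # Toy sketch (our own, hypothetical figures): fit past campaign spend
    # against outcomes, then predict the result of a proposed spend with a
    # rough margin of error.
    import numpy as np

    spend = np.array([10, 20, 30, 40, 50], dtype=float)       # past spend, in £k
    leads = np.array([120, 210, 330, 410, 520], dtype=float)  # leads generated

    # Least-squares fit of leads = a * spend + b
    a, b = np.polyfit(spend, leads, deg=1)

    # Spread of the residuals gives a crude margin of error
    residuals = leads - (a * spend + b)
    sigma = residuals.std(ddof=2)

    proposed = 35.0  # proposed spend, in £k
    predicted = a * proposed + b
    print(f"Spend £{proposed:.0f}k -> ~{predicted:.0f} leads (±{2 * sigma:.0f})")

In other words, rather than reporting what last quarter's campaign did, you give a forward-looking estimate ("if I spend £X, I expect Y, give or take") that your boss can hold you to.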

Wes sees AI as powering marketing decisions, but that humans won’t be removed from the equation entirely.

“AI is more Augmented Intelligence, fusing the machine and the human for superior results.”

Wes invoked Charles Darwin to explain that we must become ‘responsive to change’, now more than ever, because we are experiencing change faster than ever before.

One piece of advice that really stuck with me, having spoken with friends recently about the large corporates they work for, is that if you work for a company and see it isn’t evolving, but standing still or changing at a glacial pace, then get out…

“There’s lots of exciting companies out there to work for.”

As co-founder of a startup, even we are conscious of not standing still, yet I see many friends at large corporates which are still moving at a glacial pace and launching enormous top-down initiatives that fail far more often than not. Why do they stay? I presume it's mainly down to job security and very competitive compensation packages. To prise them away and encourage them to take that leap of faith, people have to believe in the vision of the company they're going to work for and have courage. (That, and some nice stock options!)

How do we equip people for the future and re-skill the workforce?

(Left to right): Phil Smith (Chairman, Innovate UK), Deep Nishar (Senior Managing Partner, SoftBank Investment Advisors), Liz Ericson (Partner, McKinsey), Baroness Joanna Shields (CEO, Benevolent AI), Kathryn Parsons (CEO, Decoded) and Shiva Rajaraman (Chief Product Officer, WeWork)

This panel discussion was chaired by Phil Smith, and gave us an insight into the different views amongst senior leaders on how we should prepare ourselves, and our children, for the future of work.

Sending your kids to code camps is one of the ways to secure their success in the future workplace, according to Kathryn Parsons. This was in stark contrast to Deep Nishar, who said his advice to his children was to look to the Humanities and Arts, and not to learn to code, "as machines will do that in 10 years". Instead…

"learn to learn. Be able to understand and phrase problems in a way that convinces/educates people and instructs machines what to build."

Whether or not machines will be able to code in the next decade, it is interesting that people are rushing to become experts in disciplines that, for humans at least, may not be around in the future; as Deep Nishar went on to say, "ultimately coders are putting themselves out of business". And we need to accept that…

“The only constant is change”

If Deep Nishar is right about the need for a focus on the Humanities and Arts, then what are the softer skills that we should be learning?

Shiva Rajaraman from WeWork said he believes there are four skills people should learn to prepare them for the future:

  • Pitching — be able to explain your ideas and get buy-in from people.
  • Negotiating — be able to achieve your desired outcomes.
  • Understanding data — analysis of information to make sense of what to do next.
  • Psychology — understand the emotional levers and how best to navigate humans.

I think Shiva is pretty spot on!

Our parting thoughts

What we’ll take with us… (Photo by Cristina Gottardi on Unsplash)
  • A worthwhile event to attend. We came away reassured; reassured that smarter people than us are talking about AI’s impact at very senior levels.
  • Being able to listen to senior technology leaders talk about, and debate, where they think AI is at the moment was useful and sobering.
  • Most speakers were very pragmatic, and most were conscious of how AI is set to impact people's lives — both positively and negatively. Still, only a few were actually talking about what we must do to prepare the future-displaced workforce and how they will be provided for.
  • Always go and listen to talks outside your comfort zone! You never know what you might stumble on. Our 'we weren't planning to go to that' highlight: Kamil Tamiola's talk on Directed Protein Evolution. Check it out here.

Would have been great if…

  • Some panellists could have had their views challenged a bit more. In a few talks, statements were accepted as 'fact' without the speaker being asked to explain the evidence or sources.
  • Instead of 'meet the speaker' sessions, there had been structured workshops where you could work through some of a speaker's research with them and talk in detail about how to apply it to your problems.
  • Food inside the venue was £££. (Thankfully there was a McDonald's two minutes' walk from the venue :-) )

Until next year…?

Rory is co-founder of www.gurn.io. Gurn is a browser-based information retrieval tool that enables you to navigate to all your tools and resources with a single command. Check out how it can help you be more productive at work by getting you to what you need instantly and speeding up your workflow.
