Steve’s ITK: Just getting started

DARPA’s explanation of explainable AI

SUBSCRIBE TO THE NEWSLETTER HERE.

Opening thought: AI’s blackbox problem

Ever since I began writing about technology for a living (and even before that), I’ve had non-tech people ask me if I think the Internet, and by extension almost any piece of new technology, is a good or bad thing.

My reply over the years has remained consistent, albeit slightly flippant: ‘I’m agnostic,’ I’d say. ‘The truth is, this thing is only just getting started and no one knows how it’s gonna play out’.

The same can also be said of Artificial Intelligence (AI). Not in the ‘any startup with an algorithm’ sense of the word, but proper deep learning-based AI where, to put it over-simply, computers use neural networks to teach themselves.

In previous ITKs, I’ve written about what I’ve dubbed an AI honeymoon where AI is currently augmenting existing jobs before replacing them, and how I’m particularly bullish on the application of AI to healthcare.

And just a few weeks ago I wrote up Y Combinator President Sam Altman’s visit to London, where he talked about the need for mass retraining and a basic wage to help offset job displacement by AI before, as he sees it, humans and machines eventually become one.

However, a more pressing issue, and one that has left me scratching my head, is AI and accountability. Or, more accurately, the lack thereof.

That’s because when a computer that has taught itself makes a decision (as valid or useful as that decision may be), there is no easy way to know how it came about. Unlike an algorithm designed by a human, certain forms of deep learning cannot be easily reverse-engineered or interrogated. This is already giving rise to what has been called AI’s black box problem. From MIT Technology Review:

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either — but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate — and get along with — intelligent machines that could be unpredictable and inscrutable?

The article goes on to explain why deep learning powered by neural networks, a specific branch of AI, is winning the day despite a lack of explainability, and how this is giving rise to research into how to build some kind of feedback loop into AI so that, at least on a rudimentary level, you can ask a machine to explain why it came to a particular decision. See, for example, the DARPA program for XAI (explainable AI).
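To make that feedback-loop idea a little more concrete, here’s a minimal sketch, in Python, of one of the simplest probing techniques researchers use: nudge each input to an opaque model and watch how much the output moves, so you can at least say which inputs drove a given decision. The predict function below is a hypothetical stand-in, not any real system; a trained neural network could sit behind the same interface.

```python
# A rudimentary, perturbation-based "explanation" of a black-box model.
# We can only call predict(), not inspect its insides.
import numpy as np

def predict(x):
    # Hypothetical opaque model (a stand-in for a trained network).
    weights = np.array([0.8, -0.1, 2.5, 0.0])
    return 1.0 / (1.0 + np.exp(-x @ weights))  # sigmoid "score"

def explain(x, epsilon=0.01):
    """Estimate each feature's local influence on the prediction
    by nudging one feature at a time and measuring the change."""
    baseline = predict(x)
    influence = []
    for i in range(len(x)):
        perturbed = x.copy()
        perturbed[i] += epsilon
        influence.append((predict(perturbed) - baseline) / epsilon)
    return np.array(influence)

x = np.array([1.0, 2.0, 0.5, 3.0])
print(explain(x))  # larger magnitude = that feature mattered more here
```

It’s crude (a local, one-feature-at-a-time answer, not a true account of the model’s reasoning), but it’s the intuition behind more serious explainability tools, and it hints at why researchers think some kind of interrogation is possible even when the model itself stays a black box.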

However, another article in MIT Technology Review this week provides a fun counterpoint. Titled ‘Deep Learning Is a Black Box, but Health Care Won’t Mind’, it argues that the healthcare industry, including regulators, won’t care if AI is a black box, as long as the results speak for themselves.

“In the case of black-box medicine, doctors can’t know what is going on because nobody does; it’s inherently opaque,” says Nicholson Price, a legal scholar from the University of Michigan who focuses on health law.
Yet Price says that may not pose a serious obstacle in health care. He likens deep learning to drugs whose benefits come about by unknown means. Lithium is one example. Its exact biochemical mechanism in affecting mood has yet to be elucidated, but the drug is still approved for treatment of bipolar disorder. The mechanism behind aspirin, the most widely used medicine of all time, wasn’t understood for 70 years.

Would you trust an AI more or less than an approved drug? Just imagine reading the leaflet with all those disclaimers. ‘This AI is statistically proven to work but we have no idea how’ 😕

With that said, in an email exchange today, one founder of a UK AI startup summed up the current state of play: ‘Frankly, it’s just that we are making so many incredible breakthroughs with machine learning that slowing down to work on the XAI problem simply doesn’t pay off’.

Or, to give you the tldr version, he says the problem with black boxes is that you can make money without solving them.

For now, at least.

Things I wrote

Huddly raises $10M to “reinvent the camera” with a computer-vision platform for video meetings

techcrunch.com

Huddly is a Norway-based startup that sells a camera targeting remote company meetings (or huddles) and is building out what it describes as a “computer-vision” platform to help managers glean better data from those meetings.

Telegraph Media Group acquires UK exam preparation app Gojimo

techcrunch.com

Gojimo, an app that helps U.K. high school students prepare for exams, has been acquired by Telegraph Media Group, the publisher of The Daily Telegraph and The Sunday Telegraph newspapers. Terms of the deal remain undisclosed.

AIDoc Medical raises $7M to bring AI to medical imaging analysis

techcrunch.com

We are probably still quite some way off from seeing Artificial Intelligence (AI) replace doctors, but there are already lots of proven use cases where AI is being used to augment the medical profession.

Banking app Pockit picks up further £2.9M as it readies new remittance service

techcrunch.com

Pockit, a mobile banking app that provides current account functionality and is targeting the U.K.’s “underbanked,” has picked up £2.9 million in further funding and will soon begin rolling out a remittance service to make it easier for users to send money abroad.

HR and employee benefits platform Hibob raises $17.5M led by U.S.-based Battery Ventures

techcrunch.com

Hibob, an HR and employee benefits platform for small to medium-sized businesses, has raised $17.5 million in Series A funding. Leading the round is U.S. VC firm Battery Ventures, with participation from Arbor Ventures and Fidelity’s Eight Roads Ventures.

Babylon Health raises further $60M to continue building out AI doctor app

techcrunch.com

Babylon Health, the U.K. startup that offers a digital healthcare app using a mixture of artificial intelligence (AI) and video and text consultations with doctors and specialists, has raised $60 million in new funding.

Flux, a fintech startup founded by ex-Revolut employees, wants to make paper receipts obsolete

techcrunch.com

Flux, a London-based fintech startup founded by three early employees of foreign exchange and banking app Revolut, is on a mission to make store receipts truly digital.

Online grocery platform Farmdrop raises £7M Series A led by Atomico

techcrunch.com

Farmdrop, a farmer-friendly online grocery platform based in the U.K., has raised £7 million in Series A funding. Leading the round is Atomico, the London VC fund founded by Skype co-founder Niklas Zennström.

Closing thought: It was the O’Hear wot won it

I’m joking, of course, and definitely not taking credit for this one. However, following my TechCrunch article calling out the lack of disability diversity reporting by the major tech companies, Slack has made good on its promise to include persons with disabilities in its most recent diversity report. My colleague Megan Rose Dickey has the scoop:

Tech companies rarely, if ever, include information about how many people with disabilities they employ. Today, Slack is changing things up. According to the company’s latest diversity report, 1.7 percent of its employees identify themselves as having a disability.
As TechCrunch’s Steve O’Hear noted, tech companies are generally hesitant to discuss disabilities. Slack, however, was rather open in its dialogue with O’Hear at the time about including that information in future diversity reports, as long as the company followed legal processes and employees were willing to share it. Good on Slack for following through.

I couldn’t have said it better myself, although, let’s be clear: 1.7 per cent is shockingly low. No wonder Silicon Valley doesn’t want to talk about it! 😎

Get in touch

Want to continue the conversation? Just hit reply to this email — I answer every single ITK email I receive.

Please forward this newsletter to friends and colleagues who might also enjoy it. More subscribers and better open rates make me happy.

Till next time,

Steve
