AI owned 2017, but just wait for 2018, says Iris

Denise Young 楊 玲 玲
Future Earth Media Lab
Jan 16, 2018

2017 was a big year for AI: the year it went fully mainstream as a topic of public debate. We sat down with Anita Schjøll Brede, co-founder of Iris.ai and a good friend of the Future Earth Media Lab. Iris.ai is the world's first AI-powered science assistant, semi-automating the process of finding relevant scientific literature. We asked her to recap the year's highlights and take a peek at what's next in this exciting field.

Q: What were the big breakthroughs for AI and science in 2017?

Anita: The first thing to mention is DeepMind's AlphaGo Zero and AlphaZero. Most people will be aware of AlphaGo, which in early 2016 surprised everyone, including its own researchers, by beating 18-time world champion Lee Sedol 4–1 at the ancient Chinese game of Go. In October 2017, the new algorithm AlphaGo Zero accomplished an even more stunning feat: it beat AlphaGo 100–0 (!), and it did so knowing only the basic rules of the game, whereas AlphaGo had been trained on hundreds of thousands of recorded human games. What's more, its successor AlphaZero, announced in December, learned both chess and Shogi (Japanese chess) within hours, at the same skill level. Why is this such a huge breakthrough? Because it means we'll probably need less human data, and human-annotated data sets are one of the main bottlenecks for most AI/ML projects.
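
To make "learning from only the rules" concrete, here is a deliberately tiny, purely illustrative self-play loop in Python. Everything in it (the three-slot "game", the policy table, the coin-flip outcome) is a placeholder of ours, and AlphaZero itself pairs self-play with Monte Carlo tree search and a deep network rather than a lookup table; the point is only where the training data comes from: the system playing itself, with no human games.

```python
import random

def legal_moves(state):
    # Stand-in "rules of the game": three slots, each playable once.
    return [m for m in (0, 1, 2) if m not in state]

def play_game(policy):
    # The agent generates its own training data by playing itself.
    state, history = (), []
    while legal_moves(state):
        moves = legal_moves(state)
        # Greedy choice, with random tie-breaking for unseen moves.
        move = max(moves, key=lambda m: policy.get((state, m), random.random()))
        history.append((state, move))
        state = state + (move,)
    return history, random.choice((1, -1))  # stand-in outcome signal

policy = {}
for _ in range(1000):
    history, outcome = play_game(policy)
    for state, move in history:
        # Credit each move with the outcome from its player's side.
        policy[(state, move)] = policy.get((state, move), 0.0) + outcome
        outcome = -outcome  # players alternate every ply
```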

A big breakthrough of 2017 in core AI development was Geoffrey Hinton's Capsule Networks, a new type of neural network built from "capsules" that could transform machines' ability to understand images (initially) by modelling how the components of an image relate to each other.

This means that even if an object appears in an unexpected configuration (e.g. a boat is upside down, or a statue is photographed from the side), the machine can still recognize it. This would be a major step up from conventional convolutional networks, and promises to reverberate through applied AI.
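
The core new primitive is small enough to show. Below is a NumPy sketch of the "squash" nonlinearity from the capsule networks paper (Sabour, Frosst & Hinton, 2017): each capsule outputs a vector whose length encodes the probability that an entity is present and whose orientation encodes that entity's pose, which is what lets the network reason about parts seen from unusual viewpoints.

```python
import numpy as np

def squash(s, eps=1e-8):
    """Capsule 'squash' nonlinearity (Sabour, Frosst & Hinton, 2017).

    Rescales a capsule's output vector so its length lies in [0, 1):
    the length encodes the probability that the entity exists, while
    the orientation encodes the entity's pose (position, rotation, ...).
    """
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

# A strong activation keeps its direction but is squashed toward
# unit length; a weak activation is pushed toward zero.
print(np.linalg.norm(squash(np.array([3.0, 4.0]))))  # ~0.96
print(np.linalg.norm(squash(np.array([0.1, 0.2]))))  # ~0.05
```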

There were a number of smaller breakthroughs in the medical field which, stacked together, are starting to point to a future with radically improved diagnostics. For example, this study of deep learning algorithms for detecting lymph node metastases in women with breast cancer found that "some deep learning algorithms achieved better diagnostic performance than a panel of 11 pathologists participating in a simulation exercise designed to mimic pathology workflow".

From an AI business perspective, the Chan Zuckerberg Initiative's acquisition of Meta, an AI-powered research search engine, was an important development. We're excited to see a larger shift in philanthropy towards scientific results and AI tools that help us make sense of them.

Access to data is of course core to any AI application, so from a data-openness and policy perspective we have been joyfully following the EU's Open Science initiatives. They are moving us in the right direction, although opening up academic research for free remains a painfully slow process.

Finally, we'd like to highlight the growing importance of ethics discussions, which look at how successful implementation of AI for human augmentation depends on awareness of, and agreement to, core values of trust, transparency and equality. We're starting to see disturbing examples of racist and sexist algorithms that are really no more than quantified reflections of attitudes we already carry, so we are encouraged by this debate and will continue to follow it closely.

Q: Of those things you’ve mentioned, were there any that came as a complete surprise?

Anita: On core technical breakthroughs, it's more about the timing and the detail. For example, AlphaZero came faster than anyone could have predicted. On Capsule Networks, Hinton had been thinking about the idea for nearly 40 years: "It's made a lot of intuitive sense to me for a very long time, it just hasn't worked well," he says. "We've finally got something that works well." The Meta acquisition, on the other hand, was a total surprise.

Geoffrey Hinton, the father of deep learning

Q: What about Iris? What were the highlights of your AI year?

Anita: As for Iris.ai, we're perhaps most thrilled, from a science perspective, that we submitted and presented our first two research papers: one on a component of our algorithm, our document similarity metric, and one indicating that our Exploration tool (freely available at the.iris.ai) enables teams of researchers to outperform teams using existing search tools when solving the same R&D challenge in the same time span.
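
As a generic illustration of what a document similarity metric computes, here is a minimal TF-IDF/cosine baseline in Python using scikit-learn. This is a textbook stand-in, not the metric from the Iris.ai paper:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Deep learning detects lymph node metastases in breast cancer.",
    "Neural networks match pathologists at metastasis detection.",
    "Token economies and blockchains for open science publishing.",
]

# Weight each word by how distinctive it is across the corpus
# (TF-IDF), then score every pair of documents by the cosine of the
# angle between their word-weight vectors: values near 1.0 mean a
# near-identical vocabulary profile, near 0.0 means little overlap.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
print(cosine_similarity(vectors).round(2))
```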

In addition to this, we researched, developed and launched the beta version of our next tool, the Mapping tool. It lets you take the corpus of documents identified in the Exploration tool (or any other corpus of 1,000–20,000 documents) and narrow it down to a precise, short reading list. The tool is modelled on the systematic review approach used in academia, but drastically reduces the time researchers need. We're already seeing around 85% precision and recall, which is not quite the academic requirement but more than good enough for industrial researchers or others "in a hurry".
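
For readers unfamiliar with the jargon: precision is the share of the generated reading list that is actually relevant, while recall is the share of all relevant papers that made it onto the list. A minimal sketch with made-up numbers:

```python
def precision_recall(shortlist, relevant):
    """Precision: fraction of the shortlist that is truly relevant.
    Recall: fraction of all relevant papers that made the shortlist."""
    hits = len(set(shortlist) & set(relevant))
    return hits / len(shortlist), hits / len(relevant)

# Hypothetical corpus: 100 truly relevant papers, and a 100-paper
# shortlist that catches 85 of them, giving 85% on both measures.
shortlist = [f"paper_{i}" for i in range(100)]
relevant = [f"paper_{i}" for i in range(15, 115)]
p, r = precision_recall(shortlist, relevant)
print(f"precision={p:.0%} recall={r:.0%}")
```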

The Iris.ai team

We also closed our seed funding in December, which gives us both a bit more breathing space and more running speed. We're very excited that Nordic Impact and a number of other impact-focused investors believe in what we do enough to bet their money on us!

We spent the first few months of 2017 in London at Founders Factory before moving to the GTEC Lab in Berlin, and got accepted to the Creative Destruction Lab in Toronto. All these ecosystems offer a variety of networks, and it’s lovely to be allowed to tap into them all. We even got to meet the Canadian prime minister (yes, the one and only) and pitch Iris.ai to him!

Anita of Iris.ai with Canadian Prime Minister Justin Trudeau at the Creative Destruction Lab in Toronto

Q: What’s next for AI and science in 2018? And what’s Iris planning for the year ahead?

Anita: We are tackling scientific fact and knowledge, which are in many ways related to "fake news", yet in many ways require very different solutions. We're focusing on the core of scientific knowledge: machine understanding of academic research papers and other scientific-language texts.

On the AI front, we are now focusing our research efforts on what we call pseudo-hypothesis extraction: breaking each research paper down into a problem-solution-evaluation-results structure and making a connected fingerprint of each of these components. Initially this means we can show you not only that a paper is related to your research problem, but which section of the paper is relevant. Eventually we can let you search only for, say, a related method or a related problem. And ultimately, beyond 2018, we can start seeing all papers' hypotheses in connection with each other.
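
As a rough sketch of the kind of data structure this implies (our illustration, not Iris.ai's implementation), each paper could carry one fingerprint vector per component, and search could then match on a single component, such as the method alone:

```python
from dataclasses import dataclass, field

@dataclass
class PaperFingerprint:
    # One vector per structural component, following the
    # problem-solution-evaluation-results breakdown described above.
    # The vectors would come from whatever text-embedding model is
    # in use; here they are just lists of floats.
    problem: list = field(default_factory=list)
    solution: list = field(default_factory=list)
    evaluation: list = field(default_factory=list)
    results: list = field(default_factory=list)

def section_similarity(a, b, section):
    """Cosine similarity between two papers on one component only,
    e.g. compare just the methods ('solution') or just the problems."""
    va, vb = getattr(a, section), getattr(b, section)
    dot = sum(x * y for x, y in zip(va, vb))
    na = sum(x * x for x in va) ** 0.5
    nb = sum(x * x for x in vb) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

# Two hypothetical papers tackling similar problems with different
# methods: high similarity on "problem", zero on "solution".
p1 = PaperFingerprint(problem=[0.9, 0.1], solution=[1.0, 0.0])
p2 = PaperFingerprint(problem=[0.8, 0.2], solution=[0.0, 1.0])
print(section_similarity(p1, p2, "problem"))   # ~0.99
print(section_similarity(p1, p2, "solution"))  # 0.0
```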

We're also focusing on delivering our tools to our commercial clients, mainly corporate R&D departments that want to be more efficient while missing out on less knowledge. We can deploy the tools on the content they already pay to access and on their internal research content, in addition to the Open Access content the tools come with.

Then there’s the biggest project of 2018, which we announced at the end of last year. We believe that science deserves openness and transparency. To make that happen we need to radically change the economy of research — and we can now do that through the deployment of new technology.

The question is simple: what would the world of science look like if all researchers were paid to publish high-quality research? To address the challenge, we're creating a blockchain-based community where all contributors are paid in tokens for their contributions and can spend those tokens on the Iris.ai tools and on open-source, community-built tools. The tokens increase in value for community members as corporates pay for access, and, most importantly, everyone has a voice.

We’re designing the economic model for this right now, deciding on the governance structure, and looking at how we can empower thousands of researchers around the world to take ownership of building and unbiasing an AI for science — and the scientific results and papers themselves.

It’s a massive undertaking, and a risky one, but as one of our favorite advisors says: “Whether you go small or go big, it’s going to be really difficult. So you might as well go big”.
