Learnings from the 7th Research and Applied AI Summit (RAAIS)

Anna Debenham
9 min read · Jul 12, 2023


A couple of weeks ago, I attended RAAIS — a summit covering AI both from the research and applied AI perspectives. The summit was organised by Nathan Benaich of Air Street Capital, and took place at Google’s London HQ.

A photo of Nathan presenting, with the backdrop of the RAAIS logo.
Photo credit: RAAIS

There was a good mix of talks, with some focussing heavily on research, and others more application-based. I obviously found the application ones more relevant, but the research ones were interesting too. The sessions are available in full on YouTube.

Here are my takeaways from the day’s talks.

AI to make self-driving cars better than humans

The first talk was by Oliver Cameron, on self-driving cars. He co-founded the self-driving car company Voyage, which was acquired by Cruise in 2021, where he became VP Product.

He started with the challenge of designing self-driving cars early on at Voyage, as it is such a broad market and problem space. To simplify this, they decided to exclusively focus on a specific audience: senior citizens. By doing so, they were able to work more efficiently, as they were not bogged down with trying to make it work for every possible use case.

During the Q&A, a member of the audience asked if there had been any malicious use of self-driving cars, such as using them as getaway vehicles for bank heists! Oliver was not aware of this, but highlighted that there is a whole set of infrastructure behind the scenes that works with law enforcement. He was also asked about the trolley problem, to which he responded that in all the millions of miles driven by self-driving cars, they had not yet encountered this issue. He highlighted that self-driving cars have been proven to cause fewer accidents than human drivers.

Watch Oliver’s talk.

AI to make safer batteries

Sid Khullar from Northvolt discussed his team’s work with AI in detecting flaws in the engineering process. Flaws in batteries can be catastrophic, causing electric cars to combust and sparking fires that can spread throughout a home. According to Sid, “In our industry, there’s a thin line between creating energy and creating bombs”.

A photo of Sid presenting on stage to an audience.
Photo credit: RAAIS

Sid demonstrated how Northvolt has built highly sensitive detectors in their giga-scale battery factories that use AI to identify the tiniest of flaws in battery cells. These calculations must run very quickly as the battery components move rapidly along a conveyor belt, requiring significant processing power. Additionally, they use AI to predict battery efficiency at different temperatures, reducing the time required for testing.

Watch Sid’s talk.

AI to design new medicines

Michael Bronstein, DeepMind Professor of AI at the University of Oxford, gave a talk about how his team is using AI to design geometric protein structures that bind successfully to each other, for the creation of new medicines. These processes require a huge amount of computational power, as the search space is on the scale of the number of atoms in the universe. However, effective use of AI, combined with humans narrowing down the number of variables, is significantly reducing the time it takes to develop vaccines. This type of technology was used in the development of Covid vaccines, shifting the bottleneck from drug development to clinical trials, where AI is also being used to help reduce the time it takes for a drug to come to market.

A photo of Michael presenting on stage.
Photo credit: RAAIS

Watch Michael’s talk.

AI to make translation available to more languages

Research Scientist Angela Fan from Meta AI presented her work on expanding the number of languages that Meta’s translation systems support (currently in the hundreds, out of the thousands of languages spoken worldwide).

A photo of Angela presenting on stage to an audience.
Photo credit: RAAIS

She discussed the difference between “high-resource” languages (such as English, which has a lot of training data available on the internet) and “low-resource” languages (such as Bengali, which, despite being spoken by millions of people, has comparatively little training data online). Since so many languages are currently low-resource, they are poorly covered by translation tools. And the challenge is not only translating from low-resource to high-resource languages, but from low-resource to low-resource languages, where the accuracy gets much lower. AI is helping bridge these gaps. I loved this talk, and it made me appreciate just how much training data is required to get high quality output.

Watch Angela’s talk.

AI to help with customer support

My favourite talk was given by Fergal Reid, Senior Director of Machine Learning at Intercom, because it was the most relevant to my area. Fergal talked about Intercom’s work on Fin.

A photo of Fergal presenting on stage to an audience.
Photo credit: RAAIS

Fergal said that around 4% of the UK workforce works in customer support, so improvements here can have a massive impact.

Before Fin, their explorations into AI included a tool to help users craft responses to customers. This allowed users to generate a summary of the issue the customer was experiencing, write a response and have the AI change the tone of voice, and expand text from shorthand so the support rep doesn’t have to spend time crafting a well-worded response.

Their other experiment was a tool that helped users automate some of the most common responses. However, Fergal indicated that usage of this was low as it required work on the user’s end. This was essentially productising their LLM interface as a backend training UI, but it put a lot of the setup work on the user. Fin can answer questions without this manual setup.

I asked Fergal what the gap was between an Intercom customer enabling Fin and it being able to answer questions, thinking that it required context specific training data in the form of responses from customer support. He explained there is no cold start, as the bot has access to a customer’s knowledge base, so can provide responses based on existing documentation as applied context. This is smart, as it means customers can start using Fin from day 1 without having to wait for it to build up context from message history.

They are opinionated about what works well when it comes to working on AI features. To ensure success, they tasked a small, senior machine learning team with building Fin. They ensured that the team was laser-focused on the problem (making customer support more efficient) and empowered to ship, going for a breadth-first rather than depth-first initial approach. They set a deadline for the first customer trial, to get it into the hands of real customers quickly. They also didn’t over-optimise on making it cheap — instead focusing on making it work first, before making it efficient.

One of the things I love about their approach to this is that they are billing on value — only charging for successful responses. This creates a compelling incentive for Intercom to ensure Fin is as accurate and helpful as possible, as it correlates directly with revenue. They currently bill $0.99 per resolution, which Fergal admits “only just covers cost”. Intercom claims that Fin is able to resolve 50% of queries, so increasing that percentage is in their best interest.

An audience member asked whether customers were willing to pay for Fin. He explained that cost is a big deal, but so is the cost of a human answering the questions. He stressed that Fin wasn’t meant to replace customer support, but ensure they’re only having to answer questions that require additional support, so Intercom’s customers can give the users that need it more attention.

As language models get smarter, they are able to derive new solutions based on context. To evaluate the performance of Fin as they make changes, they measure hard resolutions and soft resolutions. A hard resolution is when a user explicitly confirms that the response helped them solve their issue, such as by clicking a “yes, that helped” button rather than the “let me talk to a human” button. A soft resolution is when it is inferred that the user’s issue was resolved — they may have left the chat without confirming, but didn’t ask to talk to a human. Fergal’s team A/B tests their changes against the baseline hard and soft resolution metrics and checks for any divergence in these numbers.
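The hard/soft distinction can be sketched as a simple classification of chat outcomes. This is my own toy illustration, with made-up field names, not Intercom’s code:

```python
# Classify finished chat sessions into hard resolutions, soft resolutions,
# and escalations, then compute the rates used as A/B baselines.

def classify_outcome(chat: dict) -> str:
    """Classify a finished chat session.

    - "hard": user explicitly confirmed the answer helped.
    - "escalated": user asked to talk to a human.
    - "soft": user left without confirming, but never escalated.
    """
    if chat.get("clicked_that_helped"):
        return "hard"
    if chat.get("asked_for_human"):
        return "escalated"
    return "soft"

def resolution_rates(chats: list[dict]) -> dict:
    """Compute hard/soft/escalation rates over a set of chats."""
    counts = {"hard": 0, "soft": 0, "escalated": 0}
    for chat in chats:
        counts[classify_outcome(chat)] += 1
    return {k: v / len(chats) for k, v in counts.items()}

chats = [
    {"clicked_that_helped": True},
    {"asked_for_human": True},
    {},  # left without confirming: counted as a soft resolution
    {"clicked_that_helped": True},
]
print(resolution_rates(chats))  # {'hard': 0.5, 'soft': 0.25, 'escalated': 0.25}
```

An A/B test would then compare these rates between the control and variant groups and flag any divergence.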

Watch Fergal’s talk.

AI to broaden access to medical diagnosis

A photo of Vivek presenting on stage to an audience.
Photo credit: RAAIS

Vivek Natarajan from Google Health AI talked about the work his team is doing to make health information available to people with low access to medical staff. This is another great example of where AI is being used not to replace humans, but as a polyfill where a human diagnosis isn’t available, or as a co-pilot for medical trainees.

The testing that Google Health AI goes through is rigorous, with responses verified by professionals in testing phases. To reduce hallucination, they perform prompt tuning, whereby the LLM’s weights are frozen and soft-prompt vectors are learned through backpropagation. This has resulted in Google Health’s AI performing at an “expert” level on the US Medical Licensing Exam.
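To make the frozen-model-plus-soft-prompt idea concrete, here is a toy numerical sketch (hypothetical shapes and a trivial loss, nothing like the real system): the model’s own parameters never change, and only a small matrix of soft-prompt vectors, prepended to the input embeddings, receives gradient updates.

```python
# Conceptual sketch of prompt tuning: train ONLY the soft prompt,
# leaving the model's embedding table untouched.
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 8        # embedding dimension (toy size)
N_SOFT_TOKENS = 4    # number of learnable soft-prompt vectors

# Frozen model parameters: never updated during tuning.
frozen_embeddings = rng.normal(size=(100, EMBED_DIM))  # vocab of 100 tokens

# The only trainable parameters: the soft prompt.
soft_prompt = rng.normal(size=(N_SOFT_TOKENS, EMBED_DIM))

def forward(token_ids: np.ndarray, soft_prompt: np.ndarray) -> np.ndarray:
    """Prepend the soft-prompt vectors to the frozen token embeddings."""
    token_embeds = frozen_embeddings[token_ids]
    return np.concatenate([soft_prompt, token_embeds], axis=0)

# One illustrative gradient step on a toy loss (the mean of the outputs):
token_ids = np.array([5, 17, 42])
inputs = forward(token_ids, soft_prompt)
grad = np.ones_like(soft_prompt) / inputs.size  # d(mean)/d(soft_prompt)
soft_prompt -= 0.1 * grad                       # update ONLY the soft prompt
# frozen_embeddings is untouched — that is what "frozen" means here.
```

Because only a few vectors per task are learned, the same frozen model can be adapted to many tasks cheaply, which is part of the appeal of the technique.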

Watch Vivek’s talk.

AI to advance genomic sequencing

Eric Lander from the Broad Institute was interviewed by Nathan Benaich (Founder of RAAIS and a Partner at Air Street Capital).

A photo of Nathan and Eric sitting down for a chat.
Photo credit: RAAIS

Eric was a principal leader of the Human Genome Project (the world’s largest collaborative biological project), and it was fascinating to hear about how that data is now being used in research. As technology improves, the speed of discoveries is also accelerating.

AI Panel takeaways

One of the speakers couldn’t make it to the event, so there was an unplanned panel which covered a range of different topics, and was one of the highlights of the event. We had Siddharth Khullar from Northvolt who we heard from earlier in the summit, Alex Dalyac from Tractable, and David Healey from Enveda.

A photo of Nathan, Sid, Alex, and David sitting down for a chat.
Photo credit: RAAIS

Here were my takeaways from the panel:

Startup investment

  • Pair researchers with commercial CEOs, as different skills are required to make something awesome and to successfully bring it to market.
  • Speed isn’t a differentiator anymore because everyone is building stuff with AI. What’s valuable is access to customers.

Open source

  • Open source startups need to build a moat, such as charging for additional services like storage.
  • Hugging Face was given as an example of an open source company that’s able to build a viable business. As well as the open source tooling it offers, it also has optional, paid-for infrastructure support, as well as a compelling licensing model.

Training data

  • Getting training data can be a big challenge. One of the panellists described how they needed imagery data for auto damage insurance, but the biggest player wouldn’t provide it. They ended up approaching the number 2 and 3 competitors and asking them for the data, which, combined, was enough to get them where they needed to be.
  • If a company is refusing to make the data available due to competition concerns, one way round this is to offer to only make the service available in countries where they don’t compete.

Product-led Growth

  • One of the reasons for AI’s recent explosion is that tools like ChatGPT and Midjourney used Product-led Growth to reach a very wide audience. They saw exponential growth as users shared the outputs they created, and this was encouraged. This created a flywheel.
  • In contrast, a sales-led approach will attract companies that don’t want to share the output, or even tell people they’re using it out of concern their competitors will learn about it and gain an advantage.

Watch the AI panel.

Closing notes

The event really showcased the innovation that AI is bringing to a range of different applications and industries, and it was also a nice opportunity to meet some of the people harnessing the technology. I’m already looking forward to next year’s event! Speaking of which, Nathan has a newsletter if you’d like to get a heads up when tickets are available.
