Meet RONNY FEHLING, one of the most energetic tech leaders in #AI

Christoph Auer-Welsbach
Applied Artificial Intelligence
7 min read · Dec 12, 2017
Ronny Fehling

Ronny Fehling is one of the most energetic tech leaders in Artificial Intelligence. Now Associate Director of GAMMA Artificial Intelligence at Boston Consulting Group, he was formerly head of Cognitive Computing and AI at Airbus, where he drove company-wide transformations using data science and AI, and he has also held leadership roles at Oracle and the startup Cognito.co.

He believes AI is on the cusp of industrial capability and, despite the hype, can already be deployed at scale in certain settings. I caught up with him to pick his brains about the state of the sector now, and what the future holds.

1. Where does your passion for AI come from?

Growing up in Tanzania, I didn’t have much exposure to high tech. But a Commodore C64 sparked something in my brother and me, and we were soon combining Lego Technic with the technology that controlled my car. I went on to read Maths, but it was a course called Complexity Theory in Computer Science that hooked me. I had the honour of studying at graduate level under the great Marvin Minsky, which confirmed my passion for AI.

It was in 2004, while working on a project for NASA using radio-frequency identification and sensor computing, that I realised there was this new kind of data that needed to be contextualised before it could be interpreted. This data gave rise to entirely new architectures like the NoSQL movement. Then, in 2011, deep-learning breakthroughs ushered in the AI Spring, which I’ve been watching, and participating in, with absolute fascination.

DEFINING AI

2. You’re known for being critical of the term AI, and (in)famously said that working in this sector is “to work in a profession in a soup of terms with no meaning, yet nobody wants to point this out”. How should we define AI?

Haha. There is a lot of hype around the term AI. While there is no single definition, AI is often described as the attempt to create intelligent machines capable of learning, problem-solving and decision-making. While this sounds intriguing, I find this formulation more useful:

AI enables a new interaction model for exploring complex and large data sources (structured and unstructured: text, images, video, speech, audio, sensors) with conflicting answers, ambiguous evidence, and hard-to-automate processes. The ability to learn from data is what makes AI powerful. No more hardcoding every single possible behaviour; a cognitive system learns and improves with data (and outcomes). We want to create algorithms that can learn, adapt, interact and understand, so they can carry out tasks in a way that we would consider ‘smart’.

Compared to more traditional machine learning, I think AI enables us to abstract concepts from raw data that lie beyond the data itself: interpreting transactional data to deduce intention or sentiment, for example. Together with a strong outcome-based model interacting with the real world, we will be able to create vicarious forms of machine intelligence that can not only help us with the task at hand, but also propose, based on other experiences, alternative methods that might get us where we need to go faster.

MAKING AI RELEVANT NOW

3. A lot of future-gazing goes on in the AI space. What is the key to applying the technology successfully today, and how do we do so in zero-fault-tolerant areas like the aviation industry?

AI is currently in a very narrow, specialised state, which means it is only intelligent within the domain boundaries defined for it. But it can help us analyse concepts above the data, concepts which capture the ‘knowledge’ humans typically apply to data. What we are trying to do currently is start encoding such knowledge into machines.

Unlike the rule-based systems of the past, which proved too complex to build and maintain, we now try to let the system learn these concepts by itself, drawing on external data sources. We can, for example, train a machine to learn the concepts of a technical domain by having it perform entity extraction on Wikipedia articles, as in the sketch below. Armed with that, it can recognise these entities in enterprise data and put them into context. The result is a more flexible, less deterministic knowledge representation.
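
To make that concrete, here is a minimal sketch of generic entity extraction using the open-source spaCy library. This illustrates the technique, not Fehling’s actual pipeline: the model name and the example sentence are assumptions.

```python
# Minimal entity-extraction sketch (illustrative only, not Fehling's pipeline).
# Assumes spaCy and its small English model are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # general-purpose model; a domain-tuned one would do better

# Hypothetical text standing in for a Wikipedia article or an enterprise document
text = ("The A350 entered service with Qatar Airways in January 2015, "
        "powered by Rolls-Royce Trent XWB engines.")

for ent in nlp(text).ents:
    # Each entity carries a label (ORG, DATE, PRODUCT, ...) that puts it in context
    print(ent.text, "->", ent.label_)
```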

AI is also very good at finding correlations and similarities in highly complex, multidimensional problems — systems where outcomes are determined by an interplay of hundreds of variables so complex that traditional, deterministic model-based systems tend to break down.

But AI can only be as good as the data it is fed. If the data cannot be correlated with the observed outcome, AI systems cannot reliably be used in mission- or life-critical systems. The certification of AI systems in operational settings is still poorly understood since, typically, machines have been certified as deterministic systems, with humans covering the remainder. I expect that, as AI systems increasingly develop ‘explain’ functions, certification requirements will evolve alongside them.

AI mines data in such a way that it can find anomalies or safety issues that might otherwise have been overlooked. But it is also prone to what are commonly referred to as false positives: it might flag an event as potentially critical even though it isn’t. People are much better at discriminating critical events, so these are cases where humans and AI can work well together.
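
As a toy illustration of that trade-off (every number below is invented), this sketch scores synthetic events and shows how any alert threshold that catches the truly critical events also hands humans a pile of false alarms to screen out.

```python
# Toy illustration of the false-positive trade-off behind human+AI triage.
# All data here is synthetic; only NumPy is required.
import numpy as np

rng = np.random.default_rng(1)
# 990 normal events with low anomaly scores, 10 truly critical ones with high scores
scores = np.concatenate([rng.normal(0.2, 0.1, 990), rng.normal(0.7, 0.1, 10)])
labels = np.concatenate([np.zeros(990), np.ones(10)])  # 1 = truly critical

for threshold in (0.4, 0.5, 0.6):
    flagged = scores > threshold
    caught = int((flagged & (labels == 1)).sum())
    false_alarms = int((flagged & (labels == 0)).sum())
    print(f"threshold {threshold}: caught {caught}/10 critical events, "
          f"{false_alarms} false alarms for humans to review")
```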

Panel: AI in Healthcare and Other Applications with Ronny Fehling

4. Tell us about cleaning databases and making information more algorithm-friendly. Are those the biggest challenges when it comes to putting AI into production over the coming years?

AI cannot work well with dirty data, and cleansing data currently accounts for 70–85% of a data scientist’s work. For an AI strategy to be effective, it’s important not only to have clean data, but also to have a functioning, scalable big-data implementation.

Having said that, I strongly believe that AI can actually help with data cleansing. When humans cleanse data, we try to match lexicographically or structurally different data sets, using general and domain-specific knowledge. But AI can learn about the data. It can, for example, recognise a wrong value or type in a particular data set of social security numbers.

AI can already detect correlated fields and use them for error-checking, as the sketch below illustrates. Moreover, data cleansing is generally not deterministic and offers little transparency. AI can not only make the cleansing operation more transparent (you can tune the confidence threshold), but also help annotate and tag the data for downstream processing.
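
As a sketch of what learned error-checking can look like, the toy example below uses scikit-learn’s IsolationForest to flag rows that break a learned correlation between two fields. The data, field semantics and threshold are all invented for illustration.

```python
# Toy sketch of learned error-checking for data cleansing (illustrative only).
# Requires: pip install numpy scikit-learn
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical records with two correlated fields, e.g. quantity and total price
quantity = rng.integers(1, 100, size=500).astype(float)
price = quantity * 9.99 + rng.normal(0, 5, size=500)
records = np.column_stack([quantity, price])

# Inject a few dirty rows that break the quantity/price correlation
records[:3, 1] = [0.0, 9999.0, -50.0]

model = IsolationForest(random_state=0).fit(records)
scores = model.score_samples(records)  # lower score = more anomalous

# The confidence threshold is tunable: flag the most suspicious 1% for human review
threshold = np.quantile(scores, 0.01)
print("rows flagged for review:", np.where(scores <= threshold)[0])
```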

ETHICS AND THE FUTURE

5. What are the ethical challenges for executives and founders applying AI?

I think it’s important here to distinguish between the current state of AI and what some term artificial general intelligence (AGI), which, in essence, refers to completely self-learning, general-purpose machines. Putting AGI aside, I’ll focus on what I think is important for AI right now. Of course you should not use AI for bad things, but that seems obvious and is no different from using any other algorithm to do bad things.

But we have to understand the potential danger of biased data. Ultimately, AI has to be trained by example, so an AI system depends not only on the data it is fed, but also on the biases in that data. If a significant portion of the data or outcomes you feed the AI is biased, faulty or fake, the AI will be faulty (think of Microsoft’s Tay). This is where the ethical debate has to be: less on the AI itself, and more on the (selective) power of data bias.

If AI systems are used in mission- or even life-critical settings, we need to make sure that the data we feed them is representative and, as far as possible, bias-free.

Today, AI can already help us build recommendation systems that reduce the cognitive workload of human workers by providing the right information at the right time and in the right context. But as these systems become more complex and the interactions between the various data points become harder to examine, the danger of biased data grows. If an AI system displays a recommendation and the human cannot detect a subtle bias in the underlying data, then by following the recommendation they might inadvertently reinforce that bias in the machine.
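
A deliberately simplified simulation of that loop, with every number invented for illustration: two equally good options, a recommender with a tiny initial bias, and a human who usually follows the recommendation.

```python
# Deliberately simplified simulation of the bias-reinforcement loop described above.
# Every number is invented for illustration.
import random

random.seed(42)
scores = {"A": 0.52, "B": 0.48}  # two equally good options, tiny initial bias towards A

for _ in range(1000):
    recommended = max(scores, key=scores.get)
    # A human who cannot see the bias follows the recommendation 90% of the time
    choice = recommended if random.random() < 0.9 else min(scores, key=scores.get)
    # The system treats the choice as evidence and nudges that option's score up
    scores[choice] += 0.001

print(scores)  # the tiny initial bias has been amplified, not corrected
```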

We already see examples of data biases being purposefully introduced and reinforced by intelligent algorithms. Fake news, for instance, is seeded in various forums through bots, then picked up by the indexing algorithms of Facebook, Google, Twitter and so on. As bad actors bombard those algorithms with fake accounts that appear to share these stories over and over, the stories rise up the indexes and find an ever-growing audience, continuously reinforcing the bias towards them.

This problem is reflected in the larger context of the anthropomorphisation of AI, i.e. attributing human emotions to the AI, which further contributes to data bias. We need to be constantly on guard against biasing effects, and must attempt to design systems that can counter, or at least alert us to, such outcomes.

Ronny Fehling at WorldSummit.AI 2017 in Amsterdam

6. In the next 18 months, how can we all work to ensure that AI’s impact on business and humanity is a positive one?

No matter what we do, AI will continue to evolve, and it will have a profound impact on our lives. Governments must invest in this technology, enable its development, and fund the education of young people, as well as of the existing workforce, so that we can learn to deal with the societal changes AI will prompt.

AI will challenge a lot of existing jobs, and an education system that has endured for centuries. We will have to accept that change will be so fast that, rather than simply completing a degree, we will have to keep up with new information and developments throughout our entire working lives.

The looming risk is that the gap that already exists in education will widen, causing more friction in society. We must strive to reduce that inequality by offering education in these developments to all employees and pupils so that they can adapt and prepare.


Christoph Auer-Welsbach
Applied Artificial Intelligence

Venture Partner @Lunar-vc | Blog @ Flipside.xyz | CoFounder @Kaizo @TheCityAI @WorldSummitAI | Ex @IBM Ventures