Turning the Black Box Transparent in Sydney
By Maksud Ibrahimov, Data Scientist, QuantumBlack Sydney.
“I’ve been making business decisions for decades. Why would I trust a machine learning model, built by data scientists with no clue about my industry?”
Anyone working in advanced analytics will have faced this question before, and it is often a fair one. Machine learning algorithms can transform a business's fortunes and identify opportunities that humans simply cannot. But the most complex machine learning models lack explainability: they are simply too complicated for many in the business to trust. This can hinder adoption, weaken performance oversight and transparency, and heighten the risk of prejudiced outputs.
So how can data scientists tackle Explainable AI (XAI), lift the lid on black box algorithms and ultimately cement trust with the businesses we produce them for? This was the theme of QuantumBlack’s inaugural Sydney Advanced Analytics Meetup on Friday 21st February.
We welcomed more than 50 attendees from a range of businesses and universities to our newly launched Sydney Experience Studio for an evening of idea exchange and networking amongst advanced analytics professionals. This was the first in a series of events tackling practical, high-value topics — issues that are explored less frequently than high-level theoretical concepts, but that can be grasped and applied almost immediately to yield results.
The evening began with a line-up of industry experts sharing their thoughts on enhancing data transparency in business partnerships. Our own Tim Fountaine, Head of QuantumBlack Australia, began proceedings by talking about our heritage in Formula One.
Tim was followed by QuantumBlack’s Maksud Ibrahimov and George Mathews and McKinsey Digital’s Rishni Ratnam, who delved into Explainable AI, the commercial importance of trust and how to practically implement more transparency with frameworks that explain complex algorithms, such as LIME and SHAP.
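To give a flavour of how these frameworks work, here is a minimal sketch of the core idea behind LIME: perturb the input around one instance, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients act as a local explanation. This uses only NumPy rather than the LIME or SHAP libraries themselves, and the `black_box` function, kernel width and sample counts are all illustrative choices, not anything presented at the event.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "black box": a nonlinear function of three features.
def black_box(X):
    return X[:, 0] ** 2 + 3 * X[:, 1] - 0.5 * X[:, 2]

# The single instance whose prediction we want to explain.
x0 = np.array([1.0, 2.0, 3.0])

# 1. Perturb the instance with small Gaussian noise.
X_pert = x0 + rng.normal(scale=0.1, size=(500, 3))
y_pert = black_box(X_pert)

# 2. Weight perturbed samples by proximity to x0 (an RBF kernel).
dists = np.linalg.norm(X_pert - x0, axis=1)
weights = np.exp(-(dists ** 2) / 0.02)

# 3. Fit a weighted linear surrogate via least squares on
#    sqrt(weight)-scaled rows; beta[1:] are the local attributions.
A = np.column_stack([np.ones(len(X_pert)), X_pert])
sw = np.sqrt(weights)
beta, *_ = np.linalg.lstsq(A * sw[:, None], y_pert * sw, rcond=None)

# Near x0 the attributions should approximate the local gradient
# of black_box, roughly (2, 3, -0.5).
print(np.round(beta[1:], 1))
```

The surrogate is honest only locally: move x0 and the attributions change, which is exactly the point — the explanation describes the model's behaviour around one decision, not globally.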
We were delighted to host the evening’s key speaker, Dr Roman Marchant from the University of Sydney’s Centre for Translational Data Science. Roman’s research explores how data science can be applied to social sciences, with a particular focus on criminology and on understanding and predicting criminal behaviour. His area of expertise is Sequential Bayesian Optimisation (SBO), a novel probabilistic method for finding the optimal sequence of decisions that maximises a long-term reward.
Roman’s session emphasised the importance of models that are explainable by design, and argued that collaborating with domain experts throughout development is crucial to achieving this. He was also kind enough to explore Bayesian techniques that can generate models delivering both transparency and flexibility.
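As a small, generic illustration of that transparency-plus-flexibility combination (not Roman's own models), consider conjugate Bayesian linear regression: the posterior mean gives interpretable coefficients, and the posterior covariance says how much to trust each one. The data, noise variance and prior variance below are all made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y = 2*x1 - x2 + noise (a hypothetical example).
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=100)

sigma2 = 0.25   # assumed observation-noise variance
tau2 = 10.0     # prior variance on each coefficient

# Conjugate posterior over the coefficients:
#   cov  = (X^T X / sigma2 + I / tau2)^-1
#   mean = cov @ X^T y / sigma2
cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(2) / tau2)
mean = cov @ X.T @ y / sigma2

# Interpretable point estimates plus per-coefficient uncertainty.
print(np.round(mean, 1), np.round(np.sqrt(np.diag(cov)), 2))
```

Unlike a post-hoc explanation bolted onto a black box, everything here is explainable by construction: each coefficient is a quantity a domain expert can read and challenge directly.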
He finished by exploring real-world examples of XAI, including an interpretable model for crime occurrence. Some of these examples are currently being used by government agencies in New South Wales, and our audience found both the advice and context fascinating.