Scaling AI: Why Trust Matters

Fin-ML

--

Manuel Morales, Rheia Khalaf, Dominique Payette

This article was written at the end of February 2020, just a week before the COVID-19 pandemic started putting the world on hold. A few weeks later, the article is as valid as it was before the crisis, if not more so. In today’s unprecedented times, AI technology can play an important role in the fight against the spread of the pandemic. There is a sense of urgency, and innovative ideas are addressing the immediate challenges. In a data-driven world, where people leave a digital footprint with almost every one of their interactions, we are in a position to face these challenges through innovation and data technology. A thorough and efficient governance framework is required to draw up guidelines and safeguards, so that as we fight the pandemic with data-driven solutions and AI technologies, we do not transgress principles that are dear to our societies. In today’s emergency context, the life and safety of individuals in the short term might take precedence over privacy and confidentiality concerns, presenting another extremely important challenge. More than ever, a framework is necessary to ensure that AI does not cross any boundaries while also being explored to its full potential.

Montreal, Canada. October 2019. As we were about to inaugurate a panel on AI Governance in Financial Services, anxiously watching the room fill up beyond capacity, it dawned on us just how central governance is to large-scale AI deployments in the financial sector. It spoke to us of an ecosystem that was hitting a roadblock in its AI journey and longing for a comprehensive governance framework: one that includes reflections on ethics, fairness, transparency, and explainability at every stage of an AI-based project, from design to deployment. Without it, financial institutions would not only face potential issues with regulators but would also be putting at stake a very precious asset: trust.

We would like to tell the story of the responsible AI focus in Montreal from our unique vantage points, as three professionals coming from different backgrounds and working from different perspectives on the governance of AI technologies. This story is representative of how the Canadian AI ecosystem comes together to build trust in AI, and of how Canada, and in particular Montreal, became a fast-growing AI hub worldwide.

Our panel on AI Governance in Financial Services, October 2019

The Canadian Ecosystem: Truly Collaborative

Montreal, with its five universities, has for more than a decade ranked among the top university cities in North America. Already a major center of research excellence in operations research, Montreal found itself thrust onto the world scene following major breakthroughs in AI research. Around 2010, applications of AI in image recognition, language understanding, and translation began to capture the public’s collective imagination. These were made possible by research in deep learning, a field that had found its stronghold in a research laboratory at the University of Montreal: the Montreal Institute for Learning Algorithms, Mila. This was the spark that ignited and shaped the Montreal ecosystem. Since then, traction around Montreal’s high-quality research has increased, in turn attracting the first private investments onto campus. Ubisoft, Google, and Facebook not only established research laboratories in Montreal, but also invested heavily in fundamental research carried out by local universities.

The federal and provincial governments also took notice, and strategic investments followed. For instance, the Institute for Data Valorization, IVADO, was created in 2016 thanks to CAN$200 million in public and private funding, supported by three of Montreal’s universities: Polytechnique Montréal, the University of Montreal, and HEC Montréal. IVADO was a response to the increasing need to bridge the gap between academia and industry and to create an effective technology transfer mechanism that would allow research and talent from universities to be integrated quickly into the economy. Similarly, with funding of over CAN$100 million in 2017, Mila had the means to transform itself and take on a more active role as a technology transfer organization focused on deep learning applications.

Around that time, Canada became the first country to announce a national AI strategy, backed by a CAN$125 million investment. This strategy has two pillars: advancing research and innovation in AI, and examining its broad societal implications.

Financial institutions were also preparing their own digital transformations and quickly started to develop collaborations with universities. It was clear from the onset of these initiatives that there was a shortage of finance experts and professionals with the right complementary skills in machine learning. In this context, Manuel Morales (one of the co-authors of this piece), Associate Professor of Mathematics & Statistics at the University of Montreal, led an initiative to train the next generation of financial and business intelligence professionals. Manuel assembled researchers from six universities across the country to create the Fin-ML network (referring to Machine Learning in Finance), with funding from the Natural Sciences and Engineering Research Council of Canada and from IVADO, and support from major corporations and regulators. It was the first pan-Canadian initiative focused on the financial sector, and it aims to provide training activities for academics and professionals, scholarship programs for some thirty students every year, and research internship opportunities in industry.

Mile-Ex neighborhood in Montreal, an AI hub where you can find Mila, IVADO, Element AI, Borealis AI, Microsoft and more…

The ecosystem was growing fast, but it was also evolving into a more collaborative one. It is no wonder that many AI researchers were headhunted by companies. Interestingly, most of them declined full-time offers and chose to remain independent, joining the private sector only part time to lead research laboratories. Notable examples are Joelle Pineau and Doina Precup, both professors at McGill University and world-renowned experts in reinforcement learning, who lead scientific teams at the Facebook and DeepMind laboratories in Montreal. This cooperative model has two measurable effects. On one hand, it gives companies direct access to the state-of-the-art research produced in university labs. On the other, it accelerates the training of new talent in the field: these researchers end up with more resources and with direct access to industry problems for their students to focus on, thus nourishing the ecosystem.

In the financial sector, one of the first banks to adopt this hybrid model was the National Bank of Canada. The sixth-largest bank in the country, headquartered in Montreal, launched an AI-driven transformation initiative in 2018 and appointed Manuel Morales as its Chief AI Scientist.

One thing that distinguishes Montreal and the Canadian ecosystem from other tech hubs such as London or San Francisco is that all players work together. Universities have a long tradition of collaborating among themselves and with industry, supporting companies from early-stage startups to multinationals. Moreover, corporations are invested in the growth of startups through incubators and accelerators. It is not rare to see corporations among the early adopters of home-grown technologies developed by startups, joining the creation process or helping with prototyping, and thus accelerating innovation.

One of the key elements that unites the whole ecosystem is the desire to address the ongoing public debate on the impact of AI on society and to create a responsible AI movement, one that respects fundamental values such as privacy and equality. The Montreal Declaration for the Responsible Development of Artificial Intelligence was one of the first expressions of this movement. It is a detailed consensus on ethically responsible principles — not rules, but principles — among researchers, professionals, and public figures. This initiative was followed by the creation of the provincially funded International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technologies in Quebec City, a multidisciplinary center whose mandate is to support researchers and organizations in their reflections on the responsible deployment of AI technologies.

An overview of the timeline of the AI ecosystem in Montreal, courtesy of IVADO.

Responsible AI and Governance

One of the key objectives of governance is to ensure that legal and ethical obligations are respected; as such, it is one of the main pillars of responsible AI. One of Manuel’s responsibilities at National Bank was to ensure a comprehensive governance framework for all AI processes deployed by the bank. It was in this context that Dominique Payette (another co-author of this piece) and Manuel met and worked together on establishing the components necessary for that governance. Dominique is a lawyer at the National Bank who has been developing expertise in digital and analytics matters.

The road to a comprehensive and effective governance framework is a complex one. First, it is important to thoroughly understand the environment in which an AI model will operate: the field, the organizational structure, the end consumers, the potential impacts, as well as the task the model will perform. It may also include drafting enhanced guidelines, principles, or codes of conduct. Proper risk assessment targets the full life cycle of AI models and projects: risks should be assessed at the outset (which has prompted “by-design” movements such as fairness-, ethics-, and privacy-by-design), then tested and monitored continuously throughout deployment and until end of life.
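To make the life-cycle idea concrete, here is a minimal, purely illustrative sketch of what one entry in a model’s governance log could look like in code. The stages, field names, and example risks below are our own assumptions for illustration, not a prescribed standard or any bank’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical life-cycle stages an AI model passes through under governance.
STAGES = ("design", "development", "validation", "deployment", "monitoring", "retirement")

@dataclass
class RiskAssessment:
    """One risk-assessment entry in a model's governance log (illustrative schema)."""
    model_name: str
    stage: str                                       # must be one of STAGES
    assessed_on: date
    risks: list = field(default_factory=list)        # risks identified at this stage
    mitigations: list = field(default_factory=list)  # safeguards put in place

    def __post_init__(self):
        if self.stage not in STAGES:
            raise ValueError(f"unknown life-cycle stage: {self.stage}")

# The log accumulates assessments from design ("by-design" reviews)
# through continuous monitoring, until end of life.
governance_log = [
    RiskAssessment("credit-scoring-v1", "design", date(2019, 3, 1),
                   risks=["proxy discrimination via postal code"],
                   mitigations=["fairness-by-design review of input features"]),
    RiskAssessment("credit-scoring-v1", "monitoring", date(2019, 11, 15),
                   risks=["data drift in applicant population"],
                   mitigations=["monthly stability report to the model risk team"]),
]
```

The point is not the code itself but the discipline it encodes: every stage of a model’s life leaves an auditable trace.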

Responsible AI obviously starts with AI respecting the law. Regardless of its coolness and smarts, AI is, like any other product or tool, fully subject to applicable legislation and regulation — just as the use of an airplane would be. Moreover, AI deployers are responsible and liable for legal damage: just as an airline is legally responsible for its aircraft and their automated functions, a financial institution is responsible for its automated trading capacities. By way of example, AI must not be operated in a way that would be negligent, and deployers must respect targeted statutory laws governing personal data processing, such as the General Data Protection Regulation in the EU and PIPEDA in Canada.

Some fields in which AI is currently being developed are already regulated, meaning there is an overseeing regulator with rule-making and enforcement powers over activities in those fields. In Canada (as in many other jurisdictions), the foundational guiding regulatory principle is the protection of end consumers and the general public; the second is market efficiency. These principles remain equally applicable when regulators consider the emerging risks associated with AI and assess whether AI is safe and unharmful. The challenge here is rather the novelty and sophistication of this new technology. Some regulators have created sandboxes: contained spaces that allow innovative services, like a new robo-advisor selling insurance, to be tested in a controlled but less-regulated environment. This helps determine how to comply with applicable rules once in the regulated space. This is the case of the Autorité des marchés financiers in Québec, another key player in the ecosystem.

The notion of responsible AI goes beyond legal liability, however. AI deployers also have ethical and social responsibilities. There is currently a worldwide responsible AI movement, and several initiatives have issued documents establishing these responsibilities. Common principles for responsible AI have thus emerged and become generally recognized: transparency, fairness, explainability, accountability, robustness and reliability, respect for privacy, individual autonomy, well-being (or the prevention of ill-being), and environmental sustainability. Even though none of these documents is prescriptive or binding, they exert pressure on AI stakeholders in a way that has a concrete impact on their ethical responsibility.

A picture of the Berlin Wall that we felt was very representative of our thoughts

Universities and research centers are key players in the discussion

University research centers are natural neutral grounds for working out the concepts that make up responsible AI deployment. In Montreal, IVADO and Fin-ML have been important contributors through workshops on the many facets of responsible AI. They have also provided spaces for meaningful discussions between regulators, researchers, and industry, such as our conference on AI Governance.

Fin-ML was key to this initiative through the participation of Rheia Khalaf (the third co-author of this piece), Director of Collaborative Research & Partnerships. She spent most of her career in actuarial and risk management roles in the corporate sector, and has always been close to data and modeling. What attracted her to Manuel’s initiative was the opportunity to contribute to Canadian AI development and her desire to facilitate innovation in the financial field, a relatively rigid environment.

Trust in AI, or rather the lack thereof, is truly a barrier to the deployment of AI models, and therefore a barrier to innovation. Using AI, financial institutions can develop customer-centric approaches, reduce costs by automating time- and labor-intensive tasks, or protect consumers with efficient fraud detection. Yet in her role of scouting research opportunities in the financial sector, Rheia quickly noticed hesitation around the implementation of AI models. In various projects, it was the lack of understanding of AI that created reluctance. In the financial industry, it is essential to understand your models: supervisors of financial institutions and internal audit teams have to be able to review capital requirement calculations, and portfolio managers and brokers must justify their investment choices.

Trust is built through understanding the outputs of an AI model, and therefore the decisions that result from it. This touches upon explainability (explaining the mechanisms of a model) and interpretability (discerning the mechanics of a model by understanding cause and effect). It answers the question: how do I ensure that my governance principles are respected? Of course, this is highly context-dependent. You would care less about why Spotify suggests one song and not another. It becomes far more important when AI-driven decisions affect people’s lives, such as decisions made by governments, employers, or financial institutions — regarding immigration status, candidate selection, or credit products, for instance. Since AI models tend to be complex and opaque by nature, these concepts have come under increasing scrutiny and have caught our attention, and that of other researchers, as priority fields to explore.
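As a minimal sketch of what interpretability can look like in practice, consider a toy credit-approval model; the features, data, and labels below are invented for illustration only. With a transparent model, the coefficients already tell a reviewer which inputs push a decision, and a model-agnostic check such as permutation importance confirms what the model actually relies on:

```python
# A toy interpretability check on a synthetic credit-approval task.
# Features, data, and labels are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "late_payments"]
X = rng.normal(size=(500, 3))
# Synthetic approve/deny labels, driven mostly by debt ratio and late payments.
y = (X[:, 1] + X[:, 2] + 0.3 * rng.normal(size=500) < 0).astype(int)

model = LogisticRegression().fit(X, y)

# Interpretability: a linear model's coefficients directly expose how
# each input pushes the decision toward approval or denial.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")

# Model-agnostic check: permutation importance measures how much accuracy
# drops when one feature is shuffled, i.e. how much the model relies on it.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(features, result.importances_mean):
    print(f"{name:>15}: importance {imp:.3f}")
```

For an opaque model, the coefficient readout disappears, which is precisely why tools of this kind, and the research around them, matter for governance.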

This is why interpretability of AI in the financial industry is at the heart of a large-scale project we are currently piloting, all the while mobilizing academic and industry partners around it. Our goal, through industry use cases, is to learn how to better monitor AI models and improve risk management systems and corporate governance. This would result in an economic environment that fosters both safety and innovation.

An overlay of our ideas. Sometimes, an image speaks louder than words.

Once again, trust does matter

Canadians have a reputation for being caring and for expressing solidarity with one another. We try to be inclusive and respectful of other cultures, and especially of every person’s rights. Perhaps this is why AI scares many of us: abusive uses of AI technologies can disrupt those values. In response, our society demands to be reassured.

Through our involvement in this ecosystem, we have witnessed the scope and depth of different initiatives, and a community being created on a basis of exchange and trust. Even though the Canadian financial industry is very competitive, when it comes to topics that touch on consumer protection — cybersecurity, data privacy, and ethics, to name a few — there is a strong propensity to join forces, a willingness to work together and to share ideas collectively in order to reach farther. We have also seen the desire of different players to stay close to one another in these untapped fields, for mutual benefit. Regulators want to stay close to practitioners, to better monitor developments. Industry practitioners lack the time, and sometimes the specialized expertise, to delve into modeling, and turn to academics and researchers. Startups are breaking rigid barriers by moving fast, without the same constraints on change. It is no wonder that all of them accept invitations to talk to, understand, and trust each other at the various initiatives, communities of practice, panels, and conferences organized nowadays, and that they all want to unite and reach further in such a cohesive environment.

The workshop we organized in the fall of 2019 was the beginning of our journey: seeking to lay down the common principles and recommendations on which the Montreal financial sector can agree, and hence to set the regulatory and governance foundations that would allow for large-scale AI deployments in the financial sector.

About the authors

Manuel Morales, Associate Professor of Mathematics & Statistics at the University of Montreal, Chief AI Scientist at the National Bank of Canada, and Director of Fin-ML

Rheia Khalaf, M.Sc., FSA, FCIA, CERA, Director of Collaborative Research & Partnerships of Fin-ML, and member of the Partnerships team of IVADO

Dominique Payette, Lawyer, Legal Affairs at National Bank of Canada, and collaborator with Fin-ML
