“Responsible AI: from theory to action”, a look back at IBM’s recent AI conference
“In Europe, artificial intelligence (AI) is being driven by human values, as illustrated by the recent AI for Humanity report. But the consensus within Europe is not necessarily shared by the G20,” explains Mathieu Weill, who heads the Digital Economy Department at the French government’s Directorate General for Enterprises. Questions surrounding the development of AI that is inclusive and ethical, or at least bias-free, were examined during the Responsible AI: From Theory to Action conference organized by IBM on 9 November at the Cloud Business Center in Paris. We take a look back at the discussions, which were attended by Renaud Champion, Director of Emerging Intelligences and Executive Director of emlyon business school’s Institute for Artificial Intelligence in Management.
Safeguarding digital ethics
“A broad-based discussion about digital ethics is overdue,” argues Véronique Torner, co-Chairwoman of Alterway, a web platform specialist, and Board member of Syntec Numérique, the union that represents French digital services companies. In October, Syntec Numérique and CIGREF, an IT network linking major French companies, co-published a master framework on the issue.
The framework provides a checklist of questions for any manager drawing up a company assessment in this area. Has the IS department set up training programs covering the ethical design of digital tools? Does the company identify and address risks of bias linked to the datasets used? Are digital tools designed in a way that considers accessibility for disabled people? Has an approach aimed at improving the IS environmental footprint been put in place? The master framework uses three broad themes to define digital ethics: ethics by design, which covers ethical issues associated with designing tools and algorithms; ethical use by employees and clients; and social ethics, which deals with questions about the impact of technology on society. “Companies can use the framework to adopt an approach based on transparency, explainability from the design stage, and checks right along the value chain,” stresses Véronique.
Betrayed by its biases?
More generally, with the spread of AI as a tool for learning and decision support in technical areas, its reliability is under scrutiny as never before. “How can you tell if an AI-based recommendation is better than one from an expert?” asks Francesca Rossi, AI Ethics Global Leader at IBM Research. In the medical field, for example, while experts generally outperform machines, their assessments are more effective when supported by AI. Jean-Philippe Desbiolles, Vice-Chairman of Cognitive Solutions at IBM Watson France, argues that people are the primary cause of biases. “AI, people and biases are inseparable from each other,” he says, “because people are the source of the machine and we bring our own biases. The human factor is the one that has to be corrected first and foremost.”
AI needs to reflect diversity
But it is not the only aspect that needs to be addressed. The potential for AI to increase inequalities must be considered from the design stage of any new program or algorithm. As part of its input to the Digital Republic Act, France’s Data Privacy Authority (CNIL) held a public debate whose conclusions it detailed in its 2017 report on the ethical issues raised by algorithms and artificial intelligence, with a focus on safeguarding personal data when new technologies are deployed. “CNIL’s role is to support public and private participants in their digital transformation. The project is a vast one and will be grounded in the basic principles of fairness and vigilance that we have laid down,” says Sophie Nerbonne, CNIL’s head of compliance.
Promoting more inclusive AI
Other social issues also have to be factored in. Could it be, for example, that AI will increase the inequalities that already exist in our society? French politician Céline Calvez says that AI cannot be designed using just 50% of the population, arguing: “If a database is made up exclusively of white men, how can the machine recognize black women? You have to anticipate AI biases so that parameters are fair and promote diversity and parity.”
Parity, in particular, is still some way off, when you consider that women make up just 27% of engineering school students, for instance. And the trend is not improving. “We have noted a significant decline in women in computing sectors, and especially in AI-related industries,” says Christine Hennion, a member of the French National Assembly who sits on the Assembly’s Economic Affairs Committee.
A number of important digital players, including CIGREF, have thrown their weight behind Femmes@numérique, a group set up to highlight the question of women’s representation in the digital arena at the national level.
Ensuring that diverse profiles are taken into account when designing algorithms is vital to building responsible AI. With the Villani report published in March 2018, the government identified AI as an issue of national concern and set clearly stated inclusion targets. Goals include raising the proportion of female students in digital subject areas to 40% by 2020 through a policy of positive incentives, and working with digital sector participants to develop a nationwide diversity initiative, which could set a short-term target such as a 30% increase in women in the sector over the next two years. The path towards truly responsible AI will be a long one, though, not least because efforts appear to be concentrated in Europe.
A key driver in transforming education
Responsible AI is also a technology that can benefit society as a whole. If 80% of the jobs that will exist in 2030 are going to be shaped by technology and have not even been invented yet, AI must be able to play a role in transforming education, as we have already discussed here and here.
Pierre Dubuc agrees. “We are soon going to be facing a society-wide issue, where about a billion people need to be trained in digital skills,” says the CEO of OpenClassroom, Europe’s leading e-education site with around three million users every month and over 1,000 online courses. “Plus you have to consider the fact that young people entering the job market today will have to retrain every five years, since it’s estimated that they will do between 10 and 15 different jobs over their working life.”
Rethinking AI management
With its brand-new Institute for Artificial Intelligence in Management, emlyon business school studies each sector and analyses job trends by combining entrepreneurial, anthropological, scientific and other viewpoints. The managers of the future will be expected to have hybrid skills blending management sciences, engineering and ethics. Renaud Champion, who directs the institute, says: “Research needs to be done so that we can understand in pragmatic terms how AI is set to impact our society and companies. AI is at the heart of what we do, which is to train the managers of tomorrow.”