Responsible AI: A Global Perspective

James Wilson
Eliiza-AI
Jun 26, 2018 · 5 min read

Last week I spoke at the first Responsible AI event in Melbourne. The idea behind the event is to bring together people from a range of backgrounds to debate important issues associated with AI/ML such as trust, transparency, fairness and ethics. With over 150 people registered and 100 turning up on the night, I was blown away by the level of interest and passion for the topic. It was great to see people from such a broad range of backgrounds, including philosophy, law, technology and ethics, and the different perspectives they bring to the conversation.

The aim of my talk was to highlight some of the fantastic work being done globally on the topic of Responsible AI and get the audience thinking about how we in Australia can contribute.

Responsible AI players can be broadly categorised as follows:

  1. Governments
  2. Public, private, academic partnerships
  3. Individual companies

1. Governments

A number of governments have announced strategies and investment in AI. Of these, I've identified three that have placed Responsible AI considerations such as ethics, trust, transparency and fairness at the centre of their strategy.

UK

In 2017 the British government published a review of the UK's AI industry. Following the review, it committed £300m to AI research, including plans to establish a new £9m Centre for Data Ethics and Innovation to examine possible structural changes to jobs, data privacy and safety.

“The government will create a new Centre for Data Ethics and Innovation to enable and ensure safe, ethical and ground-breaking innovation in AI and data driven technologies. This world-first advisory body will work with government, regulators and industry to lay the foundations for AI adoption”

UK Autumn Budget, 2017

Further reading

University of Cambridge, Centre for the Study of Existential Risk: https://www.cser.ac.uk/research/risks-from-artificial-intelligence/

Leverhulme Centre for the Future of Intelligence: http://lcfi.ac.uk

The Alan Turing Institute: https://www.turing.ac.uk/data-ethics/

The House of Lords, Artificial Intelligence Committee, AI in the UK: Ready, Willing and Able? https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm

France

In March 2018 the French President "presented his vision and strategy to make France a leader in Artificial Intelligence", titled "AI For Humanity". The strategy consists of three pillars, one of which is "Establishing an ethical framework". The French government has committed €1.5b of funding by 2022 to the AI For Humanity strategy.

"The President is committed to ensuring that transparency and fair use are central to algorithms… These two priorities of transparency and fair use will be subject to education programmes so that our future citizens will be prepared for these transformations."

AI For Humanity, March 2018

Further Reading

The full report is available in French and English here:
https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf

Canada

In 2017 Canada announced the Pan-Canadian Artificial Intelligence Strategy. Development of the strategy will be led by the Canadian Institute for Advanced Research (CIFAR), with investment of CAD$125m. One of its four goals is "to develop global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence".

“Canada and France wish to promote a vision of human-centric artificial intelligence grounded in human rights, inclusion, diversity, innovation and economic growth. The widespread use of these new technologies will have a profound effect on everyday life and societal progress, creating both opportunities and challenges”

Canada-France Statement on Artificial Intelligence, June 2018

Further Reading

Responsible Artificial Intelligence in the Government of Canada

The Montreal Declaration on Responsible AI: https://www.montrealdeclaration-responsibleai.com/the-declaration

CIFAR AI & Society: https://www.cifar.ca/assets/artificial-intelligence-society/

2. Public, private and academic partnerships

AI Now Institute: “Interdisciplinary research center dedicated to understanding the social implications of artificial intelligence”

AI For All: “A nonprofit working to increase diversity and inclusion in artificial intelligence. We create pipelines for underrepresented talent through education and mentorship programs around the U.S. and Canada that give high school students early exposure to AI for social good”

Partnership on AI: “Multi-stakeholder organization that brings together academics, researchers, civil society organizations, companies building and utilizing AI technology, and other groups working to better understand AI’s impacts”

OpenAI: “A non-profit AI research company, discovering and enacting the path to safe artificial general intelligence”

3. Individual Companies

Google

In a blog post published in June 2018, CEO Sundar Pichai shared Google's seven AI principles: https://blog.google/topics/ai/ai-principles/

Microsoft

Microsoft has published its AI principles and values: https://www.microsoft.com/en-us/ai/our-approach-to-ai

What about Australia?

As part of the 2018 budget, the Australian government announced an investment of AU$29.9m over four years in projects that make use of AI. The bulk of the funding will be delivered through the Department of Industry, Innovation and Science's Cooperative Research Centres (CRC) program.

The money will also be used to develop an AI ethics framework, with funding allocated for PhD scholarships and school-related learning as well. Most of the funding will arrive in the 2019/20 financial year.

The Australian Computer Society has established an Artificial Intelligence Ethics Committee. The makeup of this committee was announced in late 2017.

Australia’s Chief Scientist, Dr Alan Finkel, gave the keynote address at a Committee for Economic Development of Australia event titled ‘Artificial Intelligence: potential, impact and regulation’ in Sydney on 18 May 2018. In this speech he proposed “The Turing Certificate”: “A set of standards verified by independent auditors that certify the AI developers’ products, their business processes, and their ongoing compliance with clear and defined expectations.”

Summary

I enjoyed researching this talk. Governments, partnerships and individual companies are recognising the transformative potential of AI and the risks of getting it wrong, and are investing resources in strategies to mitigate those risks. I’d love to hear from you if you have examples of other groups around the world who are leading the charge on Responsible AI.

James Wilson

CEO @Eliiza-AI. Interests include AI, data science, machine learning, digital transformation.