Why Do Companies and Cities Invest in AI Ethics and Governance?

Meeri Haataja
Berkman Klein Center Collection
6 min read · Oct 30, 2019
Photo by Mimi Thian on Unsplash

As the CEO and Co-Founder of Saidot, a start-up on a mission to enable responsible AI ecosystems, I get a lot of questions about the state of AI ethics and governance in large organizations, and about why more and more companies are acting and investing in this domain. Below I share what I’ve learned from my own experience talking and working with dozens of organizations, from large corporations to very small start-ups, private and public, around the world.

We started in this space 1.5 years ago because it had become very clear that the ways algorithms shape how our societies operate, how people are influenced, and how power is exercised will require a major shift in how we govern our technology. As the business owner of an AI portfolio in a large corporation, I became uneasy, and I know many of my AI co-leads at other companies have become uneasy too.

This progress has been faster than we expected.

Why?

Finances at risk

According to Gartner, the business value created by AI will reach $3.9T by 2022, while IDC forecasts that worldwide spending on AI will reach $77.6B in the same year [1]. Each invested dollar is thus expected to deliver roughly 50X in business value.
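That 50X is simply the ratio of the two forecasts, assuming both refer to 2022: $3,900B of business value divided by $77.6B of spending is roughly 50. It is a back-of-the-envelope figure, comparing one firm’s value estimate against another firm’s spending estimate, but it conveys the scale of the expectations.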

One of the industries with the highest expectations for the AI business case is financial services. According to Deloitte, frontrunner financial services firms are achieving companywide revenue growth of 19% directly attributable to their AI initiatives. Already, 70% of all financial services firms use machine learning in production environments today [2].

This business case is at risk due to growing concerns about the unintended consequences of AI and a lack of trust. No wonder financial services companies seem to be the most active in recruiting people specifically into AI governance roles. Last year in Finland, the first company to be fined for a discriminatory algorithm was, tellingly, from the financial sector [3].

Financial risk, tightly connected to a company’s ability to manage its reputation, is the strongest driver for action. Stronger than compliance.

Trust, social responsibility and reputation

Recently, the leaders of the largest companies in America received a lot of attention for their statement on the new purpose of a corporation: beyond shareholder value, companies should commit to delivering value to all of their stakeholders, from customers to employees to wider society [4].

For a digital company, corporate responsibility is largely focused on responsible AI. H&M Group is one example of a major consumer business active in this area. To my knowledge, H&M was one of the world’s first companies with a job role solely focused on AI governance. Linda Leopold, Head of AI Policy at H&M, says the discussion has shifted from compliance to doing the right thing: “AI is a powerful tool and allows many new possibilities. But that we can do something doesn’t mean we should do it.” [5]

Employees are increasingly watching over corporate AI ethics policies. One noteworthy example was Google’s choice to withdraw from bidding on the US Department of Defense’s $10 billion JEDI contract. Google withdrew after employees signed a petition and some resigned, arguing that Google’s involvement in military projects conflicts with its corporate values and its principles on the ethical use of AI [6]. Google has also been criticized for its short-lived AI ethics board, as well as for its involvement in building infrastructure for smart cities.

Discussions about social responsibility wouldn’t be complete without a brief look at cities. Cities are both important and active influencers in AI governance because they sit at the intersection of digital markets, quickly digitizing public space, and citizens. While some cities have received harsh criticism for opaque AI initiatives, many are active in AI responsibility. For example, New York is planning to invest $7M over three years in its Center for Responsible Artificial Intelligence. Earlier this year, Amsterdam, New York and Barcelona formed the Cities Coalition for Digital Rights to “protect and uphold human rights on the internet at the local and global level” [7].

The organizations with the highest business-value expectations for AI, those using it for high-stakes decisions in trust-based consumer industries and in local government, also have the strongest incentive to take the initiative in AI ethics. Players in smart cities, health, transportation, financial services and education are some examples that fall into this category.

Compliance

Compliance is a powerful motivation too. Let’s make one thing clear right away: AI regulation exists, and more is coming.

In Europe, GDPR requires data protection impact assessments and transparency for systems used in automated decision-making and profiling. A look at companies’ current practices for informing data subjects makes it obvious that more enforcement and more specific rules are needed. One problem GDPR does not address is the apparent risk of intended or unintended discrimination through algorithms.

These concerns are among the reasons facial recognition bans are spreading across US cities at a pace no one expected. Four major cities have now banned the technology, and many more are considering similar bans. While facial recognition is already widely used in surveillance and, for example, airport controls (see the interactive map [8]), it will be among the most disruptive and fastest-evolving areas of AI regulation in the near future. But that’s not the whole story of AI regulation.

This year we have also come to know California’s ‘bot law’, which bans using a bot to interact with a person with the intent of misleading them about its artificial identity. The Government of Canada requires algorithmic impact assessments for all of its machine-learning-based decision-making systems. The FDA has released its preliminary regulatory framework for ML-based software as a medical device.

The EU Commission’s digital department recommends a regulatory framework for AI that would impose transparency obligations on automated decision-making, as well as assessments to ensure such systems do not perpetuate discrimination or violate fundamental rights such as privacy [9]. Upcoming EU regulation, promised for early 2020, will benefit from the recent recommendations of the German Data Ethics Commission, which suggest strong measures against ethically indefensible uses of data [10].

OK, you get it: more regulation is coming. I predict 2020 will be the big year of AI regulation.

Contractual requirements

Finally, AI vendors and technology providers will soon need to prepare for new types of procurement terms that lay down guarantees of AI responsibility. Many vendors have anticipated this, but now we’re seeing intentions turn into action. Most recently, the World Economic Forum published its guidelines for the public procurement of AI, aiming to help governments safeguard public benefit and well-being [11].

As more lawsuits are brought over algorithms, like that of the investor who sued after a trading AI’s fully automated loss of $20M, we will see more and more contracts tied to transparency and to real-world evidence of how AI systems behave [12]. I believe litigation attorneys already see what the rest of us in the industry will learn over the next few years: opacity works until something breaks.

____

While I think these are the most important drivers behind why many companies and cities are already taking the initiative on AI ethics and governance, I warmly welcome feedback and recognize there is a long list of other, related motivations. More influential AI, more visible problems and more data shared between organizations outline an interesting reality, in which companies themselves ask for standards and regulation to bring much-needed clarity and predictability to the risky business of algorithms.

Written by Meeri Haataja, CEO & Co-Founder of Saidot, Affiliate at the Berkman Klein Center for Internet & Society at Harvard University, and Chair of IEEE’s Ethics Certification Program for Autonomous & Intelligent Systems

____

References

1: Forbes 3/27/2019: Roundup Of Machine Learning Forecasts And Market Estimates For 2019

2: Forbes 8/15/2019: Why AI Is The Future of Financial Services

3: AlgorithmWatch: Automating Society - Taking Stock of Automated Decision-Making in the EU (pp.59–60)

4: Business Roundtable 8/19/2019: Business Roundtable Redefines the Purpose of a Corporation to Promote ‘An Economy That Serves All Americans’

5: H&M 6/26/2019: Meet Linda Leopold, Head of AI Policy

6: Business Insider 10/9/2018: Google drops out of contention for a $10 billion defense contract because it could conflict with its corporate values

7: Open & Agile Smart Cities 3/1/2019: Can Cities Be Guardians of Digital Rights?

8: Ban Facial Recognition interactive map

9: Politico 7/18/2019: Next European Commission takes aim at AI

10: Fortune 10/24/2019: A.I. Regulation Is Coming Soon. Here’s What the Future May Hold

11: World Economic Forum 9/20/2019: AI Government Procurement Guidelines

12: Bloomberg 5/6/2019: Who to Sue When a Robot Loses Your Fortune
