In search of the ‘Good AI society’


Artificial Intelligence (AI) is a widely applicable technology that has the potential to contribute to better individual and societal outcomes. Some advocate the development of an ethical framework for AI, to guide us towards a ‘good AI society’, as one study put it.

However, this raises the questions of what is good, who decides, and how AI ethics can be reconciled with a market economy and institutions that allow innovation without permission, subject to ex ante and ex post checks on harm.

The application of specific ethical principles to AI may, paradoxically, lead to unethical outcomes: in particular, reduced innovation and productivity growth, which would constrain the resources and options available to society; and the concentration of power in the hands of those charged with deciding what is good, which would narrow the scope for individual agency.

Given that AI is a general-purpose technology like steam, electricity and computing, new ethics, rules and institutions focussed on AI may be either too broadly targeted (i.e. they should focus on particular applications, such as military use of AI) or too narrowly targeted (i.e. they should apply to all technologies and human decisions). This raises a further question: are new ethical principles necessary, or should existing frameworks be adapted as required in an age of AI?

Therefore, before applying new ethical principles to AI, we should assess the efficacy of the counterfactual, namely whether existing frameworks are robust and fit for purpose in view of technological change generally, and indeed whether in some respects they may constrain beneficial applications of AI.

In evaluating the merits of proposed ethical principles for AI, we should consider the counterfactual: the Pareto principle and the ethical underpinnings of intervention in markets. We should also take into account the scope for innovation without permission, which has underpinned economic progress over the past few centuries, and the implications for the concentration of power, not just in markets but in institutions and groups of experts.

Change is likely required, but ethics for AI may not be the change we are looking for.

Ethics for AI

A number of studies have addressed the question of ethics in relation to AI.[1] Whilst they vary, they have elements in common. I consider key and recurring elements of these proposals, in terms of both the objectives proposed and the means of delivering them.

Proposals in relation to ethics and AI may, at least implicitly, be conceived as applying to all AI-based applications, but not necessarily to non-AI-based applications, in particular human-based decisions and services. Further, their interaction and reconciliation with existing public policy is not typically spelt out.

Objectives

Examples of objectives proposed in relation to AI include ‘benefit society as a whole’, ‘contribute to the good of society’, ‘promote the common good and benefit humanity’ and ‘promote well-being’.

Whilst these objectives appear unobjectionable, what precisely is meant as a matter of principle is typically left unclear, though specific aims may be set out. For example, Floridi et al (2018) mention, in a review of other studies: a fairer or more efficient distribution of resources, more societal cohesion and collaboration, eliminating all types of discrimination, contribution to global justice and equal access to the benefits of AI. Floridi et al (2018) go on to set out the following objective:

“…AI should be designed and developed in ways that decrease inequality and further social empowerment, with respect for human autonomy, and increase benefits that are shared by all, equitably.”

Means

Proposals for an ethical approach to AI also typically include procedural elements and preferences for the way in which AI operates.

These include, for example, explicability, a right to explanation and a right not to be subject to a decision based solely on automated processing of data (the latter incorporated in Article 22 of the EU General Data Protection Regulation, the GDPR).

Further, proposals set out by the European Political Strategy Centre (EPSC)[2] include a ‘human in the loop’ principle, namely that AI should augment human abilities but not substitute for them, and periodic tests and retraining to ensure that humans could still perform the task in question in the event of a technology breakdown (principles that would not be met by many existing technologies and applications).

Another recurring focus is the potential for bias in AI systems, with the EPSC noting that “While augmenting humans’ capabilities, AI can also exacerbate existing power asymmetries and biases.”

Some studies also include specific policy and institutional proposals. For example, Floridi et al (2018) propose financial incentives for the development and use of AI technologies that are ‘socially preferable’, potential certification for ‘deserving’ products and services, and a European oversight agency for the evaluation and supervision of AI products, software, systems or services (Recommendation 9).

Options and criteria for evaluation

Broadly, there are two distinct approaches that could be adopted in relation to AI:

· Allow innovation without permission, subject to economy-wide competition, consumer and data protection law and specific regulation of some applications (in line with existing services, or adapted given the nature of AI-based services).

· Adopt a distinct ethical approach to AI, potentially with an oversight agency, involving specific goals and procedures.

In evaluating these alternatives, consideration is given to their ethical underpinnings, the scope to increase social surplus via productivity growth, the distribution of social surplus, other specific goals such as unbiasedness, and the degree to which power is concentrated.

Ethical underpinnings of alternative approaches

Innovation without permission places weight on the market. Under certain idealised conditions, a competitive market leads to a Pareto-efficient outcome, a result associated with Arrow and Debreu[3] (a Pareto-efficient allocation is one in which no reallocation could make at least one individual better off without making another worse off). The Pareto principle is silent on the distribution of resources.

In welfare economics and public policy evaluation, potential Pareto improvements are considered: a change that benefits some parties and harms others is chosen if the winners could, in principle, compensate the losers so that no one is left worse off than in the initial situation. In practice, change generally makes some worse off and some better off, and is pursued if there is a net benefit.
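Stated formally (a standard textbook formalisation; the money-metric notation is illustrative rather than drawn from the sources above):

```latex
% Pareto efficiency: with u_i(x) the utility of individual i under
% allocation x, x is efficient if no feasible y dominates it.
x \text{ is Pareto efficient} \iff
\nexists\, y \text{ feasible s.t. } u_i(y) \ge u_i(x)\ \forall i,\
u_j(y) > u_j(x) \text{ for some } j.

% Potential Pareto (Kaldor--Hicks) improvement: winners could in
% principle compensate losers. With \Delta W_i the money-metric
% equivalent of i's gain or loss from the change x -> y:
x \to y \text{ is a potential Pareto improvement} \iff \sum_i \Delta W_i > 0.
```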

There are, of course, moral limits to markets.[4] These include externalities, inequality and concentrations of power, and these are subject to ‘orthodox’ public policy intervention. Further, distributional goals are in general addressed via broadly based policies independent of specific applications of technology. The overall approach involving markets and targeted intervention does have an ethical underpinning.

The alternative AI-specific ethics approach allows the adoption of whatever set of ethical goals are thought desirable. However, the specific ends and means that have been suggested in relation to ethics and AI may come at the cost of foregone innovation and application of AI, leaving less social surplus to pursue social goals. Further, the direct application of principles that appear ethical to AI may also result in unintended consequences, for example, continued reliance on human decision making which may be less transparent, more biased or less safe than an automated system.

Ethics and what constitutes ‘good’ are not new considerations, but features of existing law, institutions, markets and market interventions. If an alternative ethical standard is proposed, it should be assessed against the status quo: the Pareto standard applied in welfare economics and the ethical principles underpinning law and distributional goals.

Moreover, it needs to be tested relative to the existing standard, not just in terms of the attractiveness of the ethical principle itself, but also in terms of the anticipated consequences of implementation in practice. In particular, what impact would it have on the default position in markets of innovation without permission, which has underpinned economic progress? And are the anticipated consequences of implementation consistent with the proposed set of ex ante ethical principles?

Further, if the principle is superior in some respects, should it be applied more generally to all technologies and human decisions, or is it relevant only to a subset of applications of AI? In other words, is AI per se too broad a class of potential applications to be the relevant category for ethical and policy analysis?

Productivity growth and social surplus

“Productivity isn’t everything, but in the long run it is almost everything. A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise output per worker.” Paul Krugman, 1994[5]

There are multiple dimensions to what individuals or society might consider good, and what is good may be decided via individual choice or collective decision processes (in practice both operate). However, having more social surplus increases the scope to pursue specific aims, and is therefore an overarching consideration (it is not the only consideration[6], but it is an important one).

Productivity growth is the driver of growth in real income per hour worked, and productivity growth is driven by harnessing innovation. Productivity and income growth have, in turn, helped enable gains in leisure and life expectancy.

In 12 Western European countries, productivity increased 10-fold between 1870 and 2000, split between 5-fold real income growth and increased leisure, with no change in employment per capita (see the following figure).[7]

[Figure: Productivity growth and employment]
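The arithmetic behind this split is straightforward. A stylised decomposition using the rounded figures above (with Y output, H total hours worked, L persons employed and N population):

```latex
% Income per capita decomposes into productivity, hours per worker
% and employment per capita:
\frac{Y}{N} \;=\;
\underbrace{\frac{Y}{H}}_{\text{productivity}} \times
\underbrace{\frac{H}{L}}_{\text{hours per worker}} \times
\underbrace{\frac{L}{N}}_{\text{employment per capita}}

% Productivity up 10-fold with employment per capita unchanged and
% real income per capita up 5-fold implies hours per worker roughly halved:
5 \;=\; 10 \times h \times 1 \quad\Rightarrow\quad h = \tfrac{1}{2},
% the other half of the productivity gain being taken as increased leisure.
```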

These gains were driven by the application of innovation via a process of innovation without permission and ‘creative destruction’, with general-purpose technologies including steam, electricity and computing playing a particularly prominent role. AI can be viewed as a continuation of this process.

Whilst technology continues to contribute to productivity growth, the overall rate of growth over the past decade has been sluggish, which is puzzling.[8] This slowdown, and its consequences for income growth and the ability of governments to pursue social objectives, has likely contributed to anxiety regarding the future.

AI can play a role in restoring productivity growth, if we allow it to. We should therefore approach the notion that new and additional ethical requirements should apply to AI with caution, particularly if making these operational would undermine freedom to innovate without permission and individual agency to choose what services to consume.

Distribution of surplus

The process of ‘creative destruction’ and productivity growth does not benefit everyone, and certainly not immediately. But over time, and with appropriate involvement of the state in providing social insurance, universal services such as education, and a degree of income redistribution, almost everyone benefits.

However, it is not required of each and every innovation that everyone benefits (they don’t), nor that benefits are equally distributed (they never are). Indeed, productivity growth would be stopped in its tracks if these requirements were applied to AI or other technologies, and that hardly seems ethical.

Trust and explicability

Trust is possible even without understanding fully how something works, provided it works. Clinical trials illustrate how we can validate an approach even though our understanding of the underlying (biological) system is incomplete.[9] Amongst those advocating ethics for AI, not all agree that it should necessarily be explicable:[10]

“We should avoid efforts to pry open “black boxes” of algorithms; this is a fool’s errand.”

Further, it would be unwise to promote blanket trust in a general-purpose technology such as steam, electricity, computing or AI. Rather, consumers need to be able to discriminate between trustworthy and untrustworthy applications and providers; brand, ratings and reviews are amongst the mechanisms that can help users identify what is trustworthy.[11]

It would hardly be ethical to deny the use of AI in lifesaving settings because it was not fully explicable, and there are a growing number of applications where the way in which AI solves a problem may not be fully explicable, yet we can demonstrate that it works (DeepMind AlphaFold[12] protein folding AI and navigation for Google Loon internet balloons[13] are examples).

The idea that a human is necessarily in the loop, and that we should as a general requirement insist that humans can operate a system should AI fail, seems particularly ill-conceived. Humans are not necessarily in the loop in any meaningful sense in relation to many existing technologies and there are countless existing systems a human could not possibly operate manually.

One can also envisage applications of AI, for example for medical diagnosis, where it might be unethical not to refer some patients to a specialist automatically and promptly without a human in the loop. Cancer survival rates are, for example, dependent on rapid referral.[14]

Bias

Whilst efforts should be made to ensure that AI does not reflect existing bias[15], it is not clear that AI should be subject to additional requirements that do not apply to other technologies, institutions and human decisions (existing law in relation to bias should, of course, apply to AI).

The risk is that the potential of AI is held back by additional requirements, including its potential to do better than us in reducing bias. Automation may actually help eliminate bias in some contexts, by ensuring there is no human in the loop. For example, a customer with experience of bias, and of the personal costs of long-standing behavioural adjustment to it, made the following remark in relation to the automated Amazon Go store:[16]

“…everyone is just a shopper, an opportunity for the retail giant to test technology, learn about our habits and make some money. Amazon sees green, and in its own capitalist way, this cashierless concept eased my burden a little bit.”

However, whilst it is likely that biases of various kinds will arise in relation to AI services (given human bias in training data), they may be easier to detect and correct than bias in individual decisions and institutional systems (AI may at least be less of a black box than people, and does not have an incentive to deceive itself or others regarding bias). Re-educating AI may also prove easier and more reliable than re-educating humans, and the results easier to verify.
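To illustrate why bias in an automated system may be easier to detect: its decisions can be logged and audited directly, at scale. A minimal sketch in Python (the decision log, column names and the ‘four-fifths’ threshold are illustrative assumptions, not drawn from the studies cited):

```python
import pandas as pd

# Hypothetical decision log from an automated system (illustrative data).
log = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

def approval_rate_ratios(decisions: pd.DataFrame,
                         group_col: str,
                         outcome_col: str) -> pd.Series:
    """Approval rate of each group relative to the most favoured group.

    A ratio well below 1 flags potential disparate impact for review;
    it does not by itself establish bias, let alone unlawful bias.
    """
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

print(approval_rate_ratios(log, "group", "approved"))
# Group A: 1.00, group B: 0.33 -> group B falls below the common
# 'four-fifths' rule of thumb and would be flagged for investigation.
```

The same audit applied to dispersed human decisions would require assembling comparable records across many decision-makers, which is precisely what an automated system produces as a by-product.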

AI itself is also not corruptible, whereas humans may bias their decisions in return for reward. For example, a London-based peer-to-peer car service driver mentioned that he preferred the service to his previous work for a taxi company, because pick-ups were algorithmically determined based on proximity and willingness to accept a request, rather than by kick-backs to dispatch operators.[17]
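The dispatch rule described is simple enough to sketch. A toy illustration of proximity-based assignment (the data structures and planar distance are assumptions for illustration; real dispatch systems weigh many more factors):

```python
import math

def dispatch(pickup, drivers):
    """Offer the request to drivers in order of proximity; first to accept wins.

    pickup:  (x, y) coordinates of the rider.
    drivers: list of (driver_id, (x, y), accepts) tuples, where accepts
             is a zero-argument callable modelling the driver's choice.
    """
    for driver_id, position, accepts in sorted(
            drivers, key=lambda d: math.dist(pickup, d[1])):
        if accepts():
            return driver_id
    return None  # no driver willing to take the request

# Illustrative use: two drivers, the nearer one declines.
drivers = [
    ("d1", (0.0, 1.0), lambda: False),  # nearest, but declines
    ("d2", (0.0, 3.0), lambda: True),
]
print(dispatch((0.0, 0.0), drivers))  # -> "d2"
```

The point of the anecdote stands out in the code: there is no step at which a side payment can alter the ordering.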

Concentration of power

Power can become concentrated in markets, though it can be contested and is subject to potential checks via competition law. In a market, anyone with an idea, and every consumer, has a voice. Markets also harness self-interest for the common good, as Adam Smith noted:

“It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.”

Markets also help ensure that multiple judgements and bets regarding the future can emerge and are tested, and that decentralised information is harnessed. Regarding prediction, there is a long history of people, including the inventors of powerful technologies, making wildly wrong predictions, as this illustrates:

“The coming of the wireless era will make war impossible, because it will make war ridiculous.” Guglielmo Marconi, 1912

One should therefore be cautious about vesting too much power in an individual or committee, and ensure that prediction is contestable and tested. A framework is required that allows experimentation and tests ideas; both the scientific revolution and markets embody this approach.

Evolution by natural selection also illustrates how even a blind process of genetic trial and error, provided there is a correction mechanism, can achieve good design. Markets more closely mimic this process than centralised judgement, since the set of innovations is larger and the feedback process resulting in growth or elimination is swift and ruthless.

The market process is not blind as it is in the case of evolution, with competing visions of the future (as opposed to random mutations) put forward and tested. A committee of experts, no matter how well intentioned and informed, cannot replicate this process; and if its power becomes entrenched, it may no longer serve the public interest. Unlike in Smith’s market, benevolence is then no longer optional. Stakeholders may be consulted regarding their preferences, but this is no match for the power of individual consumer choice in revealing preferences.

Whilst an explicitly ethical approach may have utopian appeal, we should, as The Economist noted in an essay on Liberalism[18], be cautious of utopianism:

“Unlike Marxists, liberals do not see progress in terms of some Utopian telos; their respect for individuals, with their inevitable conflicts, forbids it. But unlike conservatives, whose emphasis is on stability and tradition, they strive for progress, both in material terms and in terms of character and ethics.”

From a liberal perspective, the rights of the individual, individual preferences and competing ideas are central to a dynamic process that brings about better outcomes. Further, a key element of liberalism is a distrust of power, particularly concentrated power.

Reappraising policy, law and regulation

As noted above, if a new and different ethical standard is proposed, it needs to be tested relative to the existing standard, both in terms of the attractiveness of the ethical principle itself and in terms of the anticipated consequences of implementation in practice, including its impact on the default position in markets of innovation without permission and its consistency with the proposed set of ex ante ethical principles.

Further, it is unclear why ethics for, and potentially regulation of, AI per se should be the focus. As Tom Standage (2018) put it:[19]

“…given how widely applicable AI is — like electricity or the internet, it can be applied in almost any field — the answer is not to create a specific set of laws for it, or a dedicated regulatory body akin to America’s Food and Drug Administration. Rather, existing rules on privacy, discrimination, vehicle safety and so on must be adapted to take AI into account.”

A re-examination of existing rules should arguably go beyond adapting them to AI (and other new technologies and business models) and include: examination of whether existing rules represent a barrier to potentially beneficial AI[20]; consideration of whether existing rules are applicable or should differ[21]; and consideration of whether the underlying mechanisms for creating legal infrastructure (the platform on which we build everything else in our economy) are up to the task[22].

A way forward

In assessing proposals for ethics for AI, the counterfactual is not a state of the world absent ethics, but the ethical underpinnings of a market economy subject to interventions to address the moral limits of markets (including externalities, market power and distributional concerns).

We should also consider whether proposed new ethical principles are superior generally, or in some domain, and if so whether they should apply generally or to a specific application — whether AI or non-AI based.

We should also consider the more orthodox question of whether existing law and regulation needs to change in view of technology and market change, including that driven by AI. A re-examination of existing rules should include: examination of whether they represent a barrier to beneficial applications of AI; consideration of whether existing rules are applicable or should differ in relation to new technology or applications; and consideration of whether the underlying mechanisms for creating rules are fit for purpose.

In considering these questions, the scope for innovation without permission, which has underpinned economic progress over the past few centuries, and the implications for the concentration of power, not just in markets but in institutions and groups of experts, should be taken into account.

We should also anticipate that AI will likely prove superior in some applications not just in terms of efficiency, but also judged against criteria including unbiasedness and safety. The relevant question is then likely to shift from ‘should there be a human in the loop?’ to ‘should we prevent humans from undertaking or intervening in an activity?’ Debate over this question is likely to prove predominantly political rather than ethical.

Change is likely required, but ethics for AI may not be the change we are looking for.


[1] Including the following:

AI Now Institute, AI Now Report 2018, December 2018. https://ainowinstitute.org/AI_Now_2018_Report.pdf

Floridi et al, An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations, November 2018. https://www.researchgate.net/publication/328699738_An_Ethical_Framework_for_a_Good_AI_Society_Opportunities_Risks_Principles_and_Recommendations

Paul Hofheinz, The Ethics of Artificial Intelligence: How AI Can End Discrimination and Make the World a Smarter, Better Place, May 2018. https://www.lisboncouncil.net/publication/publication/148-the-ethics-of-artificial-intelligence-how-ai-can-end-discrimination-and-make-the-world-a-smarter-better-place.html

House of Lords, AI in the UK: ready, willing and able?, April 2018. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf

Cédric Villani, AI for humanity, March 2018. https://www.aiforhumanity.fr/en/

European Commission communication, Artificial intelligence for Europe, April 2018. https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe

[2] European Political Strategy Centre, The Age of Artificial Intelligence — Towards a European Strategy for Human-Centric Machines, March 2018. https://ec.europa.eu/epsc/sites/epsc/files/epsc_strategicnote_ai.pdf

[3] Arrow and Debreu, Existence of equilibrium for a competitive economy, Econometrica, Volume 22, 1954.

[4] Jean Tirole, Economics for the Common Good, Princeton University Press, 2017.

[5] Paul Krugman, The Age of Diminishing Expectations, 1994.

[6] For example, see Fukuyama, Identity, 2018.

[7] Maddison, The World Economy: A Millennial Perspective, OECD Development Centre Studies, 2000.

[8] Byrne and Sichel, The productivity slowdown is even more puzzling than you think, August 2017. https://voxeu.org/article/productivity-slowdown-even-more-puzzling-you-think

[9] For example, the mechanism by which anaesthesia works has been poorly understood, but we happily make use of it. Perkins, How does anesthesia work?, Scientific American, February 2005. https://www.scientificamerican.com/article/how-does-anesthesia-work/

[10] Paul Hofheinz, The Ethics of Artificial Intelligence: How AI Can End Discrimination and Make the World a Smarter, Better Place, May 2018. https://www.lisboncouncil.net/publication/publication/148-the-ethics-of-artificial-intelligence-how-ai-can-end-discrimination-and-make-the-world-a-smarter-better-place.html

[11] Onora O’Neill, TED Talk, June 2013. https://www.ted.com/talks/onora_o_neill_what_we_don_t_understand_about_trust

[12] DeepMind, AlphaFold: Using AI for scientific discovery, December 2018. https://deepmind.com/blog/alphafold/

[13] Wired, Machine learning invades the real world on internet balloons, February 2017. https://www.wired.com/2017/02/machine-learning-drifting-real-world-internet-balloons/

[14] Richards, Thorlby, Fisher and Turton, Unfinished business — An assessment of the national approach to improving cancer services in England 1995–2015, November 2018 (UK cancer survival rates are poor not because of poor treatment but because of delays in identification and referral for treatment). https://www.health.org.uk/sites/default/files/upload/publications/2018/Unfinished-business-an-assessment-of-the-national-approach-to-improving-cancer-services-in-england-1995-2015.pdf

[15] Forbes, Google’s DeepMind has an idea for stopping biased AI, March 2018. https://www.forbes.com/sites/parmyolson/2018/03/13/google-deepmind-ai-machine-learning-bias/#7efb56b46829

[16] CNET, In Amazon Go, no one thinks I’m stealing, 26 October 2018. https://www.cnet.com/news/amazon-go-avoid-discrimination-shopping-commentary/

[17] Personal communication, 2018.

[18] The Economist, The Economist at 175 — Reinventing liberalism for the 21st century, 13 September 2018. https://www.economist.com/essay/2018/09/13/the-economist-at-175

[19] Tom Standage, Regulating artificial intelligence, The World in 2019, December 2018. https://worldin2019.economist.com

[20] Hosuk Lee-Makiyama, Briefing note: AI & Trade Policy, 2018. https://euagenda.eu/upload/publications/untitled-189277-ea.pdf

[21] Brian Williamson and Mark Bunting, Reconciling private market governance and law: A policy primer for digital platforms, January 2018. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3188937

[22] Gillian Hadfield, Rules for a Flat World, Oxford University Press, 2017.