Opinion

AI Ethics: Conflicting Visions of Algorithmic Personalization

Personalization as a tool for optimal control of humans or a path to human emancipation?

Travis Greene
Towards Data Science



In an ironic shift, the self-expression, freedom, and mutually beneficial collaboration promised by the adaptive web and Web 2.0 technologies have been one-sidedly co-opted by corporations. Meanwhile, the contentious assumptions of economic theory have filtered into personalization research with little critical pushback. (Note: Readers interested in the role of competing ideologies in AI research can read our open-access Perspective published in Cell Patterns.)

Personalization is ostensibly a suite of technologies designed to filter out irrelevant information and provide recommendations adapted to an individual person’s tastes and needs. But personalization on commercial platforms such as Facebook, Google, and Amazon is increasingly viewed through the lens of engineering optimization and control (I discuss the control implications of reinforcement learning-based personalization here). Large corporate platforms approach personalization through a self-interested, neoliberal economic logic of competition, catalyzed by technological imperatives of innovation and creative destruction.

Applying engineering formalism and economic theory to billions of human users on a global scale raises ethical issues around personal autonomy and human self-determination, some of which are now being translated into formal legislation in places such as the European Union with the GDPR, the recently proposed AI Act, and the Digital Services and Digital Markets Acts.

Economic Justifications of Personalization

In economic theory, platforms are conceived as multi-sided markets aimed at the commodification of user activities and content. Interested third parties, particularly advertisers, benefit from “thick markets” of human attention provided by the platform’s algorithmic standardization and governance structures. Specialization serves mutual advantage through trade. Advertisers contract out the complex work of machine-learning-driven market segmentation to the platform, with its top-tier data scientists, computational power, and massive volumes of implicit behavioral data, freeing them to invest their scarce resources more optimally in developing personalized messages and offers served via the platform’s recommendation infrastructure.

Neoclassical economics celebrates markets’ ability to naturally generate optimal allocations of goods and services because all transactions are assumed to be voluntary and mutually beneficial. If they were not, rational actors would not transact. Hence, personalization empowers rational consumers (and by extension, advertisers) by allowing them access to larger choice sets (i.e., markets), facilitating information exchange among peers and service and product providers, and reducing search, transaction, and decision-making costs.

Personalization is thus the technical realization of this market-driven view of economic freedom in which persons can get what they personally prefer — no matter the genealogy or legitimacy of those preferences. Personalization, so viewed, promotes freedom because it is based on voluntary, and thus fair, exchange: rational users are assumed to voluntarily interact with the platform and leave behavioral trace data which can then be processed and exchanged with interested outside parties on the basis of users’ revealed wants and needs. As Nobel laureate Milton Friedman argues in his 1980 book Free to Choose, this market-based process of voluntary transaction promotes economic exchange, prosperity, and ultimately human freedom.

But the rosy story backed by decades of Nobel-prize-winning economic theory leaves out a lot of important details. As the Facebook whistleblower Frances Haugen describes, economic incentives can motivate corporate platforms to obscure personalization research that could negatively affect their bottom line. This includes evidence that applications of personalization technology can have damaging psychological effects on individuals and destabilize societies.

The Aim of a Science of Personalization: Control or Freedom?

In modern times we narrowly equate knowledge with science. Might this be a symptom of a larger ideology of scientism self-interestedly promoted by powerful technocrats, economic experts, and data scientists, to justify their elevated social status and decision-making power?

According to German philosopher Jürgen Habermas, the answer is yes. In his 1971 book Knowledge and Human Interests, he claims that human knowledge falls into three broad categories, each expressing a basic human interest. His later theory of communicative action contends that neglecting the interests beyond the prediction, manipulation, and control of objects leads to cultural, social, and personal developmental pathologies. Until we realize the aim of science is not exclusively control and prediction, we remain unfree to determine our own future. We risk becoming morally stunted by our own technology.

Habermas insists empirical-analytical knowledge serves our instrumental interests in prediction, manipulation, and control of nature. Practical knowledge and mutual understanding of our fellow humans serves our hermeneutical interests. And finally, emancipation via critical, self-reflective, and deliberative thought serves our critical interests. For Habermas, a critical science reflects back on itself and its methods to see its own limitations and thereby grows and enlarges its epistemological scope in the process. Critical science is meta-cognition at the level of the human species: it not only tells us where our scientific knowledge is likely robust, but it also tells us where it is likely frail and subject to revision in the near future.

Drawing on the model of Freudian psychotherapy and the stages of logical and moral development proposed by Piaget and Kohlberg, emancipatory science aims to free us from our baser and often repressed compulsions, instincts, and neuroses — aspects of our earlier and often ugly animalistic nature when physical power still governed social relations and might made right. Today we have progressed beyond the Hobbesian state of nature, partly thanks to our ability to control and manipulate nature to serve our basic material needs. Empirical-analytical knowledge to the rescue!

But Habermas worries that in our increasingly globalized world dominated by the neoliberal ideology of free-market competition, money increasingly coordinates our social interactions and reduces social coordination to a game-theoretic, consequentialist logic of egocentric utility calculation.

The Person in Personalization: Where Art Thou?

Personalization, as the word itself suggests, rests on an implicit notion of the person. According to various philosophers, the person may be one or all of the following:

  • political animal
  • moral agent
  • rational, self-conscious subject
  • possessor of particular rights
  • being with a defined personality or character
  • a narrative-driven and uniquely self-interpreting animal
  • a self-organizing informational system capable of evolving and changing over time in uniquely self-determining ways

Kant: Freedom Through Adherence to Universal Moral Law

The Enlightenment philosopher Immanuel Kant saw the human condition as inherently conflicted, in a way pre-dating modern dual-process psychology. Kant’s tombstone famously reads:

Two things fill the mind with ever new and increasing admiration and awe, the more often and steadily we reflect upon them: the starry heavens above me and the moral law within me.

On the one hand, we are physical creatures, objects at the mercy of Newton’s universal laws of motion. In this respect, we are no different from the clumps of mindless atoms that make up rocks floating in space. Yet we also possess a capacity for reason, a nearly limitless capacity to be aware of and articulate the very rules governing our actions. This normative, self-referential, and “higher” aspect of human existence is what grounds the value and unique moral status of the human person. It is what gives us a unique personality and is the focus of what we typically refer to as a “self.” Even those struggling with addiction will likely agree that despite momentary lapses in willpower, we usually associate our “true selves” with the outcome of conscious, reasoned self-reflection, not with our instincts, desires, and immediate needs.

Kant believes actions done from instincts and self-interested motives have, strictly speaking, no moral value. Ever the rationalist, Kant believed moral principles must be derived prior to any empirical experience, interest, or desire we might have. Indeed, the good will (the will that does duty for duty’s sake) is the only unconditional or intrinsic good there is, but it requires autonomy (that is, self-legislating ability) to have any teeth. The will must simultaneously create its own laws and bind itself to them for it to be good. Objectively valid for all rational beings, Kant calls such universal laws the categorical imperative. To be rational means to effectively make one’s will and the categorical imperative one and the same.

Freedom thus paradoxically results from binding one’s will to the universal moral law within, a law which all rational beings are innately capable of following.

This categorical imperative is the bedrock of all moral duties, Kant claims, and functions as a decision procedure for evaluating our motives for action. Like the notion of logical consistency, the categorical imperative endorses those rules for action that would not, when universalized by all rational agents, result in a contradiction. In other words, only duties that can be translated into a categorical imperative without being self-defeating or contradictory count as moral duties. Do we have a duty to make false promises, for instance? No, because the very concept of promising would disintegrate if no one kept their promises.

In contrast to Kant’s abstract and logical approach to morality, the orthodox economic thinking that increasingly drives algorithmic personalization follows the empiricism of philosopher David Hume, who famously argued that reason is, and ought only to be, the slave of the passions. This division is what I refer to as a conflict of visions regarding the foundations of algorithmic personalization.


Conflicting Visions of Personalization

To better make sense of this tension between humanistic and economic perspectives relevant to personalization, I’d like to adapt a distinction made by Thomas Sowell in his insightful book A Conflict of Visions.

Sowell, an economist by training, distinguishes between two conflicting political and moral visions in Western thought. These visions are all-encompassing worldviews that bias not only one’s ethical and political theories but also one’s understanding of the nature and scope of scientific knowledge. The two visions are mutually incompatible: where one sees a duck, the other sees a rabbit. For instance, the standard economic thinking behind the notion of consumer sovereignty implies that because we desire something, it must be good. Yet Kant claims the opposite: something is good, therefore we must desire it (insofar as we are rational beings).

I surmise the emergence of the field of AI ethics is a manifestation of this conflict of visions. As such, the rapid growth of work and interest in AI ethics should be interpreted as expressing dissatisfaction that personalization technology has up to now neglected essential elements of the unconstrained vision. We can roughly associate these unconstrained elements with what Habermas calls our hermeneutical and emancipatory interests in achieving mutual understanding and freedom from our baser, animalistic nature.

The Constrained Vision

Until now, personalization technology has largely been developed in pursuit of the constrained vision. The constrained vision follows an intellectual thread first articulated by Scottish Enlightenment thinkers such as Adam Smith and David Hume. It prioritizes correlations over causation, elevates the intuitive, empirical knowledge implicit in habits and traditions over explicit reason, values observable consequences over unobservable intentions, and weighs ideals against the costs required to achieve them. The perfect should not be the enemy of the good. Scarcity and finitude are viewed as fundamental aspects of human existence, so questions about the relative efficiency of various ways of distributing scarce resources are prioritized. The constrained vision accepts tradeoffs as an inevitable fact of life.

From an ethical perspective, the constrained view relies heavily on consequentialist, instrumentalist reasoning about right action and values, and takes a “system-level” utilitarian view favoring properties of aggregates over individuals. The constrained vision shares a philosophy of science similar to both behaviorism and (logical) positivism in that it looks for universally unchanging, law-like explanations of human behavior by reference to observable, verifiable empirical features of the environment, avoiding dubious metaphysical claims about unverifiable, unobservable internal causes. As with neoclassical economics, the constrained vision seems to suffer from a clear case of physics envy, whereby the materialist science of physics embodies the methodological and epistemological ideal of human inquiry.

The Unconstrained Vision

I claim that personalization technologies have largely been developed independently of considerations of the unconstrained vision, whether for practical or ideological reasons. But this may slowly change as the field of AI ethics grows.

The unconstrained vision descends from French and German Enlightenment-era ideas developed by Condorcet and Immanuel Kant. It elevates reason and conscious deliberation above intuition, habit, and tradition; highlights abstract ideals over the actual sacrifices needed to achieve them; and marvels at the power of the cultivated human mind to arrive at self-knowledge and universal truths.

The unconstrained vision views certain actions and states of affairs as intrinsically good, owing to their objective properties as seen by ideal impartial observers, or as the result of an idealized procedure of deliberation or universalization. The instrumental or consequentialist reasoning of the constrained vision is generally viewed as inferior because it tends to uncritically take ends as given (but by whom?) and can justify treating persons as objects, mere means towards someone else’s end.

In its political, ethical, and legal forms, the unconstrained vision often endorses placing strict rules on what can be done to individuals in pursuit of the common good, often employing the concept of rights as “trumps” to delineate this sphere of individual sanctity that cannot be crossed no matter the utility or consequences.

In terms of a philosophy of science, the rationalism of the unconstrained vision goes beyond observable correlations and explains human behavior by reference to internal causes of action, which involve unobservable intentions, and whose contents may only be fully accessible and intelligible to those participating in a shared form of life or culture. The unconstrained vision endorses a realist or even transcendental realist metaphysics.

AI Ethics: An Oxymoron?

Some philosophers have called the concept of business ethics an oxymoron. Does this also apply to the idea of AI ethics? AI embodies cutting-edge knowledge in statistics, computer science, mathematics, and engineering. Yet ethics is inherently conservative. There is a reason why philosophy students still read 2,000-year-old texts, while engineers don’t.

But the apparent conflict of visions at the heart of personalization is not a reason to give up hope about the future of personalization. The idealism of the unconstrained vision declares that technological and moral progress are not and should not be mutually exclusive. Expanding the education of data scientists and engineers to include engagement with key ideas from philosophy and social science is an important first step. So is including AI researchers from a variety of cultural and interdisciplinary perspectives. These changes will be necessary if personalization is to be (re)imagined as an emancipatory technology, not merely an engineering tool of instrumental control and optimization of human objects.

We must ensure that in developing and deploying technologies of personalization we do not sacrifice our intrinsically good capacity for recognition of the “moral law within” for the mere instrumental goods of efficiency, convenience, or profit. Critical science and AI ethics play a role in directing us towards applications of technology that advance our hermeneutical and critical interests and further — not stunt — human moral development.
