The Case for the Cardinal Voter
The following is not evidence that philosophical zombies are real — just evidence that I can turn myself into one in order to turn something in that looks like what modern philosophy is supposed to look like. Like, it reads okay and should make sense if you’re into the lore. But I’m arguing with ghosts here, throwing an argument out into the aether and hoping someone out there says “oh hey, I didn’t know I was wrong to think about pain this way — now I’m going to do marginally better work as a result of this insight”. Anyway, here’s something I did for a grade for an audience of a few people who really seem to like this sort of thing.
Modern societies have made a number of sweeping changes to the organisation and structure of democratic governance, but surprisingly little attention has been paid to the mechanisms by which we select representatives — the whole pipeline from culture to brains to votes. Perhaps a tacit, convenient admission lies at the bottom of this: democracy seems to be working well enough, in spite of it all. Yet we are also aware that the average voter is dissatisfied with the outcomes of elections and unlikely to participate directly in the democratic process. Perhaps this is inevitable, but there is evidence that what we’re missing might not be human spirit, but rather a voting system that better serves our interests. But first, we should determine whether humans are even capable of rational voting behaviour.
So, can humans vote rationally? It depends on where and how voting decisions are made and manifested. Generally, humans can make simple ordinal (order-based) judgements and act on those. I prefer the colour red to blue and green to red; therefore, you should also expect me to prefer green to blue, and to act in accordance with this preference. Yet there is evidence that humans do not maintain this kind of logical consistency — transitivity is often violated. Most of this evidence flows from choice theory research, a branch of economics that deals with predicting and modelling how humans make decisions in different contexts and under different pressures. There is also a great deal of corroborative evidence from neuroscience, where we even have the luxury of watching competing values clash across brain regions. In many cases, complex deliberations boil away much of that complexity once it becomes necessary for a human to make a choice.
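To make the transitivity claim concrete, here is a minimal sketch of how a choice-theory experiment might scan a set of pairwise choices for preference cycles. The colour preferences are invented for illustration, not experimental data:

```python
from itertools import combinations

def transitivity_violations(preferences):
    """Find intransitive triples in a set of pairwise preferences.

    `preferences` maps (a, b) -> True when a is chosen over b.
    """
    items = {x for pair in preferences for x in pair}
    violations = []
    for a, b, c in combinations(sorted(items), 3):
        # A 3-cycle can run in two directions; checking both covers all cases.
        for x, y, z in [(a, b, c), (a, c, b)]:
            if (preferences.get((x, y)) and preferences.get((y, z))
                    and preferences.get((z, x))):
                violations.append((x, y, z))
    return violations

# A transitive chooser: green > red, red > blue, and so green > blue.
consistent = {("green", "red"): True, ("red", "blue"): True, ("green", "blue"): True}
# An intransitive chooser: green > red, red > blue, yet blue > green.
cyclic = {("green", "red"): True, ("red", "blue"): True, ("blue", "green"): True}

print(transitivity_violations(consistent))  # []
print(transitivity_violations(cyclic))      # one cycle detected
```

The experimental literature finds patterns like the second chooser more often than a rational-agent model would permit.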
Regardless of what is truly the case inside our heads and hearts, most modern voting systems begin with ordinality, the safe default. First-past-the-post (winner-takes-all) tallying is a particularly blunt ordinal method, in which there’s only one winner and all the other ordering is simply discarded. This is by far the most common single-winner voting method (IDEA, 2008). Beyond that, some areas use more literal ordinal systems such as ranked choice voting (e.g., instant runoff), where voters rank the candidates in order of preference.
Moving slightly away from full ordinality, there are representative elections, where party affiliation guarantees representation — giving a layer of insulation from pure ordinality. Then there are the more ambitious Condorcet-friendly, utilitarian-friendly systems, which diminish or outright discard ordinality in favour of cardinality (absolute values, which can overlap or greatly diverge). Approval voting is one of the more innocuous of these, allowing voters to approve of any number of candidates on a ballot. With approval voting, the implicit cardinality of each vote is either 0 or 1. Still more ambitious systems grant voters greater degrees of cardinality, such as STAR voting, which allows voters to give each candidate a score from 0 to 5 — six glorious levels of cardinality.
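The mechanical differences between these tallies are small enough to sketch in a few lines. The candidate names and ballots below are hypothetical, and ties are resolved naively:

```python
from collections import Counter

def plurality(ballots):
    """First-past-the-post: each ballot names one candidate; biggest pile wins."""
    return Counter(ballots).most_common(1)[0][0]

def approval(ballots):
    """Approval: each ballot is a set of approved candidates (0 or 1 each)."""
    tally = Counter()
    for approved in ballots:
        tally.update(approved)
    return tally.most_common(1)[0][0]

def star(ballots):
    """STAR: each ballot scores every candidate 0-5; the two highest-scoring
    finalists go to an automatic runoff decided by per-ballot preference."""
    totals = Counter()
    for scores in ballots:
        totals.update(scores)
    (a, _), (b, _) = totals.most_common(2)
    a_pref = sum(s[a] > s[b] for s in ballots)
    b_pref = sum(s[b] > s[a] for s in ballots)
    return a if a_pref >= b_pref else b

print(plurality(["Ada", "Ada", "Bo", "Cy"]))            # Ada
print(approval([{"Ada", "Bo"}, {"Bo"}, {"Bo", "Cy"}]))  # Bo
print(star([{"Ada": 5, "Bo": 3, "Cy": 0},
            {"Ada": 0, "Bo": 4, "Cy": 5},
            {"Ada": 1, "Bo": 4, "Cy": 2}]))             # Bo (wins the runoff 2-1)
```

Note how much of each ballot survives the tally: one bit per candidate pair in the first case, one bit per candidate in the second, and a full score profile in the third.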
Which of these voting methods is best, if any? Is there an objectively optimal method? Before we can even begin to rally for any of these methods, we must first ask another, more fundamental question: would a more effective method improve or even optimise our democracies? Unfortunately, an answer to this question is still not quite within our grasp, not fully. This does not, however, mean that we should not at least prod around the edges of the question, or that we should hold off on testing our voting methods to get a good sense of the overall direction in which we’re headed.
To get at the nature of the big-picture question, I’ll borrow a term from the artificial intelligence safety research stable: Coherent Extrapolated Volition (CEV). CEV is described as follows by its creator, Eliezer Yudkowsky:
In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted (Yudkowsky, 2004).
This idea is not new but is just as powerful and pressing as it has ever been. What decisions might we make were we better versions of ourselves? Democracy can readily be tied to this question, as an instrument in effecting such a collective volition. Whether it is up to the task may in fact be a matter of minds.
Cardinality in motion
Neuroscience has been curiously unconcerned with the science of decision-making at scale, so much so that economists have felt a need to invent a new interdisciplinary field just to tackle these questions more fully: enter neuroeconomics, which, like an onomatopoeia, is exactly what it sounds like. Economists aspire to make wide-scale predictions about human behaviour, particularly behaviour that entails moving resources about. Neuroscientists aspire to understand the brain as a system — what it can do and how it does those things. Naturally, a marriage of the two is understanding how humans make economic decisions, from the outside to the inside and back out again through behaviour.
This marriage has paid off already, giving birth to dozens of publications both directly experimental and meta-analytical — all with the hopes of finding exactly how (and where) it is that humans make decisions. Many of the early results are summarised in Paul Glimcher’s book “Foundations of Neuroeconomic Analysis” (Glimcher, 2010). To summarise a summary, I’ll take just a few low-resolution snapshots of that which concerns my interests here — namely, whether humans make decisions ordinally or cardinally.
According to Glimcher, the answers (as you would expect) vary. For example, we store the values of our choice options relative to our subjective baseline (where preference is positive, neutral/indifferent, or negative) in our sensory systems:
The shifting baselines, or reference points, of all sensory encoding systems require that vertebrates produce some degree of irrationality in their choices — some violations of axioms such as transitivity are unavoidable in systems that operate using these informationally efficient techniques for encoding the properties of the outside world (Glimcher, 2010).
There are a few takeaways here. First, these values are highly subjective and unreliable — relative values are not constantly tracking differences in the baseline, leading to randomness and noise that would not be present in a perfectly rational, stationary system but which make sense for a system that navigates complex, shifting physical environments. This does not doom democracy by itself, of course. The kinds of choices being made in these experiments are necessarily more intuitive and less deliberative than voting entails, meaning the variance over time might be accounted for when it comes time to anchor ourselves to actual candidates through voting. Second, more importantly, this research was not done specifically on voting behaviour — just on perceptual and motor-driven actions. Voting preferences go through additional layers of screening before manifesting themselves as the voter’s hand checking yes or no.
Another key takeaway is that values are not encoded ordinally, though they can be output ordinally, folded into clean pairwise comparisons when a choice demands it. Yet those comparisons need not be consistent: if I prefer A over B and B over C, it is not necessarily the case that I prefer A over C. It would be logical, but this is not always true in our brains or our behaviour. This would be quite troubling for democracy, as it predicts that increased choice complexity will induce cognitive randomness. Such a thing might not be fatal to the overall system (which can accommodate noise), but this noise turns out to be capable of stochastically flipping the overall outcome — an effect called (no surprise) flipping (Hild, Jeffrey, & Risse, 1998). Still, these values are not always folded into ordinal comparisons — they can be represented cardinally in behaviour, without losing all of their resolution. In other words, while it is sometimes the case that we round to simple order-based comparisons, this is not always the case. Humans can, under the right conditions, express fine-grained absolute values.
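Flipping can be illustrated with a toy simulation (invented numbers, not the Hild et al. model): when two options sit near-equal in true value, perceptual noise alone decides which one tops an ordinal tally.

```python
import random

def noisy_election(true_values, n_voters=101, noise=1.0, seed=0):
    """Each voter perceives every option's value with independent Gaussian
    noise, then casts a single ordinal vote for their perceived favourite."""
    rng = random.Random(seed)
    tally = {c: 0 for c in true_values}
    for _ in range(n_voters):
        perceived = {c: v + rng.gauss(0, noise) for c, v in true_values.items()}
        tally[max(perceived, key=perceived.get)] += 1
    return max(tally, key=tally.get)

# Two options the electorate values almost identically.
values = {"A": 1.00, "B": 0.98}
winners = {noisy_election(values, seed=s) for s in range(50)}
print(winners)  # across different noise draws, the winner itself flips
```

The aggregate outcome here is not tracking the (tiny) true difference; it is tracking the noise.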
Returning to Glimcher’s analysis, when we model expected subjective values (i.e., utility), the frontal cortex and striatum generate non-relative values within our choice set. This allows us to compare objects we have never before compared directly to each other. If true, this shows that humans can encode and express cardinal preferences. And if we assume these preferences can be drawn out by a voting system, we can (and should) adopt a voting system which properly harvests this richer data.
There are, however, risks with this approach. Cardinal models such as the Von Neumann–Morgenstern utility theorem (Neumann & Morgenstern, 1953) demand that certain constraints be met in order to guarantee optimal utility. Among these is completeness, which means (for voting purposes) that the voter must have well-defined preferences for each candidate. And transitivity expects the aforementioned A > B, B > C ➔ A > C outcome, in every case. Still more constraints remain, but even if just one of these first two is continually unmet, we are guaranteed suboptimal utility outcomes and risk precipitating flipping, or worse (Hild, Jeffrey, & Risse, 1998).
Alternatively, we can stick with the tried-and-true ordinal voting standard, even if we think cardinally. Even with a rich set of preferences to draw upon, we always have an out: just convert those into ordered preferences. But I advise against this. Intentionally discarding real data about human preferences in the interest of avoiding disruption to the status quo is voting the baby out with the bathwater.
First-past-the-post voting is constantly failing us
Recall that I described a broad goal of democracy in terms of capturing our coherent extrapolated volition — creating the world we would want if we wanted what was best for us. As is all too clear, there are obstacles to this. One of these is partisanship, which suppresses individual preferences and promotes tribal divides. First-past-the-post (FPP) systems are uniquely effective at stoking this tendency by pushing out viable third-party alternatives. This is actually rational voting behaviour: in an FPP system, a single-winner election with more than two candidates rationally demands strategic voting. A failure to vote for the most electable candidate whom you don’t hate raises the chance that the candidate whom you do hate wins (in game-theoretic terms, losing the game). A third party cannot win unless there is a general perception that the third party is a majority party, and this is exacerbated by wider candidate fields.
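The spoiler dynamic behind this strategic pressure is easy to see with invented numbers. Suppose 60% of an electorate prefers one bloc but splits its sincere votes across two similar candidates:

```python
from collections import Counter

# Hypothetical FPP electorate: a 60% bloc split across two similar candidates.
sincere = ["L1"] * 35 + ["L2"] * 25 + ["R"] * 40
print(Counter(sincere).most_common(1)[0])  # ('R', 40): wins despite 60% opposed

# Strategic voters abandon the weaker bloc candidate for the "electable" one.
strategic = ["L1"] * 60 + ["R"] * 40
print(Counter(strategic).most_common(1)[0])  # ('L1', 60)
```

Every L2 supporter who votes sincerely is, in effect, helping R win, which is exactly why rational voters converge on two parties under FPP.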
FPP also actively promotes divisive and unlovable candidates and strategies, as it’s not actually necessary to get anywhere near a majority in races with three or more candidates — it’s only necessary that one candidate gets more votes than each of the others. As a result, a common outcome is that a significant percentage of the population is completely dissatisfied with the winner. This can potentially be devastating for the project of democracy, which relies on optimistic, engaged citizens who feel capable of appointing representatives who represent their CEV (Anderson, 2009).
FPP voting is the default mode for voting in the Western world, yet you would be hard-pressed to find a political scientist, game theorist, or neuroscientist who will defend it on its own merits — its flaws are numerous and well-documented (Hamlin, 2019). These flaws are ameliorated by its more broadly ordinal cousins (such as instant-runoff voting), but they do not disappear completely. The reason has nothing to do with weaknesses in human rationality. To the contrary, the weakness is inherent to the system itself. Even a perfectly rational population would find itself squeezing out third parties and voting strategically — and this doesn’t get better when the population is less than perfectly rational. Game theory predicts this; Arrow’s Impossibility Theorem formalises exactly such weaknesses (Arrow, 1950). And while it is beyond the scope of this piece to explain the mathematical support in detail, let it suffice to say that all ordinal systems have been shown to produce imperfect utilities in realistic environments. This at least puts them on the same imperfect ground as cardinal systems. However, there’s reason to believe cardinal systems are both instrumentally and non-instrumentally preferable to ordinal ones — and on more solid ground.
The case for cardinality
As already discussed, cardinality is objectively more data-rich than ordinality. You can always fold cardinal data into ordinal data, but the reverse is not true. And while formal models for maximising utilitarian preference expression remain thorny obstacles to practical reform — often on the grounds that going from a flawed system to another flawed system simply isn’t worth the effort — it must be noted that there’s no guarantee that these formalised models accurately capture the whole of voting reality. In fact, it would be surprising were that the case.
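The asymmetry between the two data types is easy to demonstrate: folding cardinal scores into a ranking is a one-way operation, and very different electorates collapse onto the same ordinal ballot. The scores below are invented for illustration:

```python
def to_ordinal(scores):
    """Fold cardinal scores into a ranking; absolute magnitudes are discarded."""
    return sorted(scores, key=scores.get, reverse=True)

lukewarm = {"A": 3, "B": 2, "C": 1}   # voter is nearly indifferent
polarised = {"A": 5, "B": 1, "C": 0}  # voter strongly backs A, rejects C
print(to_ordinal(lukewarm))    # ['A', 'B', 'C']
print(to_ordinal(polarised))   # ['A', 'B', 'C'] (the intensity is gone)
```

No function can run this fold in reverse: once both voters look identical on paper, no tally can treat them differently.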
Across several publications, Hillinger argues as much, demonstrating what he considers to be the weaknesses in the canonised models, and promoting cardinal systems. Speaking of these formalisms, Hillinger is incisive:
The claim that the formalism is about an empirical phenomenon, such as collective choice, can only be substantiated by examining the conceptualization that led to the formalism (Hillinger, 2005).
In other words, the power of any model is grounded in how isomorphic it is to the thing it’s modelling. Arrow’s Impossibility Theorem was specifically built to model agents with weakly held preferences, as is the case in ordinal voting systems (where candidate ordering is always close, forcing weak preferences). But it does not fit strongly held preferences, such as those we now know we encode in our brains and, just as importantly, desire to express through democracy. This desire is non-trivial — a system that discards it as noise or flattens it to a marginal input is hardly representative. Moreover, tallying is based on cardinal values, even if individual votes are presented and captured ordinally. As with what happens when you enlarge a static jpeg or png image, expansion of ordinal data into cardinal data produces preference blurriness.
Hillinger suspects that our continued obeisance to ordinal systems lies in a misunderstanding of Arrow’s Impossibility Theorem and others of the era, which produced a chilling effect on reform. While these theories are mathematically sound, they are not perfect fits for voting — again, they are not isomorphic to voting. For example, in approval voting, the model expects our preferences to be bound to each other. If I like candidate A, then this means something about how much I like candidate B. And this would indeed be the case if we thought brains modelled preferences in the same way that they model motor actions in response to emotions. But Glimcher and co.’s work demonstrates that we can encode preferences independently and absolutely. If this is true, Arrow’s Impossibility Theorem does not map to cardinal voting. And if that is true, we should be able to demonstrate that cardinal voting produces a better outcome for democracy than ordinal voting, assuming no further demons manifest themselves. After all, we can always reduce our resolution, should higher resolutions prove problematic.
More than this, Hillinger argues that we have no justification for discarding absolute voter preferences, all else being equal. If democracy really is supposed to be representative, we need to justify cases in which we override individual interests for the sake of the democracy. For cardinal voting, that justification can be found in the claim that non-Pareto optimal systems must necessarily give way to value judgements (Robbins, 1932). In other words, where math cannot give us a perfect answer, we rely on assumed collective values.
It should become immediately apparent that these collective values must somehow become collected — and what better way than through an effective voting system? Here we have an obvious catch-22 — we need votes! — unless we decide that there’s no need to systematically collect values. Whatever mechanisms we use will be subject to the same scrutiny, charged with the same mission a good voting system would be charged with: track our coherent extrapolated volition. And so far, we have done just that with our ordinal systems, flawed as they are. We built robust democracies on these — therefore, it’s not impossible to get some value out of game-theoretical dropouts. But ordinal systems are both technically worse and morally dubious. I propose we build out cardinal methods, more mathematically sound and — necessarily — cognitively enriched. A fundamental change to democracy might prove the most effective tool we have — it’s easier (and cheaper) to prevent disaster than to clean it up afterwards.
Anderson, E. (2009). Democracy: Instrumental vs Non-instrumental Value. In T. Christiano, & J. Christman, Contemporary Debates in Political Philosophy (pp. 213–227). Chichester, United Kingdom: Blackwell Publishing Ltd.
Arrow, K. J. (1950). A Difficulty in the Concept of Social Welfare. Journal of Political Economy, 58(4), 328–346.
Glimcher, P. (2010). Locating and Constructing Subjective Value in the Front of the Brain. In P. Glimcher, Foundations of Neuroeconomic Analysis. New York: Oxford University Press, Inc.
Hamlin, A. (2019). Spoiler Effect: Top 5 Ways Plurality Voting Fails. Retrieved from The Center for Election Science: https://www.electionscience.org/voting-methods/spoiler-effect-top-5-ways-plurality-voting-fails/
Hild, M., Jeffrey, R., & Risse, M. (1998). Preference Aggregation After Harsanyi. In M. Fleurbaey, M. Salles, & J. A. Weymark, Justice, political liberalism, and utilitarianism: Themes from Harsanyi and Rawls (pp. 198–219). New York, USA: Cambridge University Press.
Hillinger, C. (2005). The Case for Utilitarian Voting. Homo Oeconomicus, 295–321.
IDEA. (2008). Electoral System Design. In A. Reynolds, B. Reilly, & A. Ellis, The New International IDEA Handbook. Stockholm, Sweden: International IDEA.
Neumann, J. v., & Morgenstern, O. (1953). Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.
Robbins, L. (1932). An Essay on the Nature and Significance of Economic Science. London: McMillan & Co., Limited.
Yudkowsky, E. (2004). Coherent Extrapolated Volition. Machine Intelligence Research Institute. Retrieved from http://intelligence.org/files/CEV.pdf