Obvious Manipulability of Voting Rules

Alexander Lam
Games, Agents, and Incentives
4 min read · Apr 29, 2020

The paper can be viewed here.

A presentation of the paper can be found here.

TL;DR: In the paper “Obvious Manipulability of Voting Rules”, we analyse the strategic behaviour that can occur in voting systems when voters lack information about the other voters’ preferences. To model this, we use the concept of “obvious manipulations”, recently proposed by Troyan and Morrill. They assume that agents know only the possible range of outcomes that could arise from each report, as they are unaware of what other agents have reported. We find that almost all common voting rules cannot be manipulated under these assumptions, though we identify certain rules that can still be manipulated in pathological scenarios. This contrasts with the famous Gibbard–Satterthwaite theorem, which states that no non-dictatorial voting rule is strategyproof.

People have been voting on public decisions for centuries, and voting has recently found use in AI and multiagent systems as a way to aggregate the preferences of multiple agents. Generally, each voter submits a ballot specifying a complete ordering of the available alternatives, ranked from their most preferred outcome to their least preferred outcome. A voting rule is then applied to these ballots to select a winning alternative. For example, the plurality rule chooses the alternative that has been ranked first by the most voters.
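As a concrete sketch, the plurality rule takes only a few lines of Python. The alphabetical tie-breaking below is an assumption made for illustration; the post does not fix a tie-breaking scheme.

```python
from collections import Counter

def plurality(ballots):
    """Return the alternative ranked first by the most voters.

    Each ballot is a sequence of alternatives ordered from most to
    least preferred. Ties are broken alphabetically (an illustrative
    assumption; no tie-breaking rule is fixed in the text).
    """
    first_place_counts = Counter(ballot[0] for ballot in ballots)
    top_score = max(first_place_counts.values())
    return min(a for a, c in first_place_counts.items() if c == top_score)

# Three voters over alternatives a, b, c: a has two first-place votes.
print(plurality([("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]))  # prints a
```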

As previously stated, the Gibbard–Satterthwaite theorem states that no non-dictatorial voting rule is strategyproof. In other words, unless the voting rule allows a single agent to choose the winner, there will always exist an instance where an agent can vote untruthfully to improve the chosen alternative (with respect to their truthful preferences). However, this observation has some limitations. Specifically, such a manipulation requires the agent to have sufficient information about the other voters’ preferences to compute a profitable deviation. This assumption deviates from the common real-world scenario where agents have little to no information about what other voters have reported.

To overcome this, we apply the concepts proposed in “Obvious Manipulations” by Troyan and Morrill (2020). The authors consider a ‘cognitively limited’ agent, which lacks both information about the other agents and the capability to compute a profitable manipulation. They assume such an agent knows only the range of possible outcomes that could occur from each specific report. This range is an interval whose endpoints are the best and worst possible outcomes, taken respectively as the supremum and infimum over all possible reports from the other agents. For example, if an agent has preference order a ≻ b ≻ c, the best-case outcome arises when every other agent votes such that a wins, while the worst-case outcome has every other agent voting such that c wins. If it is impossible for a or c to win, then both the best- and worst-case outcomes have every other agent voting such that b wins.

We now say that a voting rule is “obviously manipulable (OM)” if there exists an instance where an agent can improve either its best- or worst-case outcome through an untruthful report. If no such instance exists, we say that the rule is “not obviously manipulable (NOM)”. Returning to our example, if outcome c can be chosen under the agent’s truthful report but cannot be chosen under some untruthful report, then that untruthful report improves the worst-case outcome: this is an obvious manipulation, and the voting rule would be described as OM.
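For tiny elections, these definitions can be checked directly by brute force: enumerate every possible report for one agent, compute the best and worst winners over all ballots the other agents could cast, and look for an untruthful report that improves either endpoint. The sketch below is only feasible for very small numbers of alternatives and voters; the plurality rule and its alphabetical tie-breaking are illustrative assumptions, not choices made in the paper.

```python
from itertools import permutations, product
from collections import Counter

def plurality(ballots):
    """Plurality with alphabetical tie-breaking (illustrative choice)."""
    counts = Counter(b[0] for b in ballots)
    top = max(counts.values())
    return min(a for a, c in counts.items() if c == top)

def outcome_range(rule, report, truth, n_others):
    """Best and worst possible winners, w.r.t. `truth`, when one agent
    submits `report` and n_others other agents vote arbitrarily."""
    winners = set()
    for others in product(permutations(truth), repeat=n_others):
        winners.add(rule([report, *others]))
    rank = {a: i for i, a in enumerate(truth)}  # lower index = preferred
    return min(winners, key=rank.get), max(winners, key=rank.get)

def has_obvious_manipulation(rule, truth, n_others):
    """True if some untruthful report improves the agent's best- or
    worst-case outcome relative to reporting truthfully."""
    rank = {a: i for i, a in enumerate(truth)}
    best_t, worst_t = outcome_range(rule, truth, truth, n_others)
    for report in permutations(truth):
        if report == tuple(truth):
            continue
        best, worst = outcome_range(rule, report, truth, n_others)
        if rank[best] < rank[best_t] or rank[worst] < rank[worst_t]:
            return True
    return False

# With 3 alternatives and 2 other voters, no untruthful report improves
# the best or worst case under plurality at this preference:
print(has_obvious_manipulation(plurality, ("a", "b", "c"), 2))  # prints False
```

Note that a single `False` here only certifies this one instance; establishing NOM requires the argument to hold for every instance, as in the almost-unanimity proof discussed below.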

The voting rule is NOM if no untruthful vote improves the best- or worst-case outcome, regardless of what the other agents vote. If there is an instance where an untruthful vote improves either the best- or worst-case outcome, the rule is OM. Note that strategyproofness implies NOM, but not vice versa: a rule can be NOM even though, at some specific profile of the other agents’ votes, an untruthful vote would improve the outcome.

Intuitively, most common voting rules are NOM. We can set aside the question of improving the best-case outcome: under a truthful report, the best case is that every other voter ranks your top choice first, in which case any reasonable voting rule selects it, and this cannot be improved upon. Now define an ‘almost-unanimous’ rule as one which selects an alternative whenever all but one of the voters rank it first. Such a rule is NOM: if everyone else ranks your least preferred alternative first, it will be chosen regardless of how you change your report, so you cannot improve your worst-case outcome by any manipulation. Consequently, many voting rules, such as Condorcet extensions, STV and plurality with runoff, satisfy NOM. We also find that most reasonable positional scoring rules are NOM, including the Borda rule and the Dowdall rule.

However, not every voting rule is NOM; for example, we find that the k-approval rule is OM under certain conditions. Under the k-approval rule, each voter gives 1 point to each of their k most preferred candidates and 0 points to the remaining candidates, and a candidate with the highest total score wins. Denoting by m the number of alternatives and by n the number of voters, the rule is OM when m is sufficiently large, k is relatively large, and n is small. Specifically, the k-approval rule is OM if and only if n < (m-2)/(m-k).
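Given this characterisation, checking whether a particular (n, m, k) makes k-approval obviously manipulable reduces to a one-line inequality. The sketch below implements both the rule and the condition; the alphabetical tie-breaking is an illustrative assumption, and the inequality is taken directly from the statement above (with k < m so the denominator is positive).

```python
def k_approval(ballots, k):
    """Each voter gives 1 point to each of their top-k alternatives;
    a highest-scoring alternative wins (alphabetical tie-break, an
    illustrative assumption)."""
    scores = {}
    for ballot in ballots:
        for alt in ballot[:k]:
            scores[alt] = scores.get(alt, 0) + 1
    top = max(scores.values())
    return min(a for a, s in scores.items() if s == top)

def k_approval_is_om(n, m, k):
    """The characterisation stated above: k-approval is obviously
    manipulable iff n < (m - 2) / (m - k), for k < m."""
    return n < (m - 2) / (m - k)

# With m = 10 alternatives: 9-approval is OM for up to 7 voters,
# while 3-approval stops being OM as soon as there are 2 voters.
print(k_approval_is_om(7, 10, 9))  # 7 < 8/1 -> True
print(k_approval_is_om(2, 10, 3))  # 2 < 8/7 -> False
print(k_approval([("a", "b", "c"), ("b", "c", "a")], 2))  # prints b
```

The inequality matches the prose: a large m with k close to m makes (m-2)/(m-k) large, so even moderately sized electorates can leave the rule obviously manipulable.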

In summary, we have identified several sufficient conditions for a voting rule to be NOM, and shown that the k-approval rule is OM under certain conditions. These results imply that most reasonable voting rules are NOM, contrasting with the impossibility result of the Gibbard–Satterthwaite theorem. This follows from our zero-information assumption, and suggests that in practice, voter manipulation may be limited when ballots are kept hidden. There are several further research directions to pursue. A Bayesian approach, in which agents hold prior beliefs over the other voters’ preferences, would lie between our zero-information setting and the perfect-information assumption of strategyproofness. Furthermore, the concept of obvious manipulability could be applied to other fields of mechanism design, and computational aspects could be considered: given an instance, what is the complexity of computing whether or not it is NOM?


I am a PhD student in Computer Science interested in AI for Social Good and Computational Social Choice.