
Coercion Resistant Cast-as-Intended Verifiability for a Computationally Limited Voter

Voting’23 Workshop Paper Summary

Published in Scytl · May 9, 2023


Coercion, vote selling, and bribery have been regarded as threats to the free expression of the voter's will since the early days of democracy. Over the years, great effort has been put into limiting these threats: the secret ballot, the controlled environment of polling places, the privacy of voting booths, etc. The problem was contained for a while, but technological progress never stopped and constantly challenged the security and convenience of established practices. For example, advances in video recording eroded the privacy of polling places, while increasing human mobility called the convenience of in-person voting into question. It was only natural that people would bring technology into the electoral process to address rising concerns and usability demands.

The introduction of computers allowed paper ballots to be replaced with digital envelopes and made it possible to vote from virtually anywhere. However, internet voting brought new challenges: an uncontrolled voting environment and the need to prevent a malicious voting device from misbehaving. In particular, it became crucial to ensure that a computer has not altered a voter's vote, a check known as cast-as-intended verification. Fortunately, there are many ways to achieve it: tracking numbers, return codes, QR codes holding the encryption randomness, zero-knowledge proofs of plaintext correctness, etc. Yet all these cast-as-intended verification methods give voters some amount of extra information, which can enable vote-selling or coercion. The contradiction is unavoidable: one property requires giving voters as little feedback about the vote's content as possible, while the other demands proving that the voter's intention is encrypted correctly.
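To make the tension concrete, consider a verification mechanism that hands the voter the encryption randomness (for instance, inside a QR code). The sketch below is not taken from the paper; it uses textbook ElGamal with toy, insecure parameters purely to show why that randomness acts as a receipt: anyone who learns it can re-encrypt a claimed vote and compare the result with the published ciphertext.

```python
import secrets

# Toy, insecure parameters chosen only so the arithmetic is visible;
# a real deployment would use a standardized large group.
p = 2**127 - 1   # a Mersenne prime used as the modulus, purely illustrative
g = 3            # illustrative base element

sk = secrets.randbelow(p - 2) + 1   # election secret key
pk = pow(g, sk, p)                  # election public key

def encrypt(vote: int, r: int) -> tuple[int, int]:
    """Textbook ElGamal encryption of an (encoded) vote with randomness r."""
    return pow(g, r, p), (vote * pow(pk, r, p)) % p

# The voting device encrypts the voter's choice and publishes the ciphertext.
r = secrets.randbelow(p - 2) + 1
published = encrypt(vote=42, r=r)

# If the voter also receives r, anyone told a claimed vote can re-encrypt it
# and compare: the randomness becomes a receipt that a coercer or vote buyer
# can check deterministically.
def coercer_check(claimed_vote: int, claimed_r: int, ciphertext: tuple[int, int]) -> bool:
    return encrypt(claimed_vote, claimed_r) == ciphertext

print(coercer_check(42, r, published))  # True: the receipt proves how the vote was cast
print(coercer_check(17, r, published))  # False: a lie about the vote is detected
```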

In our paper for the 8th Workshop on Advances in Secure Electronic Voting, we focus on this contradiction and outline a definition of coercion-resistant cast-as-intended verification in the simplest (but most realistic) case for electronic elections: the only capability voters have is remembering and comparing strings they see. For simplicity, we do not restrict the length of the memorized data, even though, realistically speaking, there is only so much information a voter can safely remember. We assume a voter cannot perform complex computations without using some device; for example, the voter would require assistance to compute hash functions, perform re-encryption, verify zero-knowledge proofs, etc. We also exclude the possibility of extracting randomness or other non-output information from the e-voting system. While in some cases a coercer might demand that voters extract the randomness used to encrypt their choice, we believe such instructions are hard to follow for an average, non-technically-savvy voter.

While our setting may seem overly restrictive, we believe this simplest possible case is the most realistic assumption one can make about voters. It is true that stronger assumptions about voters' abilities allow more elegant and secure constructions. However, one may wonder: if the voter already has a trusted device or can perform cryptographic operations in their head, why not use those capabilities for vote-casting directly?

After presenting our formal definition of coercion-resistant cast-as-intended verification for a computationally limited voter, we propose a simple protocol that achieves both properties simultaneously. Since string generation is beyond the capabilities of voters in our model, the voter relies on external aid: an official election device (OED). The OED is an online entity trusted to participate in the protocol honestly, to choose a random nonce for each voter, and to publish it only when that voter presses a button (and not a moment before). Overall, the scheme is simple: it only requires voters to remember and compare strings of numbers, and to press the button that triggers the OED's publication at the appropriate moment. After voting, the voter only has to check that the published strings are correct. All other ballot-correctness verifications can be performed by any (sufficiently powerful) external verifier without breaking voters' privacy.
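As a rough illustration of the roles described above, here is a deliberately simplified, hypothetical model of the interaction flow. It omits all cryptography, is not the protocol from the paper, and every class and method name is invented for this sketch; it only captures the voter's job of pressing a button at the right moment and then remembering and comparing short strings.

```python
import secrets

class OfficialElectionDevice:
    """Trusted online entity: holds one random nonce per voter and publishes it
    only after that voter presses the button (and not before)."""

    def __init__(self):
        self._nonces = {}         # voter_id -> secret nonce
        self.bulletin_board = {}  # voter_id -> nonce published after the button press

    def register(self, voter_id: str) -> str:
        nonce = f"{secrets.randbelow(10**6):06d}"  # a short numeric string
        self._nonces[voter_id] = nonce
        return nonce  # shown to the voter through the voting device

    def button_pressed(self, voter_id: str) -> None:
        # Publication happens only now, never earlier.
        self.bulletin_board[voter_id] = self._nonces[voter_id]


class Voter:
    """Computationally limited voter: can only remember and compare strings."""

    def __init__(self):
        self.memorized = None

    def memorize(self, displayed_string: str) -> None:
        self.memorized = displayed_string

    def verify(self, published_string: str) -> bool:
        return self.memorized == published_string


# Usage: the voter memorizes the string displayed during voting, presses the
# button at the appropriate moment, and afterwards checks the published value.
# (In the actual protocol the strings are tied to the encrypted ballot; here the
# nonce is passed through unchanged to keep the sketch minimal.)
oed = OfficialElectionDevice()
alice = Voter()

alice.memorize(oed.register("alice"))
oed.button_pressed("alice")
print(alice.verify(oed.bulletin_board["alice"]))  # True unless something was tampered with
```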

This article was written by Tamara Finogina, Cryptography Researcher at Scytl.

