How Inaccurate Elections Just Might Fix Our Democracy

An exploration of how, by introducing random errors into the results of elections, our government might actually become more representative of its people

Nir Zicherman
13 min read · Jul 14, 2022
Photo by Element5 Digital on Unsplash

Cookies and Democracy

In Mr. B’s third grade class, Fridays are celebrated with an afternoon cookie treat. Students choose between chocolate chip cookies and snickerdoodles. And Mr. B, being a proud American and believing in the principles of democracy, always leaves it up to a vote. Of his 20 students, 11 are super fans of chocolate. Only 11 out of 20. And so it doesn’t matter what the other nine want, because, predictably, chocolate chip wins. Every single time. With more than half the popular vote. And it has for the entire school year.

One can imagine that extrapolating this out to other classrooms across the country would cause the snickerdoodle factory to go out of business. That’s because Mr. B’s cookie election process doesn’t seem to respect the basic laws of supply and demand. In particular, it ignores the demand of anyone who doesn’t fall into the majority. On a one-off basis, that might be fine. But over a long enough period of time, something seems off when nine out of 20 students never get to eat their choice of cookie over the entire school year.

If this sounds familiar (and you live in the United States or another federal republic), it might be because representative democracies tend to work the same way. I could write a lot about why this phenomenon can lead to problematic consequences. But instead, I’m going to focus on solutions. In particular, I’m going to look at other fields where the same exact problem exists, and where it’s already been repeatedly solved in one specific way.

Most people won’t like this proposed solution though. In fact, most people will probably dismiss it as ludicrous. But bear with me, because there is an argument to be made for why the solution to an inherent problem in representative democracies is having elections that are more inaccurate.

Gradients

Let’s set aside the third grade class and politics for a moment and consider a seemingly different topic (that is actually the same thing in disguise). Imagine a gradient transitioning from black to white that looks like this:

And now imagine that we had a retro printer that could only print things in pure black or pure white. No gray, no subtlety. If you were to instruct the printer, with its binary color options, how to put this gradient on a piece of paper, you would probably tell it to do something like this: “If the pixel you need to print is closer to white, make it white. If it’s closer to black, make it black.” In other words, if a specific dot in the image is closer to one extreme than the other, color it at the closer extreme. (Sound familiar? Each pixel is changing according to the results of a winner-take-all election.)
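
For the more technically inclined, here’s a minimal sketch of that rule in Python (the scale of 0.0 for black to 1.0 for white is my own convention for illustration, not anything a real printer mandates):

```python
def quantize(pixel: float) -> float:
    """The retro printer's rule: snap each gray value to the nearest extreme."""
    return 1.0 if pixel >= 0.5 else 0.0

# A smooth left-to-right gradient, 16 pixels wide (0.0 = black, 1.0 = white).
gradient = [x / 15 for x in range(16)]
printed = [quantize(p) for p in gradient]

print(printed)  # the left half collapses to pure black, the right half to pure white
```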

The printer does what it’s told, and what comes out is this:

Clearly, something has been lost. If you were to show a stranger this printed picture, he or she would never think it originated from the gradient above. In a way, this is kind of tragic. We took what was complex and nuanced and natural, and we forced it to be something simple and artificial.

This retro printer has a fundamental limitation called quantization. You’ve probably heard the word quantum before, meaning the smallest discrete amount that something can be. Quantization means taking a body of information and forcing it into fewer options than are needed to represent all of it. In the case of the printer, it’s a black-or-white choice that has no awareness of all the shades of gray in between.

This concept primarily arises in the study of digital signal processing. But, I would argue, it manifests itself in many other important ways. We’ve already seen how it can be applied to voting on cookies in a classroom. Quantization is a general issue arising any time you attempt to take a lot of information and reduce it to a limited number of options. Most often, this happens when you attempt to take something analog (like a smooth gradient of grays) and reduce it to something digital, binary, multiple choice.

Ever since I learned about quantization, I see it everywhere. Its impact on us is massive, not only at the level of printed gradients and Cookie Fridays. It rises all the way up to the level of American democracy. And frankly, I don’t understand why more of us aren’t talking about this problem.

Broken Elections

One might wonder what my political agenda is. So let me first state that I don’t have one. This is not an issue attributable to one ideology or another.

To establish that this is not a liberal problem or a conservative problem, but a root problem that affects all of us, I’ll offer two somewhat arbitrarily chosen states that fall on either side of the political divide in America. Let’s dig into some data about recent congressional elections.

On the one hand, we have New York, which has only had Democratic Senators for the past 24 years. On the other hand, we have Tennessee, which has only had Republican Senators for almost three decades. That means that for an entire generation, the legislators elected to fight for the populations of these states have represented only the interests of the left and the right, respectively. (It’s worth noting that New York and Tennessee are not even the most extreme examples of this, but the point can be well argued without needing to rely on the worst cases on either side.)

Now let’s dig deeper, just as Mr. B chose to do with his class to learn more about cookie sentiment. In the past two decades of Senate elections in both of these states, the results ranged from the winner leading by a roughly 2–1 margin to as close as 54–46. In both of these states (which are representative of many states), somewhere between a third and nearly half of eligible voters have had zero representation of their political ideology in the United States Senate for over two decades. That is staggering. If you live in New York or Tennessee, your Senators are the extreme pixels of a retro printer that has no awareness that your state’s population is a nuanced gray.

Even if you’re a Democrat living in New York, or a Republican living in Tennessee, you should be alarmed by the fact that your fellow Americans in any partisan state might feel completely unrepresented by their leaders and have no recourse to fix that simply because of where they live. People often talk about a system that favors the rich over the poor, or one socioeconomic group over another. What about a system that can favor a certain class of people because of their geography, or more explicitly, because of the fact that their neighbors happen to share their political ideology? This nation was founded on principles of healthy discourse and debate, and yet there are ways in which it favors creating pockets of ideology.

If there existed a state with close to 100% membership in one party, one could wrap one’s head around being philosophically and morally all right with that state’s Senators always coming from that party. But as the party split in the population moves closer to the halfway mark, that becomes harder to stomach. Keep in mind that there are two Senators per state. Doesn’t it seem like, in a state with a 54–46 Democrat–Republican split, there should be one Senator from each party most of the time? That seems conceptually correct, but it rarely happens in practice. In fact, our system is built to discourage it in many ways.

This analysis focuses on the Senate, because it’s the easiest way to see this problem manifest itself in real world results. In reality, this same logic extends to many parts of our system: at the federal, state, and local levels; in both houses of Congress, the presidency, and even the courts; with the electoral college, gerrymandering, and, more generally, the two-party system.

In thinking about this problem, I realized that even though it’s never really positioned this way, it is effectively the same quantization error problem that our retro printer has. Infinite complexity is being mapped to a handful of options (usually two political parties). This issue of political theory has a direct analogy to an issue of digital signal processing. So the natural evolution in thinking is to ask, “Well, if there is an analogous problem, is there an analogous solution?” And there is. And it’s wild.

Could the way to improve our democratic system be to make elections less accurate?

Make Some Noise

Let’s travel back in time to World War II. And let’s stop worrying about elections or democracy for a moment, and think instead about the war raging in Europe and the Pacific. Among the many other things they did, Allied military engineers spent countless hours developing mechanical computers to perform complicated calculations aboard aircraft. These machines would be put on planes and used to precisely determine bomb trajectories.

The engineers were bewildered to discover that the calculations were more accurate in practice on the planes themselves than they were in the labs. How odd, given that the lab should offer ideal conditions for testing and development. As it turns out, these machines had quantization issues arising from their various pieces occasionally sticking together. But up in the air, there was a consistent rattle of parts caused by the vibration of the plane during flight. The parts would constantly move around, making sticking less likely and calculations more accurate.

This accidental discovery is one of the most beautiful and counterintuitive breakthroughs I’ve ever learned about. This is how mankind learned that if you introduce a base level of random noise to a bunch of data, it can cause the aggregation of that data to be more accurate, not less. Put differently, adding a random layer of intentional error into a data set can actually make the overall data set closer to reality.

This amazing concept is called dithering. It is the process of adding noise to reduce the errors introduced by quantization. To see how it works, let’s revisit the gradient we looked at earlier:

Let’s introduce a whole bunch of random noise. That means that for every pixel, we’re going to randomly make it darker or lighter. And the result looks like this:

Compared to the original gradient, this looks awful. But what happens when we tell the retro printer (the one that must round every pixel to black or white) to print this new version? In other words, what happens when we quantize the original gradient with added noise? What happens is this:

Isn’t that amazing? The output is a more accurate reflection of the original. And this was accomplished by first making it less accurate (by adding noise).

The principle of dithering states that introducing random errors to an input prior to quantization can actually result in an output that is more representative of the original input.
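
As a toy illustration of that principle (my own sketch, not the specific dithering algorithm any printer actually uses), consider quantizing a patch of uniform 30% gray, with and without noise, and comparing the averages:

```python
import random

def quantize(pixel: float) -> float:
    return 1.0 if pixel >= 0.5 else 0.0

def dither_and_quantize(pixel: float, amplitude: float = 0.5) -> float:
    # Nudge the pixel by a random amount *before* snapping it to black or white.
    return quantize(pixel + random.uniform(-amplitude, amplitude))

random.seed(0)
gray = 0.3  # a patch of uniform 30% gray, 10,000 pixels

plain = [quantize(gray) for _ in range(10_000)]
dithered = [dither_and_quantize(gray) for _ in range(10_000)]

print(sum(plain) / len(plain))        # 0.0  -- every pixel went black; the gray is gone
print(sum(dithered) / len(dithered))  # ~0.3 -- the gray survives in the aggregate
```

Without noise, every pixel of the patch rounds down to black and the gray is lost entirely; with noise, roughly 30% of the pixels come out white, so the patch as a whole still reads as 30% gray.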

Dithering is ubiquitous in digital signal processing. But once you realize the similarity between the quantization problems of gradients and those of our elections, you can naturally see the similarity between their solutions.

A Modest Proposal

Let’s explore one such possible solution. Imagine that every election had a two-step process. First, as now, the votes get tallied. But then the results get modified randomly, according to a systematic methodology that applies equally across all states, via an algorithm agreed upon in advance. In the short term, every individual election will be less accurate. But in the long term, the aggregate of all elections will be closer to the true distribution of political views within the constituency.
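
In code, such a two-step process might look something like this sketch (the `noise` parameter and the choice to model the adjustment as a swing of points between the two parties are my assumptions; the exact algorithm is deliberately left open here):

```python
import random

def dithered_winner(rep_share: float, dem_share: float, noise: float = 2.0) -> str:
    """Step one: take the tallied vote shares (in percent).
    Step two: apply a pre-agreed random adjustment before declaring a winner.
    """
    delta = random.uniform(-noise, noise)  # points shifted from Democrat to Republican
    return "R" if rep_share + delta > dem_share - delta else "D"

print(dithered_winner(51.0, 49.0))  # usually "R", but occasionally "D"
```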

Take New York as an example. If each Senate election over the past two decades had included a random adjustment to the vote, such that the outcome had a slight chance of changing every single time, then on a long enough timeline the distribution of political party representation would more closely match the ratio of all votes cast for each party across all of those elections.

For example, to keep it simple, suppose that each of the past 100 elections in State X had a 51–49 Republican–Democrat breakdown. The Republican would win every time in the real world. But in this dithered world, suppose we introduced a randomly selected adjustment between -2% and +2% to the vote every time, shifting that many points from one party to the other. The result flips only when more than one point shifts away from the Republican (that is, when the adjustment falls below -1%), which covers a quarter of the -2% to +2% range. Statistically, that means roughly 25 of the 100 elections would be won by a Democrat. That seems to be a bit closer to the “will of the people”, does it not?

What if there were even more noise? With a -4% to +4% adjustment, the flip zone (an adjustment below -1%) covers three eighths of the range, so about 37 of the 100 elections would favor the Democrat. We’re getting closer to 51–49.
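
Those percentages are easy to sanity-check with a quick simulation (again my own sketch, using the same uniform swing model as above):

```python
import random

def flip_rate(margin: float, noise: float, trials: int = 100_000) -> float:
    """Fraction of dithered elections won by the trailing party.

    `margin` is the leader's edge in points (51-49 -> 2.0). Each trial draws
    a uniform swing in [-noise, +noise]; the result flips when the swing
    moves more than margin/2 points away from the leader.
    """
    flips = sum(1 for _ in range(trials)
                if random.uniform(-noise, noise) < -margin / 2)
    return flips / trials

random.seed(42)
print(flip_rate(2.0, 2.0))  # ~0.25  -> roughly 25 of 100 elections flip
print(flip_rate(2.0, 4.0))  # ~0.375 -> roughly 37 or 38 of 100 flip
```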

Of course, if the gap between the two candidates were much wider, it would be significantly harder for a noise adjustment to bridge it, and the likelihood of a flipped result would go down. The Republican would continue to win most of the time, which makes sense.

The suggestion is that we will be more accurately represented by our leaders over a long-term timeline if we are willing to accept that each individual election might seem completely unfair, because the real winner lost and the loser won.

It’s important to clarify the parallel to dithering in the digital signal processing world. The election is not the gradient. The election is one single dot in the gradient. All of the elections over many years are the gradient. The goal is to get that overall picture to be more accurate, and we do so by introducing random noise at each pixel, each election.

Now, there are certainly some problems to note in this naive approach. First, the above example assumes a pure two-party system. One of the real benefits of an election process like this, however, is that it gives third-party candidates a real incentive to enter the race. And dithering still works whether you quantize down to two options or any other number.

Second, in the type of dithering that takes place in digital signal processing, there are many ways the noise can be generated and applied; it is not always a mere random number. I can’t pretend to be qualified to speak to the pros and cons of each approach, or to which of them should be used here.

And I am intentionally not being prescriptive about how much noise should be applied to each election to keep things fair and realistic. The more noise we add, the more we run the risk of disappointing individual elections. And at a certain point, the scales tip so far that the overall picture itself is distorted in unintended ways. I don’t know where the fine line is that keeps the dithering helpful without becoming harmful. But it seems clear that, whatever the methodology, it has to remain consistent over many elections.

This idea requires one fundamental rule to work, one that humans are notoriously bad at adhering to. It requires follow-through. It requires that the disappointing outcomes of elections here and there not cause us to deviate from this long-term strategy, because this long-term strategy does theoretically work. Yet it only works if it’s truly long term. It’s the Law of Large Numbers: it takes a large quantity of data to produce a truly accurate result.

So…

I imagine one common objection to this idea would sound something like this: “But I have one vote, and my vote should count for something. You’re suggesting making my vote worth less.” I would argue the opposite: if you only have one vote, why let it be drowned out by a broken process? Might this proposal actually make your vote worth more, if you’re willing to think about it on a long enough timescale? This type of process may in fact encourage more people to vote, even to vote for third parties, because it would mean there is a chance, however small, of things working in their favor. And for the party in charge, there is a chance, however small, of losing its position of authority. So voters who identify with the majority are more incentivized to vote to preserve that majority’s authority. And all elected officials are more incentivized to avoid extremist politics and grab a larger share of the electorate, rather than rely on winning by a slim majority.

It’s the same reason why people like to gamble or play the lottery. Hey, you never know.

P.S. This whole concept is, of course, riddled with assumptions and inconsistencies. But nevertheless, it’s a line of reasoning that has forced me to think outside the box. It’s a thought experiment. Einstein never did intend to put a twin on a high-speed rocket. Schrödinger never intended to poison a cat (I hope).
