# When drawing a line is hard

## Connecting the dots between math, technology, and law in the challenges of gerrymandering

The Supreme Court will soon hear oral arguments in *Gill v. Whitford*, a case that confronts the question of when partisan gerrymandering is, or is not, constitutional.

This case could be pivotal in drawing a bright line between acceptable and unacceptable partisan gerrymandering, and much is at stake. The upcoming 2020 decennial census will determine how congressional seats are apportioned to each state, and guide how states redraw voting districts. The Voting Rights Act was significantly diminished in 2013, and advocates fear that millions of people will be effectively disenfranchised as a result. And the country has become highly polarized, with many Americans frustrated that their voices, and their votes, are not being heard.

In cases leading up to *Whitford*, Justice Kennedy has left open the possibility that courts could help police gerrymandering “if workable standards do emerge.” Such standards may now be closer than ever, thanks in part to advances in mathematics and computing. The outcome of *Whitford* will likely revolve around at least one such measure.

Some are optimistic that computers can help solve the problem, and optimistic headlines have suggested as much.

But it’s not *nearly* as simple as those headlines suggest.

Gerrymandering isn’t just a math problem — it’s a policy fight, legal quagmire, mapping challenge, and statistical puzzle, all wrapped into one. As Tufts University professor of mathematics and gerrymandering scholar Moon Duchin has explained, much of this quantitative and technical work “isn’t mathematicians trying to liberate us from politics. This is mathematicians trying to be in conversation with politics.”


I recently attended the Geometry of Redistricting workshop, the first of several gatherings of mathematicians, legal advocates, and computer scientists devoted to working through this very conversation, and learned about some of the complexities of the issue firsthand. From the workshop sessions and the questions participants raised, it’s clear that making these interdisciplinary discussions productive and **making progress on gerrymandering will require policy professionals to get more than a bit comfortable with math and computers, and vice versa**.

In the spirit of bringing advocates and technologists closer together on this critical topic, here are some of the key technical ideas that were covered at the conference, and some of their caveats. I hope this gives you a better sense of the different ways that math and technical researchers are thinking about this issue, since these concepts will be important to the *Whitford* case, post-census redistricting, and beyond.

# What is gerrymandering?

The word “gerrymandering” was coined in 1812, combining the name of Massachusetts Governor Elbridge Gerry with “salamander,” after an oddly shaped state senate district in Essex County that critics said resembled the creature. They argued that those in power had clearly drawn the district to benefit their own party in future elections. The practice is still quite common today — so much so that some fear it has reduced electoral competition and amplified ideological polarization.

In the 1960s, a series of Supreme Court cases held that electoral systems must allocate voting power according to population — “one person, one vote” — which means that political districts must be redrawn periodically as cities grow, people move, and communities change. But with gerrymandering, even when districts include roughly equal numbers of voters, the lines can be drawn in a way that dilutes some voters’ voices.

By **packing** voters of a certain political party or demographic group into a few districts and **cracking** the rest of those communities across districts where they will be the minority, those drawing the maps (usually state legislatures) can construct districts in ways that can change the entire outcome of a vote.
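To make packing and cracking concrete, here is a toy sketch with entirely made-up vote counts, showing how the same statewide vote share can produce very different seat counts depending on how voters are grouped:

```python
# Toy illustration of packing and cracking (all numbers are made up).
# Party A wins 45% of the statewide vote in both scenarios, but the
# number of seats it wins depends entirely on how voters are grouped.

def seats_won(districts, party="A"):
    """Count districts where the given party has a majority."""
    return sum(1 for d in districts if d[party] > d["B"])

# Map 1: Party A's voters are cracked evenly across five districts,
# leaving them a minority everywhere -- A wins zero seats.
cracked = [{"A": 45, "B": 55} for _ in range(5)]

# Map 2: Party A's voters are packed into two districts they win
# overwhelmingly, wasting votes that could have been competitive elsewhere.
packed = [
    {"A": 90, "B": 10},
    {"A": 90, "B": 10},
    {"A": 15, "B": 85},
    {"A": 15, "B": 85},
    {"A": 15, "B": 85},
]

# Both maps contain exactly 225 Party A votes out of 500 (45%).
assert sum(d["A"] for d in cracked) == sum(d["A"] for d in packed) == 225

print(seats_won(cracked))  # 0 seats for Party A
print(seats_won(packed))   # 2 seats for Party A
```

The totals are identical in both maps; only the district lines differ.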

But what in one case might appear to be pernicious electoral scheming can also be precisely the opposite in another: As a court-mandated remedy to historical discrimination, “majority-minority” districts are sometimes crafted specifically to give a distinct, politically cohesive minority group the opportunity to elect a preferred candidate. And on the flip side, less overtly discriminatory **partisan gerrymandering** — where the party in power tries to draw districts that preserve their political advantage — can turn out to be a proxy for racial animus, since minority groups tend to vote along party lines.

Even before we try to *measure* gerrymandering, the complexity of the issue is clear. Motivation and context are key to judging whether districts are fairly drawn.

# The many ways to measure gerrymandering

It’s a lot less straightforward to measure gerrymandering than many might think. The experts at the conference were clear that no one-size-fits-all mathematical measure can capture all of the relevant information. But a range of different techniques have been developed, each with its own strengths and imperfections. Here are a few the conference speakers highlighted:

## The “efficiency gap”

A simple formula to detect partisan gerrymandering, the efficiency gap metric, was proposed by scholars in 2015. It compares the two parties’ “wasted” votes, totaled across all of a state’s districts. Wasted votes include all votes cast for the losing party, plus any votes for the winning party beyond what was necessary for victory.

The formula produces a percentage value that can be used to quantify the political asymmetry between two competing parties.
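As a rough sketch of that arithmetic, using hypothetical district results and treating “necessary for victory” as a bare majority (and assuming no exact ties), the wasted-vote bookkeeping looks something like this:

```python
# A minimal sketch of the efficiency gap calculation, with made-up
# district results. Wasted votes are all votes for the losing party,
# plus the winner's votes beyond the bare majority needed to win.

def wasted_votes(a, b):
    """Return (wasted for A, wasted for B) in a single district.
    Assumes no exact ties."""
    threshold = (a + b) // 2 + 1        # votes needed to win
    if a > b:
        return a - threshold, b         # A's surplus, all of B's votes
    else:
        return a, b - threshold

def efficiency_gap(districts):
    """Net wasted votes as a share of all votes cast; positive values
    indicate a map that favors Party A (B wastes more votes)."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        wa, wb = wasted_votes(a, b)
        wasted_a += wa
        wasted_b += wb
        total += a + b
    return (wasted_b - wasted_a) / total

# Hypothetical four-district state, 100 votes per district:
plan = [(75, 25), (60, 40), (43, 57), (48, 52)]
print(f"{efficiency_gap(plan):+.1%}")  # -13.0%: this plan favors Party B
```

Here Party A piles up surplus votes in its two blowout wins while narrowly losing the other two districts, which is exactly the packed-and-cracked signature the metric is designed to flag.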

The inventors of this metric are the first to admit the measure itself is not the solution, but that it’s still useful as an analytical tool. Its utility is clear in *Whitford*, where experts used the efficiency gap to show that, compared with an array of historical districting plans, Wisconsin’s 2012–2014 districting plan “lies at the very bottom of the historical distribution, at the extreme pro-Republican edge.”

> Consider, then, what the efficiency gap alone tells us: just that the Wisconsin plan favored Republicans in 2012, 2014, and 2016. Contrast this with what we learn from the analyses: 1) that the plan’s pro-Republican bias is nearly unprecedented historically; 2) that this skew can’t be attributed to Wisconsin’s political geography; and 3) that the tilt is likely to persist under any electoral environment. (Vox)

On the other hand, some critics of this metric claim it does not allow for the fact that voters can change their minds from election to election, and that it presumes people vote by party rather than by candidate. Some regions are naturally skewed by party (urban areas, for example, lean heavily toward Democrats), they point out, and the efficiency gap method fails to take such local realities into account.

## Compactness

*Compactness* is an established redistricting principle holding that districts should not be highly irregular in shape.

A district that clearly violates this principle, the thinking goes, presumably reflects a map-drawer's desire to link certain communities together while sequestering them from the rest of the electorate.

Courts have favored compactness as a concept, and have thrown out maps that violate the principle on their face, but no strict standards have yet been adopted (it’s more of an “I’ll know it when I see it” phenomenon). In part, this may be because irregular shapes can reflect specific local geography like a coastline, mountains, or islands. But some classical concepts of compactness are also just objectively difficult to quantify. Because of these particularities, lack of compactness has been treated more as a red flag than a strict rule — a trigger to ask why a district is strangely shaped and whether or not it is legally justified. Nevertheless, many have tried to come up with ways to quantify this principle so that stricter standards might be adopted.

One way to measure the compactness of a shape is to compare its area with the length of its perimeter (*isoperimetry*): shapes with highly irregular edges, which are clearly not compact, enclose relatively little area per unit of perimeter and so score low on this ratio.

This might seem pretty straightforward, since gerrymandered districts tend to look more like sprawling, tentacled ink blots than like tidy squares or circles — very low scores for a district on this measure could suggest questionable motives on the part of the people who drew that map.
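One common formalization of this ratio, often called the Polsby-Popper score, is 4πA/P², which equals 1 for a circle and approaches 0 for long, spindly shapes. A minimal sketch:

```python
import math

def polsby_popper(area, perimeter):
    """4*pi*A / P**2 -- equals 1 for a circle, near 0 for spindly shapes."""
    return 4 * math.pi * area / perimeter ** 2

# A circle scores exactly 1, by construction.
r = 3.0
print(round(polsby_popper(math.pi * r**2, 2 * math.pi * r), 3))  # 1.0

# A unit square is fairly compact...
print(round(polsby_popper(1 * 1, 4), 3))          # 0.785

# ...while a long, thin 100-by-1 rectangle is not.
print(round(polsby_popper(100 * 1, 2 * 101), 3))  # 0.031
```

The score normalizes away size: only the shape matters, which is exactly why it is so sensitive to how the perimeter is measured, as the next section explains.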

But it turns out it’s actually *really* hard to accurately measure perimeters of geographic areas.

The coastline paradox holds that the measured length of a curved edge changes as you zoom in and change the scale of measurement.

This paradox certainly applies to measuring political districts: it can be trivially easy to manipulate this particular compactness measure simply by zooming out on the map, which smooths away jagged edges and shortens the measured perimeter.
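A toy illustration of the effect, using a sawtooth “coastline” and a crude stand-in for map resolution (real GIS measurement is more involved than skipping vertices, but the direction of the effect is the same):

```python
import math

def measured_length(points, step):
    """Measure a polyline by hopping along its vertices with a stride
    of `step` -- a crude stand-in for coarser map resolution."""
    sampled = points[::step] + ([points[-1]] if (len(points) - 1) % step else [])
    return sum(math.dist(p, q) for p, q in zip(sampled, sampled[1:]))

# A jagged "coastline": a sawtooth from x=0 to x=100 with unit-height teeth.
coast = [(x, x % 2) for x in range(101)]

print(round(measured_length(coast, 1), 1))   # fine ruler: 141.4
print(round(measured_length(coast, 10), 1))  # coarse ruler: 100.0
```

The same boundary measures roughly 40% longer at fine resolution than at coarse resolution, so any area-to-perimeter score computed from it shifts accordingly.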

Another way to measure compactness is *convexity*, a metric that penalizes irregular edges and indentations while rewarding rounded shapes. A related measure, *dispersion*, quantifies how spread out a district is by averaging the distances between points within its area. But these metrics, too, are imperfect tools for analyzing actual voting districts, since they cannot make clear exceptions for districts with naturally irregular edges caused by boundaries like rivers, or with donut-like holes caused by lakes.
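As one illustrative sketch of a dispersion-style measure (my own simplification, not the specific metric the speakers used), averaging pairwise distances between sample points distinguishes a compact cluster from a strung-out shape:

```python
import itertools
import math

def dispersion(points):
    """Mean pairwise distance between sample points in a district --
    one crude way to quantify how spread out the district is."""
    pairs = list(itertools.combinations(points, 2))
    return sum(math.dist(p, q) for p, q in pairs) / len(pairs)

# The same nine sample points arranged as a compact 3x3 block
# versus a strung-out 9x1 line.
block = [(x, y) for x in range(3) for y in range(3)]
line = [(x, 0) for x in range(9)]

print(round(dispersion(block), 2))
print(round(dispersion(line), 2))   # roughly twice as dispersed
```

Note the weakness described above: a district wrapped around a lake or stretched along a river would score as “dispersed” here even if its shape is entirely natural.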

## A new technique

The Tufts-based research group that put on the conference — the Metric Geometry and Gerrymandering Group — is exploring a new measure based on a geometric characteristic called *curvature*, which doesn’t use any information about geographic borders. Instead, this method treats census tracts as nodes in a network, connecting two nodes whenever the corresponding tracts are adjacent.

When many census tracts in a district touch one another, as in dense clusters of tracts around Charlotte and Greensboro, it indicates a higher degree of compactness, whereas a narrow strip of census tracts linking several areas has a weaker structure (like a strip connecting the two cities).

The curvature method solves the problems inherent in measures that rely on irregular geographic borders. But even here, it might not necessarily be straightforward to decide whether census units are actually adjacent, depending on how those maps are drawn.

Regardless of which compactness measure ends up gaining traction, as speakers at the conference put it, all of these methods rest on “19th century” mathematics. But we’re now seventeen years into the 21st century — so where are the computers?

# Why supercomputers can’t solve this for us

Applying the metrics in the previous section, among others, computers can help map drawers and voting advocates compare the relative merits of various districting plans. Alternative maps can provide powerful critiques to dubious districting plans, illustrating the range of possibilities and the tradeoffs — involving compactness, efficiency, minority voting power, district competitiveness, and other factors — that each option would present.

In the end, though, the choice among redistricting plans depends on values, not on math. Computers can create districting options, and then compare them on specific dimensions. But the holistic question of which district map is “best” is a question that requires human judgment.

With this in mind, research scientists have developed new algorithms that run on massive supercomputers to generate as many maps as possible and surface options that measure up as favorably as possible on each of these metrics (and across combinations of metrics). That way, advocates will have alternative maps to point to, if they object to a state’s proposed map — and, clear ways of talking about what’s better in the map they favor. By quickly generating sets of example maps following the 2020 census, researchers can open up the debate over how districts should be drawn.
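The generate-and-compare loop can be sketched in one dimension, with made-up precinct populations and a single balance criterion (real tools work on two-dimensional geography and optimize many criteria at once; this shows only the shape of the idea):

```python
import random

random.seed(0)
populations = [12, 7, 30, 9, 14, 22, 6, 18, 11, 21]  # made-up precincts
K = 3  # number of districts

def random_plan(n, k):
    """Pick k-1 random cut points, yielding k contiguous districts."""
    cuts = sorted(random.sample(range(1, n), k - 1))
    bounds = [0] + cuts + [n]
    return [range(bounds[i], bounds[i + 1]) for i in range(k)]

def imbalance(plan):
    """Max minus min district population -- lower is more balanced."""
    totals = [sum(populations[i] for i in district) for district in plan]
    return max(totals) - min(totals)

# Generate many random plans and keep the most balanced one found.
best = min((random_plan(len(populations), K) for _ in range(2000)),
           key=imbalance)
print(imbalance(best))
```

Swapping in other scoring functions (an efficiency gap, a compactness score, or a weighted combination) is exactly where the value judgments discussed above re-enter: the computer ranks plans only against whatever criteria humans choose to encode.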

Computers add their own quirks to the gerrymandering problem, of course. Messy data and varying map resolutions mean that many of the ways to measure borders and compactness can be easily manipulated. Collection of the data that is fed into these computer systems is a highly political process, which can also skew results down the line — for example, if vulnerable populations are deterred from participating in the census, the places where those people live will have less voting power. And even seemingly minor programming decisions like how to round numbers can introduce unintended bias into later calculations.

(Even with some constraints, there are so many possible ways to divide up a geographic area that the numbers get too big even for supercomputers to deal with. With roughly 11 million census blocks, even after eliminating many options, more than 7×10³⁰⁰⁰⁰⁰⁰ possible district maps remain; for comparison, the sun will swallow the earth in about 5×10⁹ years.)
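Even the easiest imaginable version of the counting problem balloons quickly. Splitting a one-dimensional row of n precincts into k contiguous districts can be done in C(n−1, k−1) ways, and that count explodes as n grows — real two-dimensional maps are astronomically worse:

```python
from math import comb

# Ways to split a 1-D row of n precincts into 8 contiguous districts:
# choose 7 of the n-1 gaps between precincts as district boundaries.
for n in (10, 100, 1000):
    print(n, comb(n - 1, 7))
```

Already at a thousand precincts the count exceeds 10¹⁷, and this toy version ignores contiguity in two dimensions, population balance, and every legal constraint that makes the real search space so much harder to explore.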

Because of all these challenges, social scientists, advocates, and expert witnesses will need to tread carefully when talking about computer models and gerrymandering, because **characterizing them incorrectly could lead to problems in the courtroom**. As the conference organizers recognized, calling a map “best,” “optimal,” or “high quality” suggests that there is an agreed-upon, standard mathematical measure against which these maps have been evaluated, when in reality there is not.

Moreover, while the methods described above involve some randomness in generating new map iterations, none of today’s techniques can generate a *truly* random sample of optimal plans — so mathematicians will need to be careful not to overstate the statistical robustness of their methods. (That’s because mathematicians can’t even say for certain how many maps would be necessary to constitute a statistical sample — and part of the answer to that question is one of computer science’s most famous unsolved problems.)

And of course, like many other cases of algorithms in the public sphere, algorithms that draw new maps will need to be sufficiently transparent, accountable, and explainable that the public and lawmakers are confident they are conforming to agreed-upon measures of fairness, and that the software is doing what it claims.

Because map drawing for redistricting is always a negotiation process among legislatures and local stakeholders, experts are skeptical that we’ll ever see any sort of fully automated districting programs. But they do believe that computer sampling will be central to the future of legal standards to investigate gerrymandering and to identify fairer districting plans.

In voting, equal treatment has been defined: one person, one vote. When it comes to gerrymandering, the solution isn’t nearly so simple. Algorithms and supercomputers can’t reconcile thorny social and legal issues, so law and advocacy will remain critical, even as computers play a growing role in redistricting processes.

It was invigorating to see so many technical experts get excited about coming up with new approaches to solving such a critical social issue, especially one that will be core to the functioning of our democracy in years ahead. But the technical community knows they will need help from legal and policy experts as they bring their research out of the lab, and that tackling the issue of gerrymandering will require an incredibly interdisciplinary approach. In the meantime, *Whitford* will be a case to watch — for math geeks and legal nerds alike.