Partisan Gerrymandering is a Nebulous Concept


Justice John Paul Stevens has no love for gerrymandering. It's a practice he has dismissed as "outrageously unconstitutional," and one he would most likely abolish outright if given the chance. The absurdity of congressional districts with convoluted borders drawn solely as a political tool is well known and well derided, and Stevens has signaled his intent to do something about the situation.

A 1986 Supreme Court ruling already established partisan gerrymandering as an illegal practice, but determining that a particular district has been gerrymandered for partisan gain is harder than one might think. It seems easy enough to say that a district whose boundary loops around in a small, wiry tendril was probably contrived for political advantage; humans don't naturally settle in such bizarre patterns. Yet crafting a definition of a partisan gerrymander is tricky. Naturally occurring boundaries can be abstract, nonsensical shapes that just happen to lean toward one political persuasion or another. It's only when those shapes become particularly disjointed, or just too conveniently beneficial to one party (like when a district happens to include a candidate's house), that the corruption becomes obvious.

Which is why the definition is so difficult. Cases have been brought before the Supreme Court multiple times, with the justices unable to agree on a workable standard. Much of this has to do not with politics or inaction, but with the mathematical complexity of subdividing a geographic area. In a story I previously wrote for The Atlantic Cities on computational gerrymandering, I spoke with professor Micah Altman, who works extensively on the subject. He detailed how there are an incalculable number of ways to divide an area, so algorithms can't simply be run to determine a single optimized design. Defining optimum districts is not merely complex; it is computationally intractable.
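The scale of the problem is easy to illustrate. Even before adding geographic constraints like contiguity, the number of ways to partition n units into k groups (the Stirling number of the second kind) explodes combinatorially. A minimal sketch, with toy precinct counts chosen purely for illustration:

```python
from math import comb, factorial

def stirling2(n, k):
    """Ways to partition n labeled units into k non-empty, unlabeled groups."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n
               for j in range(k + 1)) // factorial(k)

# Just 10 precincts split into 4 districts already admit tens of
# thousands of partitions; 100 precincts yield a ~59-digit number.
print(stirling2(10, 4))   # 34105
print(len(str(stirling2(100, 4))))
```

Contiguity and equal population prune this space, but not enough to make exhaustive search, let alone provable optimization, feasible at the scale of a real state.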

And even if there were an algorithm that could spit out a single ideal district, that algorithm could just as easily auto-generate a district with the same outcome: one party winning over another. This is exactly what happened in Mississippi in the 1960s. Authorities chose a basic algorithm that automatically split up the state by average population size, and the result effectively eliminated all minority representation.
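A naive splitter along those lines is easy to write, and it is just as easy to see how it can dilute a voting bloc. A minimal sketch, with entirely hypothetical county data (the names and numbers are invented for illustration, not taken from Mississippi's actual plan):

```python
def split_by_population(counties, n_districts):
    """Greedily assign counties, in the given order, to districts of
    roughly equal total population. It considers nothing else -- which
    is the problem: a community that straddles the seam between two
    districts ends up a minority in both."""
    total = sum(pop for _, pop in counties)
    target = total / n_districts
    districts, current, current_pop = [], [], 0
    for name, pop in counties:
        current.append(name)
        current_pop += pop
        if current_pop >= target and len(districts) < n_districts - 1:
            districts.append(current)
            current, current_pop = [], 0
    districts.append(current)
    return districts

# Hypothetical counties listed north to south; a bloc concentrated
# around C and D gets sliced between the two districts.
counties = [("A", 90), ("B", 60), ("C", 80), ("D", 70), ("E", 100)]
print(split_by_population(counties, 2))  # [['A', 'B', 'C'], ['D', 'E']]
```

The districts come out numerically "fair" (equal-ish population) while still producing a politically lopsided result, which is the point: a mechanical rule is only as neutral as the criteria it encodes.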

Because there is no absolute way to define what the boundaries of an area should be, it's difficult to say when they're wrong. Most anti-gerrymandering efforts lean on basic principles of contiguity and compactness: a district should be blobbish, not stringy. But there are many different ways to define a blobbish district, and a whole host of other factors, like historical relevance and minority representation, that are impossible to pin down at an abstract level.
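Compactness itself shows the definitional trouble. One common measure, the Polsby-Popper score (4π · area / perimeter²), rates a circle at 1.0 and a long tendril near 0, but it is only one of several competing metrics, each of which can rank the same pair of districts differently. A sketch comparing a square district to a stringy one of the same area:

```python
import math

def polsby_popper(area, perimeter):
    """Polsby-Popper compactness: 1.0 for a circle, near 0 for a tendril."""
    return 4 * math.pi * area / perimeter ** 2

# A 10x10 square vs. a 100x1 strip: identical area, very different scores.
square = polsby_popper(100, 40)   # ~0.785
strip = polsby_popper(100, 202)   # ~0.031
print(round(square, 3), round(strip, 3))
```

A score like this can flag the wiry tendril, but it says nothing about whether a tidy-looking blob respects a community's historical boundaries, which is exactly the factor that resists abstract definition.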

Which is why redistricting needs some amount of manual control. Computers and algorithms can give a good approximation, but humans need to be involved at some point to decide whether a border is right. And that's why the solution is open-source redistricting.

To ensure fairness, or at least what everyone conceives of as fairness, the best approach is to put the greatest number of eyes on the process, alongside computer-assisted estimates of what a given set of borders will do. Right now, politicians in power already go through this process with marketing data and high-powered software, but behind closed doors.

If that process could be unveiled to the public at large, with advanced software to provide heuristics (a best guess at an ideal district segmentation) and a way to vote and resolve disputes (not a simple task, but a plausible one), it would be a large step toward a much fairer approach to redistricting. Even if the party in power could still wield the redistricting knife to its own benefit, the results would be readily apparent to everyone involved.
