Redistricting as an NP-Hard Problem

NYU Center for Data Science
Oct 11, 2018

Exploring the Computation of Fair Maps

Shrouded in the kind of ennui-inducing terminology only politicians can dream up, gerrymandering is the practice of redrawing district boundaries with a mind for political gain. The hand redrawing district lines holds immense power in deciding the outcomes of future elections, which makes gerrymandering central to the game of political scheming: it heavily influences election results at the state level. The recent Supreme Court case, Gill v. Whitford, placed this traditional method of redrawing district lines for political gain under fire. Though the Supreme Court has not yet put forth a ruling regarding fair, bipartisan redistricting, researchers have taken up the case’s unanswered but implicit question: “What is an appropriate metric to detect partisan gerrymandering?” In a recent publication, Moore-Sloan Data Science fellow Soledad Villar, Richard Kueng of the Institute for Quantum Information and Matter at the California Institute of Technology, and Dustin G. Mixon of the Ohio State University’s Department of Mathematics explored this question by asking whether it is computationally feasible to find ways to redistrict both fairly and legally.

Typically, two techniques, packing and cracking, factor into gerrymandering. Packing refers to concentrating the opposition party’s constituents in a single district, thereby reducing their sway in the remaining districts, while cracking refers to spreading the opposition party’s constituents thinly across many districts, to similar effect. Among other approaches to counter this kind of strategizing, the plaintiffs in Gill v. Whitford leveraged the proposition that election results should approximate a state’s partisan proportional breakdown, and used metrics like the “efficiency gap” to quantify the deviation.
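
The “efficiency gap” has a simple arithmetic definition: each party’s wasted votes (every vote cast for a losing candidate, plus every winning vote beyond the majority threshold) are tallied, and the gap is the difference between the two parties’ wasted votes as a share of all votes cast. The Python sketch below uses made-up vote counts, not real election data, to show how packing and cracking inflate one party’s wasted votes.

```python
# Sketch of the efficiency gap for a two-party race; vote counts are illustrative.

def efficiency_gap(districts):
    """districts: list of (votes_a, votes_b) tuples, one per district.

    A vote is "wasted" if it is cast for the losing party, or cast for the
    winning party beyond the simple-majority threshold. The efficiency gap is
    the difference in wasted votes as a fraction of all votes; a positive value
    here means party A's votes are wasted at a higher rate.
    """
    wasted_a = wasted_b = total = 0
    for votes_a, votes_b in districts:
        district_total = votes_a + votes_b
        threshold = district_total // 2 + 1  # votes needed for a majority
        if votes_a > votes_b:
            wasted_a += votes_a - threshold  # surplus votes for the winner
            wasted_b += votes_b              # all votes for the loser
        else:
            wasted_b += votes_b - threshold
            wasted_a += votes_a
        total += district_total
    return (wasted_a - wasted_b) / total

# Party A is "packed" into one lopsided district and "cracked" in the rest:
print(efficiency_gap([(90, 10), (40, 60), (40, 60), (40, 60)]))  # 0.305
```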

Villar et al. “prove that deciding whether there exists a fair redistricting among legal maps is NP-hard.” NP-hard (non-deterministic polynomial-time hard) problems are, by definition, “at least as hard as NP-complete problems”: every problem in the class NP can be reduced to an NP-hard problem in polynomial time, so an efficient algorithm for any one NP-hard problem would yield similarly efficient algorithms for the entire class. The heart of the issue is that no known algorithm solves the worst cases of these problems in polynomial time, and it is widely believed, though not proven, that none exists; this is the P versus NP question. In practice, the community at large pragmatically assumes this hardness, which underpins the robustness and utility of pursuits like cryptography.
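
To see what NP-hardness means for redistricting in practice, consider the naive approach: enumerate every possible assignment of precincts to districts and check each one. The sketch below is a generic illustration of that blow-up, not the authors’ construction; the predicates `is_legal` and `is_fair` are stand-in parameters. The search space grows exponentially with the number of precincts, and NP-hardness says that, unless P = NP, no algorithm avoids this kind of worst-case behavior.

```python
# Generic brute-force search over redistricting plans (illustrative only).
from itertools import product

def brute_force_search(n_precincts, n_districts, is_legal, is_fair):
    """Try every assignment of precincts to districts and return the first one
    that is both legal and fair, or None. Worst case: n_districts ** n_precincts
    candidate maps must be examined."""
    for assignment in product(range(n_districts), repeat=n_precincts):
        if is_legal(assignment) and is_fair(assignment):
            return assignment
    return None

# Even a toy map with 100 precincts and 8 districts has 8**100 assignments:
print(f"{8**100:.2e}")  # roughly 2.04e+90 candidate maps
```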

In concluding that deciding whether a fair, legal redistricting map exists is NP-hard, Villar et al. have demonstrated that further research is required to solve this problem, since a computationally naive approach is insufficient. In their work, Villar et al. define “fair” to mean “that a party of interest receives at least some prescribed level of representation,” and “legal” to mean that “all districts have the same population” and “satisfy a mild notion of geographic compactness.” With these criteria in mind, Villar et al. show that even when finding a legal redistricting is simple, deciding whether a fair redistricting exists among the legal maps is NP-hard.
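
The hardness result concerns a decision problem: given these definitions, does any map that is both legal and fair exist? Verifying a single candidate map against the two criteria is easy; the difficulty lies in searching over all legal maps. The sketch below spells out roughly what such a per-map check might look like. The data layout, population tolerance, and handling of compactness are illustrative assumptions, not the paper’s formal model.

```python
# Hedged sketch of per-map "legal" and "fair" checks (illustrative assumptions).

def is_legal(district_populations, districts_are_compact, tolerance=0.01):
    """Legal: all districts have (nearly) the same population and every district
    passes a geographic-compactness test, assumed to be computed elsewhere from
    the district geometry."""
    target = sum(district_populations) / len(district_populations)
    balanced = all(abs(p - target) <= tolerance * target for p in district_populations)
    return balanced and districts_are_compact

def is_fair(party_votes_by_district, total_votes_by_district, required_seats):
    """Fair: the party of interest carries at least `required_seats` districts."""
    seats = sum(1 for won, total in zip(party_votes_by_district, total_votes_by_district)
                if 2 * won > total)  # strict majority in that district
    return seats >= required_seats

# Toy example: four equal-population districts; the party of interest needs 2 seats.
print(is_legal([1000, 1000, 1000, 1000], districts_are_compact=True))  # True
print(is_fair([600, 450, 300, 550], [1000] * 4, required_seats=2))     # True
```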

By Sabrina de Silva
