Using Data to Improve Participatory Budgeting

Rebecca Silliman
Published in On the Agenda
May 27, 2016

Last Friday a positive buzz of energy reverberated off the walls of the Tobin Community Center as young Bostonians between 12 and 25 years old gathered around tables filled with paper ballots and voting boxes. People enjoyed pizza and ice cream in between a dance performance and a raffle.

I wasn’t at a school function, community mixer or party. I was at the Participatory Budgeting (PB) Votefest event, and the youth were there to vote on how they wanted to spend $1 million of the city’s capital budget.

Votefest was also the kickoff event for the 4th International Conference on Participatory Budgeting, organized by the Participatory Budgeting Project (PBP) in partnership with the city of Boston. The conference came on the heels of Public Agenda’s release of the first-ever comprehensive analysis of PB in the United States and Canada. It brought together members of the PB community, including evaluators, implementers, elected officials, scholars and activists, who shared their experiences with PB.

Everything about the conference exemplified the enthusiasm of the PB community and encouraged sharing information, cultivating new ideas and building excitement about possible long-term social change. Conference panels included discussions about best practices for PB implementation, digital tools that have the potential to support the PB process, local PB advocacy and, central to our work in PB, the usefulness of evaluation and research.

Collecting data on PB can help communities understand and better meet their needs.

Our own Carolin Hagelskamp, who leads our PB work and is one of the authors of “Public Spending, By the People,” facilitated a panel that investigated how data collection and evaluation can inform PB implementation. Carolin has emphasized on this blog before the importance of data, research and evaluation in PB. Collecting data from different communities using PB not only gives us a broad view of the short-term outcomes and eventual long-term impacts of PB in the U.S. and Canada. It also helps individual communities understand where and how they ought to modify their implementation to improve their PB process.

As with most democratic processes designed to meet the needs of communities, no two PB processes are carbon copies. While most follow the same four phases (idea collection, budget delegate, voting and implementation), the implementation of PB is constantly morphing to meet the individual needs of the communities it serves. Collecting data on how the PB process was implemented, who participated and what projects were funded can help communities understand and better meet those needs.

Five PB implementers, representing five different cities, participated in the conference panel that Carolin moderated, called “When, how and in what way does PB data matter in practice?” These implementers shared how data collected from their PB process helped inform the implementation of future PB cycles. Specifically, data enabled them to better serve the needs of the community, reach more individuals and engage underrepresented community members.

After collecting anecdotal and survey data from community members during different phases of the PB process, some implementers realized that PB was not reaching as many individuals as they had hoped, either overall or from specific communities. This led implementers to change some of their practices.

For example, one implementer realized that certain community cultural and language needs may not have been met during a past PB process. The data helped to highlight the need for specific language translations and targeted assemblies that would reach specific populations and include their ideas in the process.

As another example, organizers of Boston’s youth PB process wanted to reach more young people. To engage more youth, they designed Votefest (which we were lucky enough to join) as a fun community gathering with multiple activities beyond PB voting alone.

At another PB site, implementers discovered, through data, that many individuals were not able to physically travel to different voting sites around the community, so the PB site now offers online voting to reach more community members.

At a fourth site, data collection led implementers to change their voting methods, placing voting stations in locations that were convenient for the populations they were trying to reach.

The panel also discussed challenges that accompany data collection and analysis. Sometimes the logistics around data collection are tricky. For example, PB voting stations are often placed on busy thoroughfares with a lot of foot traffic, or in places where people gather socially. Voters do not always have the space or time to complete a survey after they vote. Therefore the data collected may not always accurately represent the total voting population.

Moreover, the data collected may not always look ‘good.’ The data may reveal outcomes that PB implementers, funders and policymakers may not want to see. For example, maybe fewer people voted than expected, or maybe PB participants aren’t representative of a community’s demographics. Yet this makes the data all the more important and beneficial. Data collection and analysis are essential if the PB community is to refine its practices and better reach the goals of PB.

This piece was first posted on the Public Agenda blog. Are you involved in PB implementation or evaluation in your community? Help contribute to data collection efforts — download our handy toolkit to get started!
