Place-Based Indicators For Building Evidence

Joseph Scariano
Georgetown Massive Data Institute
Aug 1, 2022

Federal officials who evaluate the impacts of policies and programs on communities need reliable data to understand outcomes. These evaluators sometimes struggle to access restricted microdata, or they rely on open data that may lack the content or geography they need. Despite open data requirements in the Evidence Act, open data policies across federal agencies, and further open data aspirations outlined in the Federal Data Strategy, many agencies produce a limited set of statistics, often with a lag and with little or no disaggregation for understanding equity. Similarly, while the Evidence Act strives to improve access to restricted government data through the Standard Application Process, access to administrative data can be a legal and logistical challenge.

The Place-Based Indicators Project promotes regular access to aggregated statistics about our communities, seeking more detail than many current open data catalogs contain without requiring access to restricted data files. For example, the Department of Health and Human Services (HHS) houses the Administration for Children and Families (ACF). ACF maintains administrative records for its services which, combined with sufficient privacy protections, could be transformed into consistently produced indicators of the social well-being of children, families, and communities. We are exploring how to liberate administrative data from departments across the federal government, seeking consistent indicators from federal datasets that can help monitor progress and changes in our communities.

Agencies, non-profits, and academics already prepare and release many indicators and indices. We are expanding on those indicators, not replacing them. As we point people toward existing high-quality indicators and outline processes for developing new ones, we have initiated conversations with data users and data providers to understand best practices as well as content and methodological gaps to address.

To assess how our project’s goals align with the research and data strategies being employed at various agencies, we met with federal evaluation officers and chief data officers from agencies including the Census Bureau, the Social Security Administration, and the Department of Health and Human Services during a small group convening organized by the Massive Data Institute. Conversations with federal evidence builders revealed that support exists across agencies for creating privacy-preserving, reliable, and consistent indicators — when resources permit. We acknowledge that a new data series is costly to develop, test, and maintain.

Evaluators recognize place-based indicators as important tools for evaluating the distribution of resources and for assessing program outcomes and impacts. During the small group convening, they shared examples of indicators in production or development in their own departments and agencies. For example, the Census Bureau releases its Community Resilience Estimates at the tract level, sharing estimates of the risks neighborhoods face from natural disasters. The National Telecommunications and Information Administration updates its broadband access map every six months, providing an understanding of broadband availability at the census tract level. Other evaluators noted that they publish statistics only at the county or state level because of re-identification risks when releasing data at the tract level or below.
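To make the re-identification concern concrete, a common disclosure-avoidance practice is small-cell suppression: counts below a minimum threshold are withheld (or rolled up to a larger geography) before publication. The sketch below is a minimal, hypothetical illustration of that idea; the tract identifiers, counts, and threshold are invented for the example and do not reflect any agency's actual rules.

```python
# Minimal sketch of small-cell suppression, a common disclosure-avoidance
# practice behind the re-identification concern described above.
# All data and the threshold are hypothetical illustrations.

SUPPRESSION_THRESHOLD = 10  # hypothetical minimum publishable cell count


def suppress_small_cells(counts_by_tract, threshold=SUPPRESSION_THRESHOLD):
    """Replace counts below the threshold with None so they are not published."""
    return {
        tract: (count if count >= threshold else None)
        for tract, count in counts_by_tract.items()
    }


# Hypothetical tract-level counts of program participants
counts = {"48201311500": 142, "48201311600": 7, "48201311700": 23}
published = suppress_small_cells(counts)
# The 7-person tract is suppressed; an agency might instead report it
# only as part of a county-level total.
```

In practice, agencies layer further protections (complementary suppression, rounding, or formal privacy methods) on top of a simple threshold like this one.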

We intend to learn more about evaluators’ data needs and the barriers to publishing more statistics from federal administrative records, including:

  • What geographies are appropriate for understanding place-based policies? Are ZIP Code or tract level statistics sufficient? Are county level statistics useful?
  • Are current agency staff who produce open data able to produce new measures or different geographies for existing statistics? What tools or practices could reduce the burden of producing more statistics?
  • Could agencies work with partners? Which research programs and hiring authorities could support the data cleaning, documentation, and testing of new statistical series? The Harvard-Brown Opportunity Insights project was discussed; is it a possible model to follow?
  • Are agencies aware of indices that academics develop using their open data? Do they seek feedback on ways to improve the utility of their data?
  • Which agencies or programs have produced statistics joining data across departments? Can those lessons inform future cross-department partnerships?

We look forward to more conversations with federal, state, and local officials, and with experts in non-profits and industry, to understand the opportunities and obstacles to producing more granular and more frequent data.
