Building Data Attitude & Data Capacity

A Gardner Perspective by Hadar Baharav



Today, more than ever before, funders and stakeholders expect community-based organizations to collect and analyze data to report on their outcomes (Carman, 2007). The “outcome investing approach” and “outcomes-based approach” set forth by the Bill & Melinda Gates Foundation and the Lumina Foundation, respectively, are just two expressions of this expectation. At the Gardner Center, we believe that meaningful data use is vital if community initiatives are to achieve their desired outcomes and realize their purpose.

But research has found substantial room for improvement in how community-based organizations perceive data and evaluation and in the ways they collect and use information. The reality is that many community-based organizations collect a fair amount of data but fail to approach it strategically and, as a result, become producers and consumers of detached data and reports of little use (Carman, 2007).

A project I am currently working on has given my colleagues and me the opportunity to observe and think deeply about how community organizations use data. The College Futures Foundation, a private foundation committed to increasing the number of low-income students across the state who attend and complete college, recognized that data can serve as a powerful leadership tool when used strategically for learning, improvement, and strategy execution and refinement. The College Futures Foundation has invited the Gardner Center to work alongside its community initiatives to build their capacity to use data intentionally and systematically, so that they can better define, measure, and make progress toward their goals and increase stakeholders’ buy-in and commitment.

Our initial work with community initiatives supported by the College Futures Foundation revealed variation across initiatives in their data use. Similarly, Carman (2007) and Carman and Fredericks (2008) found that organizations varied in what their collected data were used for and how, and in their attitudes toward program evaluation. These attitudes ranged from viewing evaluation as a resource drain and distraction, to viewing evaluation as a promotional tool, to viewing evaluation as a strategic management tool (Carman & Fredericks, 2008). It makes sense that perceptions and practices are connected. But what is the nature of the relationship? And what about capacity?

We traced the source of these variations in data use across the community initiatives to differences in data attitude and data capacity. We define data attitude as the centrality of data in the organization and the purposes data serve (e.g., accountability or decision making). Data capacity refers to data availability, ownership, and linking across partners; data management systems; human capital for data management and analysis; and the connections between data and the organization’s stated goals. The relationship between data attitude and data capacity shapes an organization’s meaningful data use. More specifically, data capacity and data attitude reinforce each other, and high capacity is not sustainable in the absence of a strong positive attitude.

The figure to the left illustrates how I think about these relationships. At the bottom left of the figure (Cell C) are organizations characterized by low data capacity and an unfavorable attitude toward data. Unless an attitude change occurs, such organizations will not prioritize building their data capacity and are unlikely to incorporate data into their operations and strategy. Once an organization’s attitude improves (Cell D), however, it will likely invest in capacity building and ultimately achieve meaningful data use. Organizations characterized by an unfavorable attitude toward data alongside strong data capacity (Cell B) do not stay in this position for long: either data are utilized and serve to positively transform attitudes, or the organization’s capacity deteriorates from lack of appreciation and little investment in maintaining it.

Meaningful data use is vital if community initiatives are to advance their work. It increases initiative-wide awareness of and commitment to the partnership’s goals; facilitates cross-system collaboration and the development and adjustment of programs and policies; informs strategic planning, implementation, and decision-making; and advances partners’ accountability for outcomes. In its fullest form, data are linked across agencies within a network to create knowledge and gain insight that extend well beyond the sum of the standalone agencies’ parts. It is through the strategic use of data that community initiatives will lead to impact.

Clearly, this is no easy assignment given the complex realities in which community initiatives operate, including, but not limited to, fragmented funding (Lenczner & Phillips, 2012) and the competing accountability demands of multiple actors (Ebrahim, 2005), which shape the initiatives’ focus and strategies. The first step might be aligning an initiative’s strategies and activities with desired outcomes and identifying indicators to guide the way and to measure progress and success. At the Gardner Center, we call this system strategy mapping, and we use our tri-level framework to identify strategies and indicators at the individual, setting, and system levels (Dukakis et al., 2009). Others may simply call such an outline a logic model. It is unfortunate that only a fraction of community-based organizations use logic models (as few as 17%, according to one study; Carman, 2007), but this finding also suggests that creating one would be a good starting point for organizations seeking meaningful data use.

Hadar Baharav is a Research Associate at the John W. Gardner Center for Youth and Their Communities.

REFERENCES

Carman, J. G. (2007). Evaluation practice among community-based organizations: Research into the reality. American Journal of Evaluation, 28(1), 60–75.

Carman, J. G., & Fredericks, K. A. (2008). Nonprofits and evaluation: Empirical evidence from the field. New Directions for Evaluation, 119, 51–71.

Dukakis, K., London, R. A., McLaughlin, M., & Williamson, D. (2009, October). Positive youth development: Individual, setting and system level indicators. Issue Brief. Stanford, CA: John W. Gardner Center for Youth and Their Communities.

Ebrahim, A. (2005). Accountability myopia: Losing sight of organizational learning. Nonprofit and Voluntary Sector Quarterly, 34(1), 56–87.

Lenczner, M., & Phillips, S. (2012). From stories to evidence: How mining data can promote innovation in the nonprofit sector. Technology Innovation Management Review, 2(7), 10–15.
