How to measure the success of civic-tech projects?

Fedja Kulenovic
TransparenCEE network
Dec 22, 2017

People before Technology

Ever since Tim O’Reilly and Dale Dougherty coined the term Web 2.0 to describe “the web as platform”, the world has lived in a heightened mood of optimism, trusting that democratized technology will bring us to a better future. This optimism lasted well after we all analyzed the results of the Arab Spring, attributing its initial success more to technology than to a community of people. Then Evgeny Morozov brought us back from this delusion and, with his critique of cyber-utopianism, reminded us of one very important fact: liars can code too.

What makes a civic-tech project successful, and how do we measure this success? Many have tried to tackle this question: some from a purely technological and planning standpoint, others more from a civic point of view. All of them, however, approached it from a universal angle, avoiding the specifics of the regions where different organizations are attempting to implement civic-tech projects.

This article aims to establish which universal elements work in the CEE/SEE region, and which elements make applying civic technology, and building a community around it, unique in its implementation and difficulties there. I do not expect to reach a final answer that applies to everyone; instead, I hope this article will help you define your success better and perhaps notice some gaps in how we all measure the success of our projects.

As Christopher Whitaker noted in 2015, “civic technology is a new phrase” but the problem is that “nobody has a standard definition” for the term. All definitions of civic technology, however, have one commonality: “all of them offer new tools to inspire people to take action” (Assessing Civic Tech, p. 1). Regardless of definitions, we very often forget the purpose of civic technology. We should, as Laurenellen McCann says, “invest more in the ‘civic’ in civic tech.”

To measure community involvement in civic tech projects, Laurenellen McCann developed the Criteria for People First Civic Tech, which prioritize people and build civic tech projects with them. She presented the criteria in a series of blog posts that she later turned into a book. These criteria are:

  • Start with people
  • Cater to context
  • Respond to need
  • Build for best fit
  • Prove it.

Using these criteria, McCann analyzed civic tech projects and found diversity “in terms of technology developed” but also a “great number of similarities.” She classified these similarities as “The Five Modes of Civic Engagement in Civic Tech.” They are as follows:

  • Utilize Existing Social Infrastructure
  • Utilize Existing Tech Skills and Infrastructure
  • Create Two-Way Educational Environments
  • Lead from Shared Spaces
  • Distribute Power

The main question we should all ask ourselves is: how many communities did we manage to inspire to take action through our project?

The problem, however, is how we measure this. Do we collect quantitative data, or should we go out into the streets and collect more qualitative data?

The Knight Foundation and the Smart Chicago Collaborative tried to answer this question two years ago. Their research resulted in a series of documents and articles meant to help organizations measure the success of civic-tech projects better. Jed Miller, in his post on civichall.org, stated that

“the expectations surrounding civic tech are not aligned with the realities of how it takes hold — or doesn’t. We expect that new tools will transform organizations, but too often projects stall on non-tech challenges, or teams lack the person-power to make a new system thrive.”

Both of these challenges relate to the non-technical side of civic tech, and both are equally hard to resolve and, once resolved, even harder to measure in terms of success. Neither organization tried to address this in its research; instead, both focused on a general methodology that often cannot be applied to small-scale projects implemented by small organizations. They also suggested a number of tools that can make life easier for these organizations, but they did not take into account the administrative burden these organizations carry in producing detailed reports for donors. Organizations are very often understaffed, and it is difficult for them to implement the project, develop measuring instruments, and collect enough data to create detailed reports that reflect success.

Quantitative vs. qualitative data, or: to collect or not to collect

At a time when online corporations collect immense amounts of data to learn about their users, in order to promote and sell their products better, it is becoming increasingly hard for anyone to justify collecting any identifiable data. Most implementers of civic tech projects are aware of this issue and try to gain users’ trust in different ways, while at the same time avoiding large-scale collection of personal data that cannot be justified.

While this is commendable, it makes measuring the offline impact of an online project nearly impossible. The legal frameworks of some EU and neighboring countries are very strong in protecting citizens’ private data; however, they do not prevent ethical data collection.

On the other hand, because many projects work on anti-corruption issues, they are often at odds with governments, which makes storing any user data (identifiable or not) increasingly difficult.

The sampled organizations collect some quantitative data through Google Analytics or other web analytics tools, but at the same time they fail to capture qualitative data from users, which would help them better measure their impact on the ground.
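As a small illustration of keeping even that quantitative collection privacy-conscious: a project using Google Analytics through gtag.js can switch on IP anonymization, a documented setting of the Universal Analytics API of that era. This is only a sketch; the property ID is a placeholder.

```typescript
// Minimal sketch: page-view tracking with IP anonymization enabled.
// 'anonymize_ip' truncates visitor IP addresses before they are processed,
// so the quantitative data stays non-identifiable. 'UA-XXXXXX-Y' is a
// placeholder property ID, not a real one.
declare function gtag(...args: unknown[]): void; // provided by the gtag.js snippet

gtag('config', 'UA-XXXXXX-Y', {
  anonymize_ip: true, // drop the last octet of each visitor IP
});
```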

In comparison to the US, where it is easier to collect this type of data, the main problem preventing better evaluation is “a greater share of society that is disconnected [from the socio-political agenda],” says Sandor Lederer of K-Monitor. He adds that one of the main problems in evaluating projects is measuring offline impact, due to the lack of data or the impossibility “to receive feedback from a wide share of users.”

This, more than anything else, makes it nearly impossible to conduct a successful evaluation of projects in CEE/SEE, compared to US experiences, where connecting users’ online activities to their actions on the ground is somewhat easier because citizens are more willing to share their data with organizations and are somewhat easier to identify. How do we overcome this issue together?

The Knight Foundation suggested several ways data can be collected and users’ online actions connected to their offline actions, but most of these are of little use to a small organization outside the US, because of the different treatment of personal data on one side and limited human resources on the other. Still, some of their suggestions can be used, provided organizations allocate some time and resources to them.

To measure success from an audience-oriented point of view, we first have to think about the different profiles of users we are serving, and we have to track their participation. The Knight Foundation offers a solution through a Forrester Research framework called the social technographics ladder. It classifies people “by their social technology activity,” which allows an organization to identify its most active users and, based on that, estimate how many users are indirectly receiving the organization’s messages and acting upon them. It also helps assess the number of potential users who are simply inactive and do not participate in any activity, making it possible to quantify the total number of users an organization can reach. It is worth noting Krzysztof Madejski’s response that “it would be wise to set some user reach targets related to the audience of popular websites in the country and ultimately placing your target on a scale of 0 to number of Facebook users in the country.”
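To make the idea concrete, here is a minimal sketch of bucketing users into the ladder’s rungs. The rung names follow Forrester’s original ladder; the activity fields and the simple “highest rung wins” rule are illustrative assumptions, not part of the framework itself.

```typescript
// Illustrative sketch: classify each user by the highest rung of the
// social technographics ladder their recorded activity supports.
// The UserActivity shape and the classification rule are assumptions.
interface UserActivity {
  posts: number;       // content the user created
  comments: number;    // critiques or reactions to others' content
  bookmarks: number;   // items saved, tagged, or voted on
  memberships: number; // groups or profiles joined
  visits: number;      // passive reads
}

type Rung = 'creator' | 'critic' | 'collector' | 'joiner' | 'spectator' | 'inactive';

function classify(u: UserActivity): Rung {
  if (u.posts > 0) return 'creator';
  if (u.comments > 0) return 'critic';
  if (u.bookmarks > 0) return 'collector';
  if (u.memberships > 0) return 'joiner';
  if (u.visits > 0) return 'spectator';
  return 'inactive';
}

// Counting users per rung shows how many you reach directly (creators,
// critics) versus how many you could still activate (spectators, inactives).
function ladder(users: UserActivity[]): Record<Rung, number> {
  const counts: Record<Rung, number> = {
    creator: 0, critic: 0, collector: 0, joiner: 0, spectator: 0, inactive: 0,
  };
  for (const u of users) counts[classify(u)] += 1;
  return counts;
}
```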

In some cases, it is hard for projects to evaluate their offline success, because the end goal for most of them is to make some change in society. The fact that, as Madejski says, activists in the US manage to convince decision makers to listen to the people, while in the CEE/SEE region this is often not the case, makes the task much harder. Jonathan Sotsky, in his post, emphasizes this as an important point to consider when implementing a civic-tech project: users “still want to know that their input can make a difference and that local government will consider it. Otherwise, these deliberative democracy-planning tools merely provide ‘more opinions to ignore.’”

One possibility is for civic-tech projects in the CEE/SEE region to implement a non-identifiable evaluation feature with questions such as “Was this information helpful?” or “Did this help you start an offline action?”, and let users grade each from 1 to 5. This alone could give more insight into the usefulness of projects without invading users’ privacy. Naturally, such a system is not without faults and needs to be planned carefully, but it could be a good way to collect some more data.
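A minimal sketch of such a feature follows. The endpoint path and question identifiers are hypothetical; the important property is what is not sent: no user IDs, no cookies, no IP logging, only the question, the grade, and a coarse date.

```typescript
// Illustrative sketch of the non-identifiable evaluation feature described
// above. Only a question id, a 1-5 grade, and the date are transmitted.
interface FeedbackEvent {
  questionId: 'was-this-helpful' | 'started-offline-action'; // hypothetical ids
  grade: 1 | 2 | 3 | 4 | 5;
  date: string; // date only (no time), coarse enough not to identify anyone
}

async function submitFeedback(event: FeedbackEvent): Promise<void> {
  await fetch('/api/feedback', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(event),
  });
}

// Usage: wired to a 1-5 rating widget at the bottom of an article page.
submitFeedback({
  questionId: 'was-this-helpful',
  grade: 4,
  date: new Date().toISOString().slice(0, 10), // e.g. "2017-12-22"
});
```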

None of this means that people do not care, but it is very difficult to convince them to help you by answering survey questions. Madejski pointed out that they collected the most feedback when their site was down. As he said, “then you knew who really cares.” His second point is that the donate button can also show you who really cares, though he adds that this feature “is not really popular in the civic-tech sector.”

Should we focus on collecting more data, or should we try to find another way of measuring success? My answer is that we have to continue collecting data, but do it responsibly and ethically. Users should always be aware that you are collecting data on their behaviour and, more importantly, why you are doing it and how that data will be used and stored.

Organizations in a collaborative environment should also focus on trying, as Madejski says, “to quantify knowledge gain and networking effect.” He is right that networking “builds trust between people” but also that it is hard to measure. With this in mind, these meetings should be used to find ways of getting more qualitative feedback from users, especially on measuring impact, where, as Sandor Lederer says, “quantitative indicators are not relevant.”

How do we continue from here?

Even though the actions suggested in this overview could seem to some like an insurmountable obstacle to moving forward and measuring success better, I believe this region can do better and achieve stronger results in measuring and presenting its worth and success to donors as well as to its target audience. Here are several options you can use in the future when approaching the measurement of your projects’ success. This way you will, as Katarzyna Mikołajczyk pointed out, most certainly achieve more than you expected or planned.

Define what success is for you, the members of your team, and the community you are working with.

Make sure you have properly defined what success means for you at the strategic and operational levels, as well as how you will measure it in the end.

Sandor Lederer of K-Monitor made an excellent point in stating that organizations have diverse activities, and that every organization should therefore have clear answers to the following:

  • What is the goal that should be reached in the project?
  • How much does a project contribute to the overall strategy of your organization?
  • What added value, unforeseen in the original project proposal, emerged as a positive side effect, and how should it be captured and measured?
  • What lessons were learned from the project and how to implement them in future projects?
  • Was the project innovative and aimed at changing the system?

Make sure your team understands what success should look like

If your team does not understand what success should look like, they cannot successfully move towards it. They will simply wander around, unable to present achievements clearly to the target audience, and this in turn will make it difficult for everyone to measure success.

Network and share knowledge and experiences of measuring success in your projects more often.

While many of you are already doing this, there is still ground to cover in making knowledge sharing better. Keep in mind both the small differences between countries and the similarities; this is how you will get your message across effectively.

Establish better communication with your most active users and make them your allies.

Measuring offline impact is a challenge that all of the surveyed organizations emphasized, but they also pointed out that they have good communication with a small but highly active group of users.

All organizations should allocate more human resources to communicating with these users: make them your ambassadors, celebrate them more, and make sure they have a better understanding of what it is you are trying to achieve. They will be the ones explaining to users, online and offline, why they should be more active in giving feedback through the more detailed surveys your organization is preparing.

Collect more stories on how your organization or a particular project has been making change in society.

As Krzysztof Madejski pointed out, activists in the CEE/SEE region are struggling to “convince decision makers to listen to people,” but this does not mean they have been completely unsuccessful. These success stories should be highlighted more effectively, to show users what can be achieved when citizens and activists work together on a common goal.

This article was originally published at http://peoplebeforetech.transparencee.org
