A first run of the DSI Index

DSI4EU · Feb 27, 2019

Matt Stokes shares key findings from the prototype DSI Index, as well as the new questions it raises and next steps for the months to come. Plus, have you got a good overview of DSI, tech for good and civic tech in your country? Then please fill in our 5-minute survey (link at bottom).

What system factors help DSI to grow and thrive?

That’s the question at the core of our work on the experimental DSI Index, which aims to understand the macro-level conditions that support the creation, growth and sustainability of DSI initiatives, and to assess how cities across Europe perform against those conditions.

In July of last year we published the theoretical framework for the index, made up of 32 factors grouped into seven themes. We decided on these factors following extensive consultation with a range of stakeholders involved in DSI.

We spent the second half of 2018 sourcing, processing and analysing data for the Index from different sources (from Eurostat data to web-scraped data we collected ourselves from social media and other platforms), deciding which cities to include, assessing data quality and standardising indicators.

The indicators used in the DSI Index.
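To give a concrete feel for what standardising and weighting indicators involves, here is a minimal sketch of how a composite index of this kind is typically assembled. Everything in it (the indicator names, the city values, the weights and the choice of min-max normalisation) is an illustrative assumption, not the Index’s actual methodology:

```python
import pandas as pd

# Illustrative only: invented indicator values per city.
# The real DSI Index uses 32 indicators grouped into seven themes.
raw = pd.DataFrame(
    {
        "open_data_portals": [12, 3, 7],
        "civic_tech_meetups": [45, 10, 22],
        "broadband_coverage": [0.97, 0.88, 0.93],
    },
    index=["Amsterdam", "Karlsruhe", "Barcelona"],
)

# Min-max standardisation: rescale each indicator onto a common 0-1 range
# so that indicators measured in different units become comparable.
normalised = (raw - raw.min()) / (raw.max() - raw.min())

# Hypothetical weights (in practice these come from the theoretical framework).
weights = pd.Series(
    {"open_data_portals": 0.40, "civic_tech_meetups": 0.35, "broadband_coverage": 0.25}
)

# A weighted sum gives each city a composite score; sorting gives a ranking.
scores = normalised.mul(weights).sum(axis=1).sort_values(ascending=False)
print(scores)
```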

The prototype index

We’re now pleased to have published a first prototype of the Index and to share our preliminary findings.

We were somewhat surprised by the first run of the data. We had envisaged the results correlating strongly with our knowledge of DSI activity: it follows logically that the more supportive the ecosystem, the more DSI activity there should be.

For some cities, this held true: Amsterdam, London and Berlin, for example, all ranked high in the Index. But overall, we did not see a strong correlation. Some cities we consider leaders in DSI (such as Barcelona, Madrid and Paris) ranked relatively low, while others we had not thought of as hotspots performed well (such as Utrecht and Karlsruhe). Northern European cities also tended to fare much better than Southern European cities, which surprised us.
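To make “no strong correlation” concrete: the simplest check is a rank correlation between each city’s composite index score and a measure of observed DSI activity, such as the number of projects per city in our database. A minimal sketch, with invented figures:

```python
from scipy.stats import spearmanr

# Invented numbers for illustration: composite index score and the count of
# DSI projects per city in an activity database.
index_score = [0.81, 0.74, 0.70, 0.52, 0.49]
project_count = [120, 310, 95, 280, 60]

rho, p_value = spearmanr(index_score, project_count)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
# A rho near +1 would mean supportive ecosystems line up with activity;
# a value near 0 reflects the weak correlation described above.
```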

What’s behind the results?

That’s now another core question we’re addressing, and we’re looking forward to delving into it much more deeply over the coming months. We have a few ideas about why the expected correlation doesn’t appear, although in all likelihood it’s a combination of several of them (and maybe also others we haven’t thought of yet).

Three hypotheses concern methodological and/or quantitative aspects of the Index:

  • Firstly, we have faced challenges in the quality, availability and accuracy of data. We had to remove two indicators completely and have had to use proxies for others, while in some cases we found good data but at a less-than-ideal level of granularity (e.g. country level only) or recency. This is an unavoidable consequence of developing an experimental index for a field which receives limited attention from government and academia, and we believe there is a strong case to be made to policymakers and researchers for better data in these areas.
  • Secondly, while we consulted as wide a range of voices as possible, some decisions in our theoretical framework may not be as accurate or representative as we would like. We are confident in our research methods, but any process of this type entails subjective decision-making, and we’re keen to explore whether the indicators, themes and weightings need to be altered.
  • Thirdly, our understanding of DSI activity around Europe isn’t perfect. We know our database is neither comprehensive nor truly representative: it is largely crowdsourced, and it is biased towards regions where DSI4EU partners have knowledge, networks and linguistic access and to the social areas they work in. It may be that areas we hadn’t thought of as DSI hotspots before actually are hives of activity.

The other possible reasons why our results are surprising relate more to DSI concepts and theories:

  • Maybe it’s simplistic to think that ecosystemic factors and activity should correlate closely. For example, many DSI initiatives emerge as a response to unfavourable social, political and economic contexts, in an attempt to address social issues that have been overlooked by traditional institutions. (See, for example, the large number of digital democracy initiatives that have emerged in Eastern and Southern Europe in response to localised issues such as corruption and lack of government transparency.) Some of the cities ranking high in the Index might simply have less need for DSI, and some of those ranking low might have conditions which lead people to develop DSI against the odds.
  • The ecosystemic factors themselves are influenced and shaped by policy over many years, but our Index doesn’t really take into account how governments can actively support DSI through policy and funding. It’s possible that such initiatives and action plans (whether using the term DSI or not) play a far greater role in supporting DSI initiatives than the wider ecosystemic factors. If this is true, it strengthens the argument even further that governments should have explicitly DSI-focused policies and budgets. We’re cataloguing such policies through our work on the Ideas Bank, and look forward to seeing how they compare with the Index scores.
  • The final hypothesis is that the diversity of DSI as a field means a traditional index will never be sufficient to measure its supportive ecosystemic factors. The breadth of activity encompassed under the term DSI is vast: different actors and stakeholders, different social areas, different motivations, different socio-economic contexts and so on. Most indexes focus on much more clearly defined fields, and on fields with a much larger body of literature to draw upon. It may be that the usual method of creating indexes is not quite up to the job we’re asking of it in this case.

Next steps

Our first task is to do whatever we reasonably can, methodologically, to make the Index as accurate as possible. To that end, we’re revisiting a few of the data sources, refining the theoretical framework and weightings (including potentially rejigging some themes and their component indicators), and carrying out sensitivity analysis (to understand the effect of subjective decisions made in developing the framework) and cluster analysis (to understand how cities share similar successes or challenges).
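As a rough illustration of what the sensitivity analysis involves: one common approach is to jitter the weights many times and measure how far each city’s rank moves. Cities whose ranks swing widely under small perturbations are the ones whose position depends most on subjective weighting choices. The cities, indicator matrix and weights below are invented stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real data: a normalised (cities x indicators) matrix
# and a baseline weight vector from the theoretical framework.
cities = ["Amsterdam", "London", "Berlin", "Utrecht", "Barcelona"]
X = rng.random((5, 3))
base_w = np.array([0.40, 0.35, 0.25])

def rank(weights):
    """Rank cities by weighted composite score (1 = top of the index)."""
    scores = X @ weights
    return scores.argsort()[::-1].argsort() + 1

baseline = rank(base_w)

# Sensitivity analysis: perturb each weight by up to +/-20%, re-normalise,
# re-rank, and accumulate how far each city drifts from its baseline rank.
n_runs = 1000
shifts = np.zeros(len(cities))
for _ in range(n_runs):
    w = base_w * rng.uniform(0.8, 1.2, size=base_w.shape)
    w /= w.sum()
    shifts += np.abs(rank(w) - baseline)

for city, shift in zip(cities, shifts / n_runs):
    print(f"{city}: average rank shift {shift:.2f}")
```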

Once we’ve got there, we’ll be thinking in more depth about the theoretical and conceptual issues to understand how the Index can be of most use to policymakers, practitioners and researchers. We’ll also be creating interactive visualisations of the Index to help people understand the data most relevant to them, and we’ll be launching the Index at our final event in Warsaw in June.

Get involved!

We’d love to hear your thoughts on the prototype index — particularly on any data sources or on the hypotheses we’ve come up with so far. Are there other things we’re missing? Email us on contact@digitalsocial.eu with your feedback.

Finally, we’ve also launched a very quick (5-minute!) survey to help us understand more about seven of the trickiest indicators in the Index. We’d like anyone with a good general understanding of DSI, tech for good and civic tech in any EU country to fill it in. Please also share it with other experts you know. If you know anyone in the following countries, we’d be especially grateful: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Estonia, Finland, Greece, Hungary, Ireland, Latvia, Lithuania, Luxembourg, Malta, Portugal, Slovakia, Slovenia.

Thank you!

Originally published at digitalsocial.eu.
