Weeknotes: 18/2/19

Lauren Quinn
Designing Good Things
Mar 18, 2019

Backdating old projects

Welcome to the second instalment of our weeknotes. This week we’re backdating the count of some of our older projects (as well as the weeknotes). We are specifically focusing on projects where data has been collected outside our standard management information systems between 2015 and now.

The background

Our strategy aims to support 3 million socially excluded people by 2020. In 2015 our digital team counted how many people we'd supported up to that point. Sadly, this area has been somewhat neglected ever since, owing to the broad range (and reach) of our work. Despite collecting beneficiary data for all of our projects, the main figure we tend to refer to only covers those we have supported through our flagship project, Future Digital Inclusion.

This is tricky because we've run lots of projects other than Future Digital Inclusion since 2015. Because we haven't always been consistent in how we've measured these projects, we have been consistently underestimating our reach.

Many of our projects are delivered in partnership with other organisations, which means that data is also collected on systems outside of our own. So, without searching through our systems and collating all this data, we have been unable to count these people accurately. Until now, that is.

So what did we learn from the process this week…

The blessing of a well organised Google Drive

Thankfully, we have good records of all our archived projects. What seems like simple, common practice made our data lives so much easier. Without context, though, all of these project documents still wouldn't have got us to a number. This brings me to my next point…

You can’t rely on numbers alone

After looking through our project files and multiple spreadsheets, we quickly realised it was difficult to establish final figures for each project from the data alone, as many projects had multiple KPIs. We spoke with project managers to understand how beneficiaries were supported on each project and to gain more context: what was the time frame for support, what did support look like, and can we be sure the project did not make use of our current learning platforms? For this task, we sought wisdom from long-standing colleagues across the organisation, from project officers to senior management. Discussions with around 8 experienced colleagues saved hours of scrolling through folders, highlighting that good communication between teams is essential.

Count EVERYTHING!

It sounds self-explanatory, but ensuring you've included every beneficiary in a count is so important. We found that some of the projects we backdated affected our count significantly (by at least 10,000 for one project!). Not all projects were like this: the smallest project we accounted for had 20 beneficiaries, but even counting these smaller projects had an impact on our total. Once aggregated, we captured an additional 26,404 beneficiaries since 2015, which highlights the importance of counting everything, even if it initially seems small or insignificant.
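To give a feel for the aggregation step, here's a minimal sketch in Python. The project names and most figures are invented; only the 10,000+ and 20-beneficiary examples come from our actual count.

```python
# Minimal sketch of aggregating backdated per-project counts.
# Project names and most figures are hypothetical illustrations.
backdated_projects = {
    "Partnership Project A": 10000,  # the project that moved our count most
    "Community Pilot B": 1200,       # invented mid-sized project
    "Local Scheme C": 20,            # our smallest backdated project
}

additional_beneficiaries = sum(backdated_projects.values())
print(f"Additional beneficiaries since 2015: {additional_beneficiaries:,}")

# Even the 20-beneficiary project contributes: dropping it would
# understate our reach, which is exactly what we want to avoid.
```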

Consistency is key

Consistently using your own system understandably makes data analysis much easier. Unfortunately, this isn't always as simple to implement as it seems. Funder requirements can mean beneficiaries are recorded in a way that fits the funder's needs but often not our own. As offering the best support to our beneficiaries is our main priority, we've sometimes sacrificed consistency in how this data has been collected. For example, on one of our projects the data was aggregated and delivered as an overall figure each month, without a breakdown of the beneficiaries (no UID or name). As a result, overlap between projects could not be completely ruled out.
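To make the limitation concrete, here's a hedged sketch (all data invented): with per-beneficiary records you can count unique people, but with monthly aggregate figures all you can do is sum, so any repeat beneficiaries go undetected.

```python
# Sketch of why aggregate-only reporting blocks deduplication.
# All values below are invented for illustration.

# With per-beneficiary records (a UID per row), duplicates collapse:
records = [
    {"uid": "B001", "month": "Jan"},
    {"uid": "B002", "month": "Jan"},
    {"uid": "B001", "month": "Feb"},  # same person, second month
]
unique_count = len({r["uid"] for r in records})  # -> 2

# With only the funder's monthly totals, summing is the sole option,
# and the repeat beneficiary is counted twice:
monthly_totals = {"Jan": 2, "Feb": 1}
naive_count = sum(monthly_totals.values())  # -> 3, overstates by 1
```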

We also found this problem when reviewing the data we had been collecting ourselves. As previously mentioned, we needed to count beneficiaries for projects where support did not make use of our learning platforms. However, as these projects collected beneficiary data through surveys or reports without a UID, we were unable to cross-reference these people against our learning platform to ensure they were not already included in last week's count. More problematically, some learners may have benefitted from more than one of the projects that didn't use our learning platforms. This means we cannot rule out the possibility of overlap, and we are unable to estimate the proportion of double counting.
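If the survey responses had carried the same UIDs as our learning platform, the cross-reference would be a simple set difference. This sketch (with invented UIDs) shows the check we couldn't perform:

```python
# Sketch of the cross-reference we couldn't do. UIDs are invented.
platform_uids = {"B001", "B002", "B003"}  # already in last week's count
survey_uids = {"B002", "B104", "B105"}    # from a survey-based project

# Only people not already on the platform should be added to the count:
new_uids = survey_uids - platform_uids    # -> {"B104", "B105"}
print(f"Genuinely new beneficiaries: {len(new_uids)}")

# Without UIDs on the survey data this set difference is impossible,
# so double counting between projects can't be ruled out.
```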

This also raises the problem that we're unable to automate the count for these projects, as they sit outside our systems and some are still ongoing. So despite having an automated process for the projects that use our learner management systems, we'll still need to update the other numbers manually each month.
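In practice, then, the monthly figure will be a hybrid: an automated count from the learner management systems plus manually entered totals for the out-of-system projects. A rough sketch of that shape, with the automated query stubbed out (names and numbers are placeholders, not our real systems):

```python
# Hybrid monthly count: automated for in-system projects, manual for
# the rest. count_platform_learners() stands in for a real query
# against our learner management systems.
def count_platform_learners() -> int:
    return 250000  # placeholder for the automated query result

# Manually maintained figures for projects outside our systems,
# updated each month (all numbers invented):
manual_project_counts = {
    "Partnership Project A": 10350,
    "Survey-based Project B": 480,
}

monthly_total = count_platform_learners() + sum(manual_project_counts.values())
print(f"Running total this month: {monthly_total:,}")
```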

Our plans for next week

Clearly, we have a lot to consider when counting future projects, so we will need wider discussions with the rest of Good Things Foundation to establish how we can improve this method. In the meantime, we will continue to work on our count by attempting to break it down geographically.
