Need vs. Speed: how to get great user feedback and still deliver on time
This blog post is the second in a series that talks about how we developed the COVID-19 Demand Modelling tool with funding from MHCLG, and in collaboration with GMCA, Data to Insight, and a number of other local authorities. The model is now available for download at the ChAT group library.
This is a great example of how collaboration can rapidly develop and scale a widely usable tool in response to an urgent need. We hope sharing our learnings might be useful for others in future work.
Last week we published a blog explaining the context behind a project we undertook to address one of the many challenges faced by local authorities as a result of the COVID-19 pandemic. With students about to return to the classroom for the first time in six months, what would happen to the volume of incoming children’s services referrals, and would authorities be able to handle it?
With rapid funding from MHCLG, an engaged user base among the 151 local authorities, and dedicated central development resource from Social Finance and Data to Insight, we had all the ingredients in place to deliver a valuable tool.
To get there, however, posed a different set of challenges. We needed to:
- Understand a diverse range of user needs and prioritise accordingly
- Build a tool that was intuitive to use for 151 local authorities but that could be easily adapted to local circumstances
- Iterate rapidly while maintaining quality
To address these challenges, we piloted a new approach that would enable us to engage with a community of developers, stakeholders and end users, each of whom had different levels of knowledge of the tool and investment in its development.
- At the centre was a core development team of analysts from Social Finance and Data to Insight, responsible for building the tool and setting priorities on a weekly basis. This stable, focused group always had feature design and implementation in their sights. They were also responsible for synthesising the feedback heard from the other layers, allowing for rapid turnaround in quick sprints.
- In the next layer were two local authorities who agreed to be regular testers and whom we contacted multiple times during development. Their familiarity with the tool became an asset and allowed swift testing of new features.
- Outside this layer was a group of approximately 10 local authorities who each provided user experience feedback once during the process. Their ‘fresh eyes’ perspective gave the core team regular insight into how the tool would land, upon release, with its intended audience: local authorities seeing it for the first time.
- In the final layer were the 151 local authorities connected to Data to Insight’s distribution network. By embedding development within this network, we aimed to achieve rapid take-up of the tool upon launch.
By facilitating regular interactions between these layers we cultivated a sense of shared ownership of the project between developers and users. This was particularly useful given the time-sensitive nature of the project, as it meant we could reach out to a wide range of stakeholders at any one time. This resulted in feedback iterations that took days, rather than weeks, to complete.
Engaging with so many stakeholders, however, meant a constant flow of feedback towards the centre. This increased the risk that development would be pulled in multiple directions at once, or that individual voices would be drowned out. To avoid this, the development team took a day out of development at key junctures to step back and aggregate the different views. Using a virtual whiteboard, the team carefully sifted and grouped the views into common themes, finally distilling half a dozen user experiences into two or three key stories to drive the next round of prioritisation.
By engaging a wider community of users in development, we had an audience that was invested in the product before release. The prioritisation process also made us sensitive to changes in underlying user needs.
The impact of children returning to schools on school referral volumes may have been the key driver for the tool’s creation, but a month into development, local authorities were increasingly concerned with the impact of lockdown on social worker caseloads in the medium to long-term. This resulted in a pivot in the overall focus of the tool, from predicting CS referral volumes to modelling the impacts of lockdown on the numbers of open Child Protection Plans and children in care.
Additionally, as time went by, prospects of a second lockdown loomed ever closer. Users made it clear that the ability to forecast the impacts of future lockdowns would be key to ensuring the tool’s longevity.
It was only due to close engagement with a broad feedback community that the core development team could feel confident in making such significant changes in feature prioritisation. It has (hopefully!) resulted in a tool with greater long-term use potential than was laid out in the project brief.
The next challenge
With the tool released, our next challenge was to enable as many local authorities as possible to make the best use of it. Look out for a final post about how we have promoted and facilitated ongoing community development post release.