Making data work… within a network of schools
The DfE recently published an important new report — “Making data work” — by the Teacher Workload Advisory Group, chaired by Professor Becky Allen. At Ark, we believe it’s possible to use data more effectively while also reducing workload. That’s why we were pleased to participate in the group and why we support the report.
The first principle listed in the report is the most important — that the purpose and use of all data must be clear. At Ark, we believe that improved outcomes are made possible by informed action; that informed action is made possible by insightful analysis; and that insightful analysis is made possible by accurate data. The fundamental purpose of our assessment data is therefore to inform the teaching and leadership actions that will improve student outcomes.
But our assessment data has not always fulfilled this purpose. A few years ago, Ark was at risk of being one of the multi-academy trusts (MATs) implicitly criticised by the report. We centrally collected teacher-assessed sub-levels six times a year and many of our schools locally tracked additional lesson grades and/or granular checklists. Our analysis tools did not give teachers all of the answers they needed, so they created their own spreadsheets. The result was a lot of work for (at best) limited value.
Around this time, Ark’s then Head of Assessment, Daisy Christodoulou, was researching how to increase the quality of our assessments. Her book, “Making Good Progress?”, describes her conclusions in detail, but the main principles we derived from this research were:
1. Different assessments for different purposes
Summative and formative assessments serve different needs and should be clearly delineated
2. Common summative assessments
The most accurate comparison is to ask students the exact same questions
3. Cumulative tests for summative assessments
Testing only what has recently been taught works for formative but not for summative
4. Age-related grading bands for summative assessments
Comparing to the national peer-group negates ‘need’ for fictional flightpaths
5. Frequent, specific, non-graded formative assessments
Ongoing checks for understanding are vital, but grading them is meaningless
When turning these into practice, we had to consider the third and fourth principles listed in “Making data work” — ensuring that the volume and frequency of data collection were proportionate and that the collection and analysis processes were as efficient as possible. We needed to minimise the time teachers spent on collection and analysis, freeing up their valuable time for informed action. To achieve this, we chose to leverage:
Consistency: Common definitions, Common assessments, Common calendar, Common measures, Common dashboards/tools
Scale: Large sample size (~2,000 students per year group), Central administration, Targeted network resource, Network collaboration
Technology: “Enter once, use many times”, Single data warehouse — integrating multiple sources, Automated calculations/logic, Interactive analysis tools
In practice, this led to Ark:
- Reducing the number of summative assessments from 6 to 3 (and now in many cases 2) per year
- Introducing nationally standardised tests for all KS1&2 Reading and Maths assessments
- Developing curriculum-aligned network tests for core subjects at KS3 (annually sense-checked against a sample of nationally standardised English and Maths test results)
- Using common exam board materials for curriculum-aligned network tests at KS4&5
- Bringing together all network-wide subject teachers once per assessment window to align on assessment marking/moderation and post-assessment action planning
- Building network-wide systems that automatically calculate all raw marks and age-related grading bands (post-hoc), as well as performance vs. baselines and (teacher-facing) targets
- Creating visually consistent dashboards and analysis tools, tailored for different audiences (e.g. interactive teacher tools that drill down to individual questions/students vs. more high level one-pagers for management and governors)
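To make the post-hoc banding step concrete, the sketch below shows one way such a calculation could work: band thresholds are derived from the cohort's own raw-mark distribution after the assessment, then each student's mark is mapped to a band. The percentile cut-offs, band labels, and function names are illustrative assumptions for this sketch, not Ark's actual definitions.

```python
# Hypothetical sketch of post-hoc age-related banding: thresholds are
# computed from the cohort distribution AFTER marking, so no flightpath
# or predicted grade is needed in advance.
from statistics import quantiles

def band_boundaries(raw_marks, cuts=(0.25, 0.50, 0.75)):
    """Return raw-mark thresholds at the given cohort percentiles."""
    # quantiles(n=100) yields the 1st..99th percentile cut points
    qs = quantiles(sorted(raw_marks), n=100, method="inclusive")
    return [qs[int(c * 100) - 1] for c in cuts]

def assign_band(mark, boundaries,
                labels=("Below", "Working towards", "Expected", "Above")):
    """Map one raw mark to a band using the post-hoc boundaries."""
    for boundary, label in zip(boundaries, labels):
        if mark < boundary:
            return label
    return labels[-1]

# Tiny illustrative cohort; in practice the sample would be ~2,000 students
cohort = [12, 15, 18, 22, 25, 27, 30, 33, 35, 38, 40, 44]
bounds = band_boundaries(cohort)
print([assign_band(m, bounds) for m in (14, 26, 41)])
# → ['Below', 'Working towards', 'Above']
```

Because the boundaries come from the actual peer-group distribution, this kind of calculation can only be run centrally once all marks are in — which is one reason a single data warehouse and common assessment calendar matter.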
This new approach to assessment is now in its third year at primary level and its second year at secondary level. The impact at primary level has been most notable, with the vast majority of school leaders feeling that the increase in data quality has contributed to improved student outcomes. Most also feel that it has enabled them to discontinue other time-consuming assessment activities.
The approach is less mature at secondary level and is subject to additional challenges, including curriculum alignment, marking consistency and the complexity of entry patterns and tiering. However, we believe that these challenges can be addressed through further collaboration within and beyond the network.
In the meantime, we must heed the second principle in “Making data work” — that the precision and limitations of our data are well understood. We believe that the model we have developed provides improved trade-offs between accuracy and efficiency, but we don’t pretend that it provides a perfect measurement of student learning, nor does it completely eliminate the workload associated with assessment data. This model is a work in progress and we will continue to listen to our teachers and school leaders as we develop it further, as well as drawing upon research and evidence from elsewhere — including this week’s DfE report.
To reiterate, our assessment data’s main purpose is to inform teaching and leadership actions — i.e. which students need what teacher support, which teachers need what leadership support and which leaders need what network support. As long as it continues to serve this purpose, we will keep doing everything we can as a network towards making data work for our schools, our teachers and, most importantly, our students.