Oral Reading Assessments > Leveling Assessments

Robert Berretta
Published in Breaking Ed
Jun 22, 2015 · 3 min read

In my last post, I argued that the time spent on leveling students — around 36 hours per class — is excessive and unnecessary. But not all reading assessments require such a massive time investment.

Oral reading assessments — probes that measure how many words a student can correctly read on a standardized, grade-level passage — can be had for a meager investment of just 15 minutes per student for the entire year. That’s around 6 total hours for a class of 24 students, and represents a time savings of 30 hours compared to leveling assessments. If you’re an avid shopper like me (Brooks Brothers is having their semi-annual sale right now, btw), that’s an 83% discount. It’s not money in the bank, but it’s certainly more time to spend actually reading with students.
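If you want to check my math, here it is as a quick Python sketch (the 36-hour figure is the per-class estimate from my last post; everything else falls out of the arithmetic):

```python
# Back-of-the-envelope math for the numbers above.
MINUTES_PER_STUDENT = 15
CLASS_SIZE = 24
LEVELING_HOURS = 36  # per-class estimate from my last post

oral_reading_hours = MINUTES_PER_STUDENT * CLASS_SIZE / 60  # 6.0
hours_saved = LEVELING_HOURS - oral_reading_hours           # 30.0
discount = hours_saved / LEVELING_HOURS                     # ~0.83

print(f"{oral_reading_hours:.0f} hours of assessing, "
      f"{hours_saved:.0f} hours saved ({discount:.0%} off)")
```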

Some of you are probably asking, “Who shops at Brooks Brothers?” I’d respond by saying, “I do. Their non-iron shirts, while pricey, are the best-tailored and most durable dress shirts on the market.” Others are probably asking, “Can such a short, narrowly focused assessment yield enough valuable information?” I’d respond by saying, “Yes.”

A student’s oral reading rate is highly indicative of their overall reading ability. One of the most notable pieces on this topic is a 2001 article by Fuchs et al. In their summary, the authors write, “A decade ago, Adams (1990) reminded the field that oral reading fluency is the most salient characteristic of skillful reading. Theoretical perspectives on the development of reading capacity and empirical databases support Adams’ claim. Yet, its use by teachers and researchers appears limited.”

The authors point out correlations between oral reading and comprehension as high as .91. Tim Shanahan, a noted literacy professor and blogger, points out that some estimates suggest oral reading fluency can account for 72% of the variance in comprehension (that’s a correlation of about .85, squared). In other words, nearly three-quarters of the variation in students’ comprehension can be traced back to how well they read aloud.

If that’s the case — and in my 11 years working in literacy education, it is — then why invest so much time in complex, comprehension-focused leveling assessments? A better use of time would be assessing students’ oral reading rate, comparing it to national norms, and flagging which students likely struggle with reading based on that data.
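If your benchmarking tool doesn’t make that comparison for you, it’s simple enough to script. Here’s a minimal sketch in Python; the norms table and the 90% cushion are placeholders I invented for illustration, so swap in the norms your assessment actually publishes:

```python
# A minimal sketch of the norm comparison described above.
# The norms below are illustrative placeholders, NOT published
# figures -- substitute the table from your benchmarking tool.
SPRING_NORMS_WCPM = {3: 107, 4: 123, 5: 139}  # words correct per minute, by grade

def flag_dysfluent(students, grade, cushion=0.9):
    """Flag students reading below ~90% of the grade-level norm."""
    norm = SPRING_NORMS_WCPM[grade]
    return [name for name, wcpm in students.items() if wcpm < norm * cushion]

fourth_graders = {"A.B.": 130, "C.D.": 98, "E.F.": 121}  # name -> wcpm
print(flag_dysfluent(fourth_graders, grade=4))  # -> ['C.D.']
```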

You could stop your assessing there and start your teaching, being mindful of the “dysfluent” students when differentiating your instruction. Or you could invest a few more hours in some data analysis: that oral reading data can be compared with already-existing data, like performance on standardized, multiple-choice assessments, to get an even clearer picture of students’ reading abilities.

In our network, we’ve been shifting to do just that. We use aimsweb to benchmark all of our students, and then we compare that data to results from state tests and Achievement Network assessments. This comparative analysis helps us understand which students struggle with comprehension because they’re dysfluent, and which might struggle for other reasons. We’ve found that oral reading data alone is indeed very predictive of our students’ reading abilities: students who read at or close to the national average for their grade perform well on comprehension assessments. This analysis, much more so than knowing an abstract reading level (a 780, a K, or a 9), has helped us allocate more resources to the remediation of students who struggle with basic oral reading.
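To make that concrete, here’s a hypothetical sketch of the kind of categorization I’m describing, with invented numbers throughout. In practice the fluency norm would come from your benchmarking tool and the cut score from your comprehension assessment:

```python
# Cross each student's fluency benchmark with their comprehension
# result. All thresholds and scores are invented for illustration.
NORM_WCPM = 123   # grade-level oral reading norm (placeholder)
CUT_SCORE = 60    # comprehension passing percentage (placeholder)

students = [
    ("A.B.", 130, 78),  # (name, wcpm, comprehension %)
    ("C.D.",  82, 41),
    ("E.F.", 125, 44),
]

for name, wcpm, comp in students:
    fluent = wcpm >= NORM_WCPM * 0.9
    if comp >= CUT_SCORE:
        bucket = "comprehends fine: on track"
    elif not fluent:
        bucket = "likely dysfluent: remediate oral reading first"
    else:
        bucket = "fluent but not comprehending: dig deeper"
    print(f"{name}: {bucket}")
```

That last bucket, fluent readers who still don’t comprehend, is the one worth a closer look.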

But what about students whose comprehension struggles aren’t explained by their oral reading? Couldn’t a leveling assessment help us understand their specific deficits? Yes, in fact, it could. For that remaining 25% or so of students, a leveling assessment might be worthwhile.

Still, in my experience these assessment systems rely too heavily on assessing a student’s reading strategies (here’s a good read on those), and not enough on more important factors for reading success: vocabulary and world knowledge. Instead of doing more assessing, I’d prescribe a program that focuses heavily on building vocabulary and world knowledge through authentic reading of challenging texts. If that proves unsuccessful after a few months, then a more robust assessment would be warranted.

Even if all the above data analysis and assessment took an additional six hours (it wouldn’t), it would still yield 24 more instructional hours for — you guessed it — reading. That is, after all, the number one way to improve fluency, vocabulary, and world knowledge.
