Many, many years ago, I was a reading teacher in a K-5 building. We revamped our report cards that year to be very standards-focused. Indicators based on the standards were broken out on the report card, and each received its own score, which then made up the overall grade. One of the teachers came to me and asked, "How do you grade fluency?" We talked things through and couldn't come to a solid opinion on the best way. My oral reading fluency rubrics were born out of that question.
What are the components of reading fluency?
There are three well-researched components of reading fluency: accuracy, automaticity, and prosody. Each component encompasses a different aspect of reading.
Accuracy: How accurately the student decodes or recalls words, including self-corrections
Automaticity: How many words the student reads correctly in a set amount of time
Prosody: How does the student sound? Is there phrasing and expression?
Each of these components is a factor in a student's overall reading fluency. Obviously, accuracy and automaticity are the most important from a decoding standpoint. But comprehension can also be directly tied to prosody, as meaning is often carried by phrasing and intonation.
So, if we’re being asked to give a grade for fluency (not saying we should, just that it’s the ask of many teachers), how do we do that?
How do you measure reading fluency?
To my knowledge, there’s never been a way to measure reading fluency that includes all three components of reading fluency, and that was my mission. I found resources that measured each skill in isolation and I researched things to the best of my abilities as a reading interventionist (not a researcher).
One of the first things I found was the NAEP Oral Reading Fluency scale. I was excited to find something that gave tangible characteristics for how fluent readers should sound: their prosody. But, of course, I didn't think that should be the only thing students were rated on.
At the time, all of the K-2 teachers were using DIBELS. However, this was the original DIBELS (before the accuracy rate was included). The teacher didn't want to give students grades based on automaticity alone. She contemplated giving a percentage based on each student's automaticity in relation to the norm. However, as she described, that score in no way reflects students who read beyond the target used to determine intervention. So, basically, if that were used, students would have been considered fluent readers as long as they weren't in need of intervention. And while that's not a crazy thought, it in no way incorporates how students sound, or whether they read really, really quickly and made a ton of mistakes. I didn't have a good answer, so I took to the internet. And found nothing.
I took my knowledge of accuracy rates, the DIBELS automaticity benchmarks, and the prosody scale from NAEP and turned them into a 12-point oral reading fluency rubric. I then emailed the fluency guru, Dr. Tim Rasinski, to see if I could get his input or if he had any recommendations. I fully expected no response. I mean, who am I? And he's an innovator.
Guess what? He responded. Not only did he respond, but he said they were "really, really good". I died. I printed out that email to have forever, and then I died again. All these years later, I have since misplaced that printed email amongst my many moves, and I'm a little bit sad about that!
Fast forward many years and several things have changed. First, DIBELS now gives benchmarks for accuracy rates for each grade level, and those have been adjusted over the years. Automaticity rates have also changed over the years with words read correctly expectations increasing quite a bit from the early years of Oral Reading Fluency (or ORF) assessments.
I've updated the reading fluency rubrics a few times over the years. Most recently, the updates reflect the 2017 Hasbrouck & Tindal Oral Reading Fluency Norms. Several years back, I updated the accuracy scores to reflect DIBELS expectations; those expectations have since changed. Other than a few tweaks in 2nd and 3rd grade, I left the rubrics as-is. DIBELS counts 95% accuracy as proficient, and it should be. But it doesn't differentiate for students working above average. My rubrics include expectations of 96%, 97%, and 98% accuracy for older students.
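To make the rubric's structure concrete, here is a minimal sketch of how a 12-point score could be assembled: each of the three components (accuracy, automaticity, prosody) earns 1-4 points, and the points are summed. The accuracy cutoffs below are illustrative placeholders only, not the actual values from the rubrics.

```python
# Hypothetical sketch of a 12-point oral reading fluency rubric:
# three components, each worth 1-4 points, summed for a total out of 12.
# The accuracy cutoffs are examples, NOT the real rubric's values.
ACCURACY_CUTOFFS = [(98.0, 4), (96.0, 3), (95.0, 2)]  # (min percent, points)

def accuracy_points(accuracy_pct: float) -> int:
    """Map an accuracy percentage onto a 1-4 subscore."""
    for cutoff, points in ACCURACY_CUTOFFS:
        if accuracy_pct >= cutoff:
            return points
    return 1  # below proficient

def rubric_total(accuracy_pct: float, automaticity_pts: int, prosody_pts: int) -> int:
    """Total fluency score out of 12. The automaticity subscore would come
    from grade-level ORF norms, and the prosody subscore from the
    four-level NAEP Oral Reading Fluency scale."""
    return accuracy_points(accuracy_pct) + automaticity_pts + prosody_pts
```

For example, a student reading at 97.2% accuracy with mid-range automaticity (3 points) and top-level prosody (4 points) would total 3 + 3 + 4 = 10 out of 12 under these placeholder cutoffs.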
There are different reading fluency rubrics for the beginning, middle, and end of the year for grades 2-6, which coincides with typical diagnostic windows. Between benchmarks, the rubrics can also be used for progress monitoring or formative assessment by using the rubric from the previous benchmark period.
How do you use the rubrics?
Fluency assessments are typically done with one-minute cold reads, meaning texts the students have not previously read. Many traditional basal curriculums include cold-read texts, or other passages that can easily serve the purpose. A previous basal I had access to provided new texts for the weekly reading assessment; I sometimes used those for the cold read.
To administer, I make sure there is an unmarked copy of the text for students to read from. Then I have one copy of the text per student that I use as my recording form. I tell students the title of the text and ask them to read it aloud for me. I record student miscues and mark the stopping place when the timer is up. If you're concerned that students will be distracted by the timer, you can simply mark their stopping point while they continue reading. Once the time is up, I calculate the number of words read correctly and the accuracy rate (the number of words read correctly divided by the total number of words read). Then I record both on the reading fluency rubric and total up the score.
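The arithmetic in that last step is simple enough to sketch. Assuming self-corrected words count as correct (so only uncorrected miscues count as errors) and a one-minute read (so words correct equals words correct per minute), the function name below is mine, not part of any assessment:

```python
def score_one_minute_read(total_words_read: int, uncorrected_errors: int):
    """Return (words correct per minute, accuracy percentage) for a
    one-minute cold read. Self-corrections are assumed to count as
    correct, so only uncorrected miscues are passed in as errors."""
    words_correct = total_words_read - uncorrected_errors  # WCPM on a 1-minute read
    accuracy_pct = round(words_correct / total_words_read * 100, 1)
    return words_correct, accuracy_pct
```

For example, a student who reads 112 words in the minute with 4 uncorrected miscues scores 108 words correct per minute at 96.4% accuracy.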
With one strand of the Common Core standards tied to fluency (RF.4) at each grade level, it's important to me to have a tool that accurately depicts students' fluency so I can report that information to parents.
Click here to head to TpT to download the oral reading fluency rubrics for use for your students or your school for free.
If you’re looking for reading fluency activities, check out the resources below.