Many, many years ago, I was a reading teacher in a K-5 building. We revamped our report cards that year to be very standards focused. Indicators based on the standards were broken out on the report card, and each received its own score, which together made up the overall grade. One of the teachers came to me and said, "How do you grade fluency?" My oral reading fluency rubrics were born out of that question.
At the time, all of the K-2 teachers were using DIBELS. However, this was the original DIBELS (before the accuracy rate was included). The teacher didn't want to grade students based on accuracy alone. She contemplated giving a percentage based on each student's automaticity in relation to the norm. However, as she described, that score in no way distinguishes students who read well beyond the target used to determine intervention. So, basically, if that were used, students would have been considered fluent readers as long as they weren't in need of intervention. And while that's not a crazy thought, it in no way incorporates how students sound, or whether they read really, really quickly and make a ton of mistakes. I didn't have a good answer, so I took to the internet. And found nothing.
One of the first things I found was the NAEP Oral Reading Fluency scale. I was excited to find something that gave tangible characteristics for how fluent readers should sound. But, of course, I didn't think that should be the only thing students were rated on. I took my knowledge of accuracy rates, the DIBELS automaticity benchmarks, and the prosody scale from NAEP and turned them into a 12-point oral reading fluency rubric. I then emailed the fluency guru, Dr. Tim Rasinski, to see if I could get his input or if he had any recommendations. I fully expected no response. I mean, who am I? And he's an innovator.
Guess what? He responded. Not only did he respond but he said that they were “really, really good”. I died. I printed out that email to have forever and then I died. All these years later, I have since misplaced that printed email amongst my many moves, and I’m a little bit sad about that!
Fast forward many years, and several things have changed. First, DIBELS now gives benchmarks for accuracy rates at each grade level, and those have been adjusted over the years. Automaticity rates have also changed, with words-read-correctly expectations increasing quite a bit from the early years of Oral Reading Fluency (or ORF) assessments.
I've updated the rubrics a few times over the years. Most recently, the updates reflect the 2017 Hasbrouck & Tindal Oral Reading Fluency Norms. Several years back I updated the accuracy scores to reflect DIBELS expectations, and those expectations have changed again since then. Other than a few tweaks in 2nd and 3rd grades, I left the rubrics as-is. DIBELS counts 95% accuracy as proficient, and rightly so. But it doesn't offer variation for students working above average. My rubrics include expectations of 96%, 97%, and 98% accuracy with older students.
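As a purely illustrative sketch (the point values, and the idea of awarding one point per threshold met, are my assumptions for demonstration, not the rubrics' actual scoring), tiered accuracy expectations like 95%, 96%, 97%, and 98% could be expressed like this:

```python
def accuracy_points(accuracy_pct, cutoffs=(95, 96, 97, 98)):
    """Hypothetical tiered scoring: one point for each accuracy
    threshold the student meets or exceeds (0-4 points)."""
    return sum(accuracy_pct >= c for c in cutoffs)

# A reader at 95% accuracy meets only the first cutoff;
# a reader at 98% meets all four.
print(accuracy_points(95))  # 1
print(accuracy_points(98))  # 4
```

The idea is simply that a tiered rubric rewards accuracy above the proficiency line, rather than treating everything at or above 95% the same.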
There are different fluency rubrics for the beginning, middle, and end of the year for grades 2-6. This coincides with typical diagnostic windows. Between those benchmarks, the rubrics can also be used for progress monitoring or formative assessment purposes using the rubric from the previous benchmark period. They can be used with any one-minute cold read of a grade-level text.
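For anyone new to one-minute cold reads, here is a small sketch of the standard ORF arithmetic that feeds a rubric like this: accuracy is words read correctly divided by words attempted, and because the read lasts one minute, the correct-word count is also the words-correct-per-minute (WCPM) rate. The function name and example numbers are mine, for illustration only.

```python
def orf_scores(words_attempted, errors):
    """Standard one-minute oral reading fluency calculations.

    words_attempted: total words the student read in one minute
    errors: words read incorrectly
    Returns (wcpm, accuracy_pct).
    """
    words_correct = words_attempted - errors
    wcpm = words_correct  # one-minute read, so correct words = WCPM
    accuracy = words_correct / words_attempted * 100
    return wcpm, round(accuracy, 1)

# Example: a student reads 104 words in one minute with 4 errors.
wcpm, accuracy = orf_scores(104, 4)
print(wcpm, accuracy)  # 100 96.2
```

Those two numbers, together with a prosody rating, are what a teacher would then match against the rubric's descriptors for the current benchmark period.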
My basal includes a set of assessments for cold reads, so I use those along with the rubric to score my students, because the grading piece is still missing from the basal. With one strand of the Common Core standards tied to fluency (RF.4) at each grade level, it's important to me to have a tool that accurately depicts students' fluency so that I can report that information to parents.
Click here to head to TpT to download the oral reading fluency rubrics for use with your students or your school for free.
If you’re looking for tools to help your students become more fluent readers, check out my