Seven years ago (um, wow!) I was a reading teacher in a K-5 building. We revamped our report cards that year to be very standards-focused. Indicators based on the standards were broken out on the report card, and each received its own score, which then made up the overall grade. One of the teachers came to me and said…
At the time, all of the K-2 teachers were using DIBELS. However, this was the original DIBELS (before the accuracy rate was included). The teacher didn’t want to give students grades based just on accuracy. She contemplated giving a percentage based on students’ automaticity in relation to the norm. However, as she described, that score in no way reflects students who read beyond the target used to determine intervention. So, basically, if that was used, students would have been considered fluent readers as long as they weren’t in need of intervention. And while that’s not a crazy thought, it in no way incorporates how students sound, or whether they read really, really quickly and make a ton of mistakes. I didn’t have a good answer, so I took to the internet. And found nothing.
One of the first things I found was the NAEP Oral Reading Fluency scale. I was excited to find something that gave tangible characteristics for how fluent readers should sound. But, of course, I didn’t think that should be the only thing students were rated on. I took my knowledge of accuracy rates, the DIBELS automaticity benchmarks, and the prosody scale from NAEP and turned them into a 12-point rubric. I then emailed the fluency guru, Dr. Tim Rasinski, to see if I could get his input or if he had any recommendations. I fully expected no response. I mean, who am I? And he’s an innovator.
Guess what? He responded. Not only did he respond but he said that they were “really, really good”. I died. I printed out that email to have forever and then I died.
Fast forward six years, and a few things have changed. First, DIBELS now gives benchmarks for accuracy rates at each grade level. The automaticity scores have also increased as demands have increased. While I do not think DIBELS is the be-all and end-all of reading, I do think that their figures are normed and research-based, so I used them as the basis for reworking my rubrics.
The automaticity rates vary from the DIBELS benchmarks in a couple of ways. First, students need to score ABOVE the benchmark to receive the full points in this area. This is because DIBELS is intended to identify struggling students, not students who are above grade level. Also, level 3 of the automaticity scale incorporates all of the ‘strategic’ range from DIBELS. This is because students can be fluent readers, with a high accuracy rate, and still be a bit slow during an assessment. That doesn’t mean they aren’t fluent. If students are not reading with proper prosody, and are also a bit slow, then their score more accurately reflects that they need some strategic assistance with their fluency.
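The scoring logic above could be sketched roughly as follows. This is only an illustration of the idea, not the actual rubric: the benchmark numbers and the percentage cut points below are hypothetical placeholders, and the real rubric uses the published DIBELS benchmarks and the NAEP prosody scale.

```python
def automaticity_points(wcpm: int, benchmark: int) -> int:
    """Score automaticity on a 1-4 scale against a grade-level benchmark.

    Per the rubric's logic, a student must read ABOVE the benchmark to
    earn the full 4 points, and the 3-point band absorbs the 'strategic'
    range, so a slightly slow but otherwise fluent reader is not flagged
    as non-fluent. The 80%/60% cut points here are assumptions.
    """
    if wcpm > benchmark:            # above benchmark -> full points
        return 4
    if wcpm >= benchmark * 0.8:     # at benchmark down through 'strategic' (assumed cut)
        return 3
    if wcpm >= benchmark * 0.6:     # assumed lower border
        return 2
    return 1


def fluency_score(accuracy_pts: int, wcpm: int, benchmark: int,
                  prosody_pts: int) -> int:
    """Combine three 1-4 subscores (accuracy, automaticity, prosody)
    into a 12-point total, mirroring the rubric's structure."""
    return accuracy_pts + automaticity_points(wcpm, benchmark) + prosody_pts
```

So a reader just under a hypothetical benchmark of 100 WCPM still lands in the 3-point band, and prosody and accuracy, not speed alone, decide whether the total signals a need for support.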
I’ve recreated the rubrics for grades 2-6. There are different rubrics for the beginning, middle, and end of the year. Between those benchmarks, the rubrics can also be used for progress monitoring or formative assessment, using the rubric from the previous benchmark period. They can be used with any grade-level one-minute cold read. My basal includes a set of assessments for cold reads, so I use those along with the rubric to score my students, because the grading piece is still missing from the basal. With one strand of the Common Core standards tied to fluency (RF.4) at each grade level, it’s important to me to have a tool that accurately depicts students’ fluency so that I can report the information accurately to parents.
If you’d like to download the rubrics for use for your students or your school, click the image below.
If you’re looking for tools to help your students become more fluent readers, check out my