Assuming you’re asking how we calculate the grade level of our reading passages as we develop the measures, here is what we do.
The process is described in the technical reports published on our website (brtprojects.org), including:
Alonzo, J., & Tindal, G. (2008). The development of fifth-grade passage reading fluency measures in a progress monitoring assessment system (Technical Report No. 43). Eugene, OR: Behavioral Research and Teaching, University of Oregon.
The following excerpt from that technical report gives a basic description of the process we use:
“The passages used in the Passage Reading Fluency measures were all written specifically for use in this progress monitoring assessment system. All 20 passages were written by graduate students enrolled in College of Education courses in the spring and summer of 2007. Passage writers followed written test specifications (see Appendix A). All passages underwent a four-stage review process. First, the lead author, who holds a Bachelor’s of Arts degree in English and is a National Board for Professional Teaching Standards certified English teacher, reviewed each passage. She edited the passages for grammatical correctness and grade-level appropriateness. Then, two graduate students edited for formatting consistency. They divided each passage into three paragraphs of approximately even length and checked the readability of each paragraph using the Flesch-Kinkaid readability index feature available on Microsoft Word. Each fifth-grade paragraph was adjusted as needed to create three paragraphs with a readability level between 5.4 and 5.6. Third, each passage was reviewed by a teacher with a minimum of three years’ teaching experience at that particular grade level to ensure the topics, wording, and style were appropriate for the target grade levels. Finally, passages were sent back to the lead author for a final review Fluency to ensure that they still met test specifications. Once the review process was complete, the passages were printed on 8 ½ by 11 inch paper for use during the pilot testing process.” (p. 6-7).
We used the same basic process to calibrate (bring “on grade level”) the passages used as the basis for the MCRC and CCSS Reading Comprehension measures. The CCSS Reading measures, designed to be highly sensitive to low-performing readers’ growth in literal comprehension, were revised to be accessible to students at the start of the year/grade level. The MCRC measures, in contrast, are designed with an emphasis on being reliable assessments for universal screening, including helping to identify students for talented and gifted programs. As a result, they are written to conform to the fifth through eighth month of a given grade level, in terms of their Flesch-Kincaid readability index.
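For readers curious how the Flesch-Kincaid grade level is actually computed, the published formula is 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. The sketch below is not our tooling (we used the feature built into Microsoft Word); it is a minimal illustration of the formula, using a rough vowel-group heuristic for syllable counting, so its scores will only approximate Word’s:

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count runs of vowels, with a silent-e adjustment."""
    word = word.lower()
    vowel_groups = re.findall(r"[aeiouy]+", word)
    count = len(vowel_groups)
    # Treat a trailing 'e' as silent when the word has another syllable.
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Because the index depends only on sentence length and syllables per word, a paragraph can be nudged toward a target band (e.g., 5.4 to 5.6) by splitting or combining sentences and swapping words for shorter or longer synonyms, which is essentially what the calibration step described above involves.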
When discussing the “readability” and “grade level” of the measures, it’s also important to consider how performance is interpreted. Our National Norms were all calculated from the responses of on-grade-level students assessed in the fall, winter, and spring on the grade-level measures (2,000 students at each time point on each measure, sampled from across the four regions of the country to match national school demographics).