Understanding Percentile Rank

Understanding percentile ranks can be a little confusing, and performance on assessments across the year can sometimes seem to defy logic. To demonstrate, let’s use 8th graders’ performance as an example.

In the world of statistics, percentile rank refers to the percentage of scores that are equal to or less than a given score. In the case of national norm scores, such as those provided as a reference on the easyCBM system, this refers more specifically to the percentage of scores in the national norm sample that are equal to or less than the score your student received.
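If it helps to see that definition concretely, here is a minimal sketch of the calculation in Python. The scores in norm_sample are invented for illustration only; they are not actual easyCBM norm data.

```python
# Minimal sketch of the percentile-rank idea: the percentage of scores in a
# norm sample that are equal to or less than a given score.

def percentile_rank(score, norm_sample):
    """Return the percentage of scores in norm_sample that are <= score."""
    at_or_below = sum(1 for s in norm_sample if s <= score)
    return 100 * at_or_below / len(norm_sample)

# Hypothetical norm sample (made-up numbers, not real easyCBM norms).
norm_sample = [14, 9, 16, 11, 7, 13, 18, 12, 15, 10]
print(percentile_rank(11, norm_sample))  # 40.0 with these made-up scores
```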

Let’s take the example of an 8th grade student who scores 11 out of 20 correct on the fall, winter, and spring Proficient Reading measures on easyCBM.

In this case, the 11 in the fall corresponds to the 18th percentile. What this means is that 18 percent of the national norm sample of 8th graders scored 11 or lower (11, 10, 9, and so on down to 0) on the fall Proficient Reading measure.

By the time winter came around, though, the 8th grade students in the national norm sample had, as a group, scored lower on average than they did in the fall. As a result, the same score of 11 out of 20 correct results in a percentile rank of 24; in other words, 24% of 8th graders in the national norm sample scored 11 or lower on the winter Proficient Reading test. Thus, although your student’s score remained the same (11 out of 20 correct), more students in the national norm sample scored either the same (11) or lower (10, 9, 8, etc.) on the winter measure. And, in the spring, students in the national norm group experienced another slight slump, so that the same score of 11 (or lower) was now ‘earned’ by 25 percent of students in the national norm group, placing it at the 25th percentile.
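To see why the same raw score can map to three different percentile ranks, here is the same sketch run against three hypothetical seasonal norm distributions in which the norm group scores a little lower at each benchmark. All of the numbers are invented for illustration; only the pattern matters.

```python
# Same helper as in the sketch above.
def percentile_rank(score, norm_sample):
    at_or_below = sum(1 for s in norm_sample if s <= score)
    return 100 * at_or_below / len(norm_sample)

# Hypothetical norm distributions: the group scores a bit lower each season,
# so more of the sample falls at or below a raw score of 11 as the year goes on.
seasons = {
    "fall":   [15, 16, 12, 14, 17, 11, 13, 18, 16, 10],
    "winter": [14, 12, 11, 13, 15, 10, 12, 16, 11, 9],
    "spring": [13, 11, 10, 12, 14, 9, 11, 15, 11, 8],
}

for season, sample in seasons.items():
    # The student's raw score stays at 11, but the percentile rank rises.
    print(season, percentile_rank(11, sample))
```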

When we first observed this slump in 2006, the year we launched easyCBM, we initially thought that either (a) we had computed the norms wrong (at the time we didn’t have national norms, but we had computed regional norms based on the 10 districts that were the initial users of easyCBM) or (b) the test forms somehow got more difficult over the course of the year. Accordingly, the next year we swapped the forms for the next group of 8th graders: the test originally given in the spring was given in the fall, and the one originally given in the fall was given in the spring. Much to our surprise, we found the same result in Year 2. We had several colleagues check the computations for the regional norms, and we all arrived at the same results.

With the national norms, we sampled 2,000 8th grade students in fall, winter, and spring, with proportional representation across regions and proportional representation of race/ethnicity, SPED, ELL, and gender to match the National Center for Education Statistics’ sampling plan. And, again, we found the same results: 8th grade students, on average, tend to score highest in the fall and lower on each subsequent benchmark assessment.

In the end, we are left with the conclusion that this phenomenon may have more to do with 8th grade students, hormones, and natural development than with the assessments per se.

That said, we have seen numerous occasions where 8th grade students ‘bought in’ to the idea of improving their reading comprehension and defied the group norms, increasing their Proficient Reading performance not only across the benchmark assessments from fall to winter and winter to spring, but also on the progress monitoring assessments given monthly throughout the year. Thus, we know it is possible for 8th grade students who have decided to improve their reading comprehension to do so over the course of the school year, and for that improvement to be documented by their performance on the easyCBM measures.
