How do I find out if a student is at risk?

Overall, risk is not an either/or determination: both the raw score and the percentile rank, viewed over time, are important when interpreting students' performance (and their growth) from one administration to the next, and thus their risk. With that premise in mind...

While student growth can be observed as increases in raw scores (or percent correct) over time, and this does demonstrate improvement (e.g., more math problems answered correctly), other students are also growing their knowledge and skills (as is evident in our nationally representative norming calculations). Some of this growth is naturally occurring, due to maturation: on average, raw scores go up over time across all students. But a portion of the observed growth in an actual student, and we would hope most of it, is due to learning itself: teachers' instruction, practice with calculations, solving problems, reading more material, and so on. In other words, the effects of teachers and schools on student learning are present and measurable. Thus, if a given student's raw scores are going up (improving) but their performance relative to peers (i.e., their percentile rank) remains low, nearly flat, or even drops, then they are falling further and further behind grade-level peers who continue to grow at higher rates toward meeting grade-level expectations.
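To make this concrete, here is a minimal sketch in Python, using made-up norm samples rather than easyCBM's actual norming data, of how a student's raw score can climb every season while their percentile rank stays flat because the norm group is climbing at the same pace:

```python
# Minimal sketch with hypothetical norm samples (not easyCBM's actual norms):
# a student's raw score rises each season, but so do peers' scores, so the
# student's percentile rank never moves.

def percentile_rank(score, norm_scores):
    """Percent of the norm sample scoring strictly below `score`."""
    below = sum(1 for s in norm_scores if s < score)
    return 100 * below / len(norm_scores)

# Hypothetical norm samples for the fall, winter, and spring administrations.
norms = {
    "fall":   [10, 12, 13, 14, 15, 16, 17, 18, 19, 21],
    "winter": [13, 15, 16, 17, 18, 19, 20, 21, 22, 24],
    "spring": [16, 18, 19, 20, 21, 22, 23, 24, 25, 27],
}

# A student whose raw score improves by 3 points each season...
student = {"fall": 12, "winter": 15, "spring": 18}

for season, raw in student.items():
    print(f"{season:6s} raw={raw:2d} percentile={percentile_rank(raw, norms[season]):.0f}")
# fall   raw=12 percentile=10
# winter raw=15 percentile=10
# spring raw=18 percentile=10
```

Despite gaining 6 raw-score points from fall to spring, this hypothetical student stays at the 10th percentile the whole time: real improvement, but no catching up.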

Another way of saying this is that raw scores can go up, giving the appearance of meaningful growth, when the observed growth is in fact not enough because it is quite low relative to grade-level peers. Raw scores alone do not tell you much unless you also look at the percentile ranks. Some growth over the course of the year is expected, but you need to check the percentile ranks to see whether a student's growth was actually steeper than the growth made by same-grade peers over that period (i.e., whether they are catching up and getting closer to the goal of grade-level proficiency). This is the reason both raw scores and percentile ranks must be considered when interpreting student performance.
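The "steeper than peers" check can be written down directly. The sketch below uses hypothetical seasonal medians (not published easyCBM norms) to compare a student's raw-score gain against the typical peer gain over the same window:

```python
# Minimal sketch of the "steeper than peers" check, with hypothetical
# seasonal medians standing in for the norm group's typical scores.

peer_median = {"fall": 15.0, "spring": 21.0}   # hypothetical norm-group medians
student = {"fall": 12, "spring": 18}           # one student's raw scores

student_gain = student["spring"] - student["fall"]       # 6 points
peer_gain = peer_median["spring"] - peer_median["fall"]  # 6 points

if student_gain > peer_gain:
    print("Growing faster than typical peers: closing the gap.")
elif student_gain == peer_gain:
    print("Growing at the same rate as peers: the gap persists.")
else:
    print("Growing more slowly than peers: falling further behind.")
```

Here the student grows, and grows at the typical rate, yet never closes the distance to grade-level peers; only a steeper-than-typical slope closes the gap.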

Another thing to consider: changes (or even jumps) in percentile rank can also be a function of a measure's scale. For example, the three Math progress-monitoring measures at Grade 8 (Algebra, Geometry and Measurement, and Data Analysis) have only 16 items each, so a change of 1-2 points can translate into big jumps in percentile rank over time, even though the change in the raw score was minimal. The same is likely true of the Basic and Proficient Reading Comprehension measures, which have 25 and 20 points possible, respectively. Once again, raw scores and percentiles both need to be considered.
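A small worked example shows the scale effect. In the sketch below, which uses an invented norm distribution for a 16-item measure (not easyCBM's actual data), a single raw-score point moves the percentile rank by 20-25 points because so many students sit on the same few scores:

```python
# Minimal sketch with an invented norm distribution for a 16-item measure
# (not easyCBM's actual data): on a short scale, one raw point can span a
# wide slice of the distribution.

def percentile_rank(score, norm_scores):
    below = sum(1 for s in norm_scores if s < score)
    return 100 * below / len(norm_scores)

# 100 hypothetical students, clustered around raw scores of 9-12.
norm = [7]*5 + [8]*10 + [9]*20 + [10]*25 + [11]*20 + [12]*10 + [13]*5 + [14]*3 + [15]*2

for raw in (9, 10, 11):
    print(f"raw={raw} percentile={percentile_rank(raw, norm):.0f}")
# raw=9  percentile=15
# raw=10 percentile=35
# raw=11 percentile=60
```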

All of this harkens back to an important principle for determining students' knowledge and skill using assessments:

Using multiple pieces of evidence is critical for getting a rich picture of students' strengths and weaknesses. Within easyCBM, you are benchmarking and progress-monitoring, and those results give you some evidence about students' reading/math knowledge and skill over time. But is a drop in percentile rank a sign that the student is truly still struggling in one or more areas your instruction is targeting? Or is the drop more a function of the tight measure scale, with the student getting 80+% of the items correct while doing well on other measures (i.e., other assessments, such as teacher-created quizzes or performance tasks that are likely more tightly aligned with your instruction) and responding well in class when informally prompted to answer questions and demonstrate knowledge and skill? There is both science and art within the framework of using assessment results, alongside a constellation of formal and informal evidence, to guide decision-making.
