An integral part of the easyCBM system is the empirical foundation on which it is built. All of the measures have been developed with rigorous attention to their technical adequacy — the reliability of their scores as well as their validity for screening and progress monitoring. We have discussed the possibility of adding writing measures, but have not yet been satisfied with the technical adequacy of writing CBMs. The most genuine writing measures depend on a human rater's ability to evaluate writing quality. Although some writing CBMs use metrics such as 'words written per minute' or 'correct word sequences written per minute', we are not satisfied that those approaches adequately capture the construct. More authentic writing assessments, however, may end up with less reliable scoring, as opinions on 'quality' can vary quite a bit depending on the person doing the evaluating.
As for behavioral interventions, many schools do make note of behavioral interventions in the easyCBM system in an attempt to track their potential effect on students' reading and mathematics performance.
For tracking the impact of an intervention on specific behaviors (such as speaking out in class, causing classroom disturbances, etc.), we recommend other tracking methods that use a similar approach: a single-subject research design, tracking the behavior both during baseline and after the intervention has been implemented. The way the easyCBM system works, though, graphs can only be generated when the system has a database of 'test' options to hold the data portrayed on the graph (sorry for the complicated explanation here!).
Most schools/districts have some sort of central student information system that they use to track a variety of data (attendance, behavior, assessment results, etc.). Although easyCBM is designed to integrate well with such data systems, it would be beyond the scope of its intended use to try to replace student data systems themselves.