The past ten years have witnessed an explosion in the use of interim assessments by school districts across the country. A primary reason for this rapid growth is the assumption that interim assessments can inform and improve instructional practice and thereby contribute to increased student achievement. Testing companies, states, and districts have become invested in selling or creating interim assessments and data management systems designed to help teachers, principals, and district leaders make sense of student data, identify areas of strength and weakness, identify instructional strategies for targeted students, and much more. Districts are keeping their interim tests even under pressure to cut budgets (Sawchuk, 2009). The U.S. Department of Education is using its Race to the Top program to encourage states and school districts to develop formative or interim assessments as part of comprehensive state assessment systems.
Much of the rhetoric around interim assessments paints a rosy picture, often with the ultimate claim that such measures will lead to increased student achievement. Much of the belief in the potential of interim assessments to improve student learning comes from the growing body of research on formative assessment. However, the majority of this research has focused not on interim assessments themselves, but rather on practices that are embedded within classroom instruction. Very few studies exist on how interim assessments are actually used by individual teachers in classrooms, by principals, or by districts. Furthermore, we know little about how teachers and other educators use the results from such assessments, the conditions that support their ability to use these data to improve instruction, or the interaction of interim assessments with other classroom assessment practices. Our study begins to fill that gap.