Carson Olsen Elise takes a practice SAGE test at Polk Elementary School in Ogden on Thursday, April 17, 2014. (Photo: Kristin Murphy)

It’s the season of the year that brings tulips, lilacs and Student Assessment of Growth and Excellence (SAGE) testing. Soon, news outlets will publish how students performed on SAGE tests compared with past years. School officials and parents will wonder: “Are we seeing student growth?” “Are kids getting better at math?” “Are they better writers?”

This week, I reminded my chemistry students how to design an experiment. You have an independent variable, a dependent variable, a question, a hypothesis. And you have constants. If I want to see how the concentration of hydrochloric acid affects reaction rate, I can tinker with the concentration of acid, but the rest of the experiment needs to remain constant. That way I can say, with confidence, that altering the concentration is what changed the outcome, as opposed to some other factor, like the temperature of the acid.

In Utah, we have created a culture of students — particularly in the older grades — where everything has to be done for “points.” If we have a learning activity, students want to know how many points it’s worth. In the initial years of SAGE testing, scores could be used for “points” on student grades, used in place of a final exam or as a reason to have a pizza party.

However, state education officials have changed the SAGE game. At least in the last two years, teachers have been expressly prohibited from using SAGE scores as incentives in any way. We cannot give out treats or hold parties. We can’t have friendly competitions between classes. We cannot exempt students from a final based on SAGE scores. Scores cannot impact a student grade in any way.

As teachers, our SAGE scores are scrutinized by various officials, including supervisors, district staff, legislators and parents. If our proficiency percentage is low, that will be a cause for alarm among data-driven education managers. SAGE results are also compared from school to school and district to district. Scores are analyzed closely, then used to make school and district-wide decisions about how to help students better learn curriculum standards.

Data is essential for decision-making in any organization, and it makes sense for education managers and teachers to analyze SAGE results to make improvements. However, a constant has been removed from the grand scientific experiment of standardized SAGE testing, particularly for older students. Do older students perform as well on an assignment when there are no points attached? Usually not. Do they do their best on SAGE simply for the joy of learning? Usually not. Granted, there are rock star teachers who can motivate students to do well on anything, or maybe a few who give threats that are taken seriously. But the majority of teachers would agree that how much an assignment “counts” is a serious driving force of student effort.


Consider the ACT, administered to juniors statewide: for decades, its scores have “counted” toward college admission and scholarships. By contrast, state education officials have undermined the scientific soundness of the SAGE experiment by changing the rules and yanking out a critical constant — the constant of how much the SAGE “counts.” As SAGE results are rolled out in a few weeks, be wary of comparing them with data from past years, and take them with a grain of salt. Better yet, a spoonful of salt.