Student growth measures can be the bridge to new assessments

Image courtesy of Jessica Dobbs at DeviantArt.com

As I embark on 2014, I reflect upon the many competing, yet interdependent, tensions discussed in education circles in 2013. In conferences, classrooms and statehouses, adults who care about kids debated the best ways to implement:

  • New academic standards (Common Core State Standards or other College and Career Ready Standards)
  • New curriculum and technology-rich instructional resources
  • New assessments aligned to the new standards
  • New educator evaluation systems based in part on academic achievement and/or growth on the new assessments
  • New school accountability systems based in part on academic achievement and/or growth on the new assessments

In fact, eight of Education Week’s Top 10 State K-12 Blog Posts of 2013 fell into the above categories. The tension I hear about most frequently as a former teacher is the connection of new teacher evaluation systems to new Common Core-aligned assessments, described well by Andrew Ujifusa in Common Core and Evaluations: Are Teachers 'Going Crazy'?

Since coming to SAS, I understand how this tension and seemingly valid concern are rooted in a common confusion between achievement and growth. As I have come to better understand growth models, I believe some of them can be the bridge we need to gauge student performance across changes in standards and assessments.

Student achievement cannot be reliably compared from an old test to a new test. Educators and education policy folks alike expect that student achievement (test scores) will drop at the onset of more rigorous assessments that measure students against more rigorous standards. If teachers feel their evaluations will be tied to these lower test scores (achievement), they have reason for concern.

However, many evaluation systems rely more heavily on student growth measures than on achievement, and student growth can be reliably compared from an old test to a new one. Some student growth measures already incorporate a wide range of assessments across grades and subjects that change over time. They look at students’ position in a statewide distribution from year to year to make apples-to-apples comparisons. While proficiency rates may drop, teachers and schools can still show high growth: for example, a group of students that moves from the 50th percentile on the old test to the 55th percentile on the new test has demonstrated high growth, even if fewer of them score proficient. Reliable growth/value-added measures can serve as this bridge.
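
To make the percentile idea concrete, here is a minimal sketch of that kind of comparison. It is not the TVAAS/EVAAS methodology (real growth and value-added models are far more sophisticated), and all of the data and names in it are hypothetical: it simply converts raw scores on two differently scaled tests into percentile ranks within each year’s statewide distribution and compares a group’s average position.

import numpy as np

def percentile_rank(scores, statewide):
    # Percentile (0-100) of each score within the statewide distribution.
    statewide = np.sort(statewide)
    return np.searchsorted(statewide, scores, side="right") / len(statewide) * 100

# Hypothetical data: the old and new tests use different score scales,
# so raw scores cannot be compared directly.
rng = np.random.default_rng(0)
statewide_old = rng.normal(500, 50, 100_000)  # old test, roughly a 350-650 scale
statewide_new = rng.normal(30, 8, 100_000)    # new, harder test, roughly a 0-60 scale

# One hypothetical group of students on each year's statewide test.
group_old = rng.normal(500, 50, 25)           # about average on the old test
group_new = rng.normal(32, 8, 25)             # modestly above average on the new test

avg_pct_old = percentile_rank(group_old, statewide_old).mean()
avg_pct_new = percentile_rank(group_new, statewide_new).mean()

print(f"Average percentile on the old test: {avg_pct_old:.0f}")
print(f"Average percentile on the new test: {avg_pct_new:.0f}")
# A rise in the group's average percentile (roughly 50th to 55th here) is
# evidence of growth even though raw proficiency rates on the new test drop.

The point of the sketch is simply that a student’s position in the statewide distribution, not the raw score, is what carries over from one test to the next.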

The sky is not falling. This is not the first time states have changed standards or assessments. Student growth has been measured in Tennessee for over 20 years, across many of these changes. See how in the blog post Transitioning value-added and growth models to new assessments. Tennessee’s Commissioner of Education, Kevin Huffman, made this point clearly at a recent Annual Policy Forum put on by the Council of Chief State School Officers:

“Everyone assumes that with new tests, achievement will be low, and then value-added will be low. That’s not exactly how it will play out. We now need to communicate that effectively…. A few years ago, we used to have 90% of kids proficient on state assessments, but then performed very low on NAEP. We already raised the bar in 2010 (by establishing more rigorous standards and assessments) and we went from 90% to 30% proficiency overnight…. There was initial noise and frustration, but then it stopped and everyone survived. By the time I got to TN in 2011, people weren’t even talking about the lower test scores; they were talking about: How do we now get better?”

And indeed Tennessee ‘got better.’ Public TVAAS reporting provides scatterplots of growth and achievement data showing both improving since 2010. In November 2013, Tennessee posted the largest academic growth of any state on the 2013 National Assessment of Educational Progress (NAEP), making it the fastest-improving state in the nation. Leaders in Tennessee know that raising the bar, and revealing initially lower achievement, is a necessary step in the improvement process. It’s the right thing to do for kids. States about to embark on this work in 2013-14 should keep in mind that this has been done before, lessons can be learned, and the sky is not falling, so long as we have reliable growth measures to serve as our bridge across assessments.

About Author

Nadja Young

Senior Manager, Education Consulting

Hi, I’m Nadja Young. I’m a wife and mother of two who loves to dance, cook, and travel. As SAS’ Senior Manager for Education Industry Consulting, I strive to help education agencies turn data into actionable information to better serve children and families. I aim to bridge the gaps between analysts, practitioners, and policy makers to put data to better use to improve student outcomes. Prior to joining SAS, I spent seven years as a high school Career and Technical Education teacher certified by the National Board for Professional Teaching Standards. I taught in Colorado’s Douglas County School District, in North Carolina’s Wake County Public School System, and contracted with the NC Department of Public Instruction to write curriculum and assessments. I’m thrilled to be able to combine my Bachelor of Science degree in Marketing Management and Master of Arts degree in Secondary Education to improve schools across the country.
