Busting myths of education value-added analysis, Part 3: Simple growth measures provide better information to educators.

Welcome to Part 3 of the value-added Myth Busters blog series. I have heard variations of this myth many times:

“Why shouldn’t educators just use a simple gains approach or a pre- and post-test? They can trust simpler methodologies because they can replicate and understand them more easily.”

Simple growth measures might be sufficient if we were working with perfect data. However, student assessment data is far from perfect:

  • In a perfect world…
    • All students would start the year on grade level.
    • All students would progress at the same rate.
    • Students would never miss a standardized test.
    • Students would perform at peak levels on test day.
    • All large-scale achievement tests would be perfect measures of student attainment and would be designed to measure student progress.
  • But in the real world…
    • Not all students begin the year on grade level.
    • Not all students progress at the same pace.
    • Some students miss their standardized test and have missing data.
    • Student and teacher mobility exists within the school year.
    • Shared instructional practices exist, such as team teaching, push-in, pull-out, etc.
    • Tests are on differing scales, are not all vertically aligned, and change over time.
    • All tests contain measurement error; any score is only an estimate of what a student knows on that given day, and some students underperform on test day. (The short simulation after this list shows how that error compounds in a simple gain score.)
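
To make the measurement-error point concrete, here is a minimal simulation (all numbers, such as the standard error of measurement, are hypothetical). Even when every student truly grows by exactly the same amount, the noise from two imperfect tests makes individual pre/post gain scores scatter widely:

```python
import numpy as np

rng = np.random.default_rng(0)
n_students = 10_000
true_growth = 10.0                          # every student truly grows 10 points

true_pre = rng.normal(500, 50, n_students)  # hypothetical scale-score distribution
true_post = true_pre + true_growth

sem = 15.0                                  # assumed standard error of measurement
obs_pre = true_pre + rng.normal(0, sem, n_students)
obs_post = true_post + rng.normal(0, sem, n_students)

simple_gain = obs_post - obs_pre

# Error from both tests stacks in a difference score, so individual
# gains vary widely around the true value of 10.
print(f"mean simple gain: {simple_gain.mean():.1f}")    # close to 10 on average
print(f"std of simple gains: {simple_gain.std():.1f}")  # about sem * sqrt(2), i.e. ~21
```

The average gain is right, but for any individual student, and even for a small classroom, the measurement noise can easily swamp the signal. That is exactly the error a more rigorous model has to account for.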

Given these analytical problems, genuine statistical rigor is necessary to produce precise and reliable growth measures. That rigor becomes even more important when the reporting feeds into educator evaluations.

What is the downside to using more simplistic methodologies?

Growth estimates based on simple calculations often correlate with the type of students an educator serves rather than with the educator’s effectiveness with those students. In other words, high-achieving students tend to show higher growth, and low-achieving students tend to show lower growth. That turns the growth model into little more than a status model, which we already have by looking at achievement data alone. Such models often unfairly disadvantage educators serving low-achieving students and unfairly advantage educators serving high-achieving students. Before any growth model is used for high-stakes purposes, its estimates should be examined empirically to see how strongly they correlate with students’ entering achievement.
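
As an illustration of that status problem, here is a small sketch on simulated data (hypothetical parameters, not any vendor's actual model). The data are built so that higher-achieving students post larger simple gains, as can happen when test scales are not vertically aligned. The simple gain then correlates with entering achievement, while a basic covariate-adjustment alternative, regressing the post-test on the pre-test and treating the residual as growth, is uncorrelated with entering achievement by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
pre = rng.normal(500, 50, n)   # entering achievement on a hypothetical scale

# Assume the scales are not vertically aligned, so the post-test stretches
# gains for high achievers; add measurement noise on top.
post = pre + 10 + 0.1 * (pre - 500) + rng.normal(0, 15, n)

simple_gain = post - pre       # the "simple gains" approach
print(f"corr(simple gain, pre):     {np.corrcoef(simple_gain, pre)[0, 1]:+.2f}")

# Covariate adjustment: predict the post-test from the pre-test and use
# the residual as the growth measure.
slope, intercept = np.polyfit(pre, post, 1)
residual_growth = post - (slope * pre + intercept)
print(f"corr(residual growth, pre): {np.corrcoef(residual_growth, pre)[0, 1]:+.2f}")
```

The first correlation is the problem in miniature: the "growth" measure partly restates who the students were at the start of the year. Production value-added models go well beyond this two-variable sketch (multiple prior test scores, missing-data handling, shrinkage toward the mean), but the residual idea shows what "accounting for entering achievement" means in practice.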

The bottom line:

If we want growth and value-added models to level the playing field for all educators regardless of the students they serve, they must be rigorous enough to adequately account for students’ entering achievement levels and the various challenges associated with assessment data listed above.

About Author

Nadja Young

Senior Manager, Education Consulting

Hi, I’m Nadja Young. I’m a wife and mother of two who loves to dance, cook, and travel. As SAS’ Senior Manager for Education Industry Consulting, I strive to help education agencies turn data into actionable information to better serve children and families. I aim to bridge the gaps between analysts, practitioners, and policy makers to put data to better use to improve student outcomes. Prior to joining SAS, I spent seven years as a high school Career and Technical Education teacher certified by the National Board of Professional Teaching Standards. I taught in Colorado’s Douglas County School District, in North Carolina’s Wake County Public School System, and contracted with the NC Department of Public Instruction to write curriculum and assessments. I’m thrilled to be able to combine my Bachelor of Science degree in Marketing Management and Master of Arts degree in Secondary Education to improve schools across the country.
