Welcome to Part 3 of the value-added Myth Busters blog series. I have heard a variation of this many times.
“Why shouldn’t educators just use a simple gains approach or a pre- and post-test? They can trust simpler methodologies because they can replicate and understand them more easily.”
Simple growth measures might be sufficient if we were working with perfect data. However, student assessment data is far from perfect:
- In a perfect world…
  - All students would start the year on grade level.
  - All students would progress at the same rate.
  - Students would never miss a standardized test.
  - Students would perform at peak levels on test day.
  - All large-scale achievement tests would be perfect measures of student attainment and would fully capture student progress.
- But in the real world…
  - Not all students begin the year on grade level.
  - Not all students progress at the same pace.
  - Some students miss their standardized test, leaving missing data.
  - Students and teachers move within the school year.
  - Shared instructional practices exist, such as team teaching, push-in, and pull-out.
  - Tests are on differing scales, are not all vertically aligned, and change over time.
  - All tests contain measurement error; a score is only an estimate of what a student knows on a given day, and some students underperform on test day.
Given these analytical challenges, statistical rigor is clearly necessary to produce precise and reliable growth measures. This is all the more important when the reporting is used for educator evaluations.
What is the downside of using simpler methodologies?
Growth estimates based on simple calculations are often correlated with the type of students an educator serves rather than with the educator's effectiveness with those students. In other words, high-achieving students tend to show higher growth, and low-achieving students tend to show lower growth. This turns the growth model into more of a status model, which we already have by looking at achievement data alone. Such models often unfairly disadvantage educators serving low-achieving students and unfairly advantage educators serving high-achieving students. Empirical evidence from any growth model should be examined to see how strong that relationship is.
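The pattern described above can be illustrated with a toy simulation. In the sketch below, every number and the "fan-spread" assumption (higher-entering students grow faster) are illustrative, not drawn from any real assessment: two teachers are given identical true effectiveness but different student populations, and a simple average-gain calculation still separates them.

```python
import random
import statistics

random.seed(0)

def simulate_classroom(mean_entry, n=100, teacher_effect=0.0):
    """Average simple gain for one classroom. Each student's annual
    growth depends partly on entering achievement (an assumed
    fan-spread pattern), plus the teacher's true effect and noise."""
    gains = []
    for _ in range(n):
        entry = random.gauss(mean_entry, 10)
        # Illustrative assumption: higher-entering students grow faster.
        growth = 5 + 0.05 * entry + teacher_effect + random.gauss(0, 3)
        gains.append(growth)
    return statistics.mean(gains)

# Two equally effective teachers (teacher_effect = 0.0) serving
# different student populations.
low_entry_gain = simulate_classroom(mean_entry=40)   # low-achieving class
high_entry_gain = simulate_classroom(mean_entry=80)  # high-achieving class
print(low_entry_gain, high_entry_gain)
```

Under these assumptions, the simple gain measure rewards the teacher with the higher-achieving class even though both teachers are identical, which is exactly the "status model" behavior the post describes.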
The bottom line:
If we want growth and value-added models to level the playing field for all educators regardless of the students they serve, they must be rigorous enough to adequately account for students’ entering achievement levels and the various challenges associated with assessment data listed above.
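One minimal way to see what "accounting for entering achievement" buys is a covariate adjustment: regress gains on the entering score and compare teachers on their residual (entry-adjusted) gains. The sketch below reuses the illustrative fan-spread setup from above; the ordinary-least-squares fit is a deliberately simple stand-in, not any vendor's actual value-added model.

```python
import random
import statistics

random.seed(1)

def make_students(mean_entry, teacher_effect, n=100):
    """(entry score, simple gain) pairs under the illustrative
    fan-spread assumption: growth rises with entering achievement."""
    rows = []
    for _ in range(n):
        entry = random.gauss(mean_entry, 10)
        gain = 5 + 0.05 * entry + teacher_effect + random.gauss(0, 3)
        rows.append((entry, gain))
    return rows

# Two equally effective teachers with different student populations.
data = make_students(40, 0.0) + make_students(80, 0.0)
labels = ["A"] * 100 + ["B"] * 100

# Fit gain ~ entry by ordinary least squares (computed by hand).
xs = [e for e, _ in data]
ys = [g for _, g in data]
mx, my = statistics.mean(xs), statistics.mean(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
intercept = my - slope * mx

# Residual gain = observed gain minus what entry level alone predicts.
resid = {"A": [], "B": []}
for (entry, gain), teacher in zip(data, labels):
    resid[teacher].append(gain - (intercept + slope * entry))

print(statistics.mean(resid["A"]), statistics.mean(resid["B"]))
```

After adjustment, both teachers' average residual gains sit near zero, matching their identical true effects; the raw-gain comparison from the earlier setup would not. Real value-added models are far more elaborate (multiple prior scores, missing data, shared instruction), but the direction of the fix is the same.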