Recently, the American Statistical Association (ASA) released a statement about value-added modeling. The statement was widely covered in the national press, some of which positioned it as a significant blow to value-added modeling. However, the ASA statement did not “slam” value-added modeling; rather, its authors advocated statistical rigor, responsible implementation, and statistical expertise in developing and interpreting the models. We at SAS agree with these principles, and the EVAAS models largely follow ASA’s recommendations.
In response to such press coverage, the ASA used one of its community blogs to clarify its intent in publishing the statement and what the statement actually recommends. The majority of that blog post is reposted here:
Last week, the ASA Board of Directors adopted an “ASA Statement on Value-Added Models for Educational Assessment.” What the statement says, and why the ASA makes such statements, are the topics of today’s ASA at 175 blog.
As noted in the ASA’s press release on the statement, use of value-added models (VAMs) has become more prevalent, perhaps because these models are viewed as more objective or authoritative than other types of information. VAMs attempt to measure the value a teacher adds to student-achievement growth by analyzing changes in standardized test scores. VAMs are sometimes used in high-stakes decisions such as determining compensation, evaluating and ranking teachers, hiring or dismissing teachers, awarding tenure and closing schools.
The ASA position statement makes the following recommendations:
- Estimates from VAMs should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model.
- VAMs should be viewed within the context of quality improvement, which distinguishes aspects of quality that can be attributed to the system from those that can be attributed to individual teachers, teacher preparation programs or schools.
- The ASA endorses wise use of data, statistical models and designed experiments for improving the quality of education.
- VAMs are complex statistical models, and high-level statistical expertise is needed to develop the models and interpret their results.
The story already has been picked up in several places:
- http://atthechalkface.com/2014/04/09/k12nn-american-statistical-association-has-just-released-a-very-important-document-on-value-added-methodologies/
- http://larryferlazzo.edublogs.org/2014/04/08/another-nail-in-vams-coffin/
- http://bigeducationape.blogspot.com/2014/04/statisticians-group-issues-statement-on.html
- http://blogs.edweek.org/edweek/teacherbeat/2014/04/statisticians_group_issues_sta.html
- http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/04/13/statisticians-slam-popular-teacher-evaluation-method/
- Huffington Post
- Politico (subscriber-only article)
These articles, and others that will appear, reflect the controversial nature of this issue and the way position statements such as this one are interpreted through the lens of a writer’s own position on the matter. One education writer wrote us an extremely cranky note about how “spineless” the VAMs statement was. He used several more adjectives to describe his displeasure.
The ASA is not in the business of determining educational policy, but we are very much in the business of promoting sound statistical practice. Our descriptive statement about the ASA notes that we promote “sound statistical practice to inform public policy and improve human welfare.” This statement on VAMs urges people to think carefully about the uses of these models and to engage with statistical experts, because such models require expertise to use correctly. Especially when the stakes are high, it is sensible to ensure decisions are made based on proper data and analysis. That’s what we as statisticians bring to the table.
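The repost above describes VAMs only in general terms, so it may help to sketch what a simple one looks like on paper. The covariate-adjustment form below is a minimal illustration of the general idea, not the ASA’s formulation and not the EVAAS specification, both of which involve considerably more structure:

```latex
% Minimal covariate-adjustment sketch (illustrative notation only):
%   y_{it}  : student i's test score in year t
%   j(i,t)  : the teacher assigned to student i in year t
%   theta_j : the "value added" by teacher j
y_{it} = \beta_0 + \beta_1\, y_{i,t-1} + \theta_{j(i,t)} + \varepsilon_{it},
\qquad \varepsilon_{it} \sim N(0, \sigma^2)
```

The teacher effect θ_j is read as the average amount by which teacher j’s students outperform, or underperform, what their prior scores alone would predict. Even this toy version illustrates ASA’s point about expertise: handling missing scores, team-taught students, measurement error, and shrinkage toward the mean all require statistical judgment.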
We also agree with ASA that value-added reporting can provoke intense emotions, and that some of the press coverage reveals its authors’ own feelings, or their lack of expertise in interpreting technical issues. Value-added models can be complex, and they should be complex in order to address many of the concerns that educators have about using student testing data. My colleague Nadja Young discusses that in more detail here. However, these political opinions should not supplant what is known from two decades of high-quality research performed by statisticians and economists, such as:
- Teaching matters. Differences in teaching effectiveness have a highly significant effect on the rate of student academic progress.[i]
- Teaching matters a lot, because ineffective teaching cannot be compensated for in later years. Teacher effects were found to be cumulative and additive, with very little evidence of compensatory effects.[ii]
- Students’ background does not determine their progress. In robust value-added models, students can make significant progress regardless of their race or ethnicity.[iii]
- Good teaching is about more than raising test scores. Teachers’ value-added effectiveness is correlated with students’ later success in other areas, such as college attendance and income.[iv]
There is a legitimate debate about the appropriate role of value-added analysis in educational policies, which is evident in the myriad ways that states and districts have used this reporting. It’s important to have these discussions with full understanding of research and quality implementations. ASA’s statement is a step in this direction, not a step away from the use of value-added modeling.
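ASA’s first recommendation, that estimates always travel with measures of precision, is easy to illustrate in code. The sketch below simulates a small dataset and fits a deliberately simple fixed-effects version of a VAM; the data, variable names, and use of statsmodels are all illustrative assumptions, not the EVAAS methodology:

```python
# Hedged sketch: report teacher-effect estimates together with 95%
# confidence intervals (the "measure of precision" ASA asks for).
# Everything here is simulated and simplified; it is not EVAAS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for t in range(8):                                 # 8 teachers
    effect = rng.normal(scale=0.15)                # true (unknown) effect
    for _ in range(25):                            # 25 students each
        prior = rng.normal()
        rows.append({"teacher": f"T{t}", "prior": prior,
                     "score": 0.8 * prior + effect
                              + rng.normal(scale=0.5)})
df = pd.DataFrame(rows)

# Current score on prior score plus teacher indicators.
fit = smf.ols("score ~ prior + C(teacher)", data=df).fit()

# Each teacher contrast comes with an interval, not just a point estimate.
print(fit.conf_int().filter(like="teacher", axis=0).round(2))
```

With only 25 students per teacher, the intervals in this simulation are wide relative to the effects themselves, which is precisely why ASA warns against over-interpreting point estimates in high-stakes decisions.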
[i] Staiger, D., & Rockoff, J. (2010). Searching for effective teachers with imperfect information. Journal of Economic Perspectives, 24(3), 97–118.
[ii] Sanders, W. L., & Rivers, J. C. (1996). Cumulative and residual effects of teachers on future student academic achievement. Knoxville: University of Tennessee Value-Added Research and Assessment Center.
[iii] Lockwood, J. R., & McCaffrey, D. F. (2007). Controlling for individual heterogeneity in longitudinal models, with applications to student achievement. Electronic Journal of Statistics, 1, 244.
[iv] Chetty, R., Friedman, J. N., & Rockoff, J. E. (2011). The long-term impacts of teachers: Teacher value-added and student outcomes in adulthood (NBER Working Paper No. 17699). National Bureau of Economic Research.
2 Comments
This quotation from the ASA document sums up the entire issue best: "The majority of the variation in test scores is attributable to factors outside of the teacher’s control". To, as the author above has, try and frame ASA's position as supportive of VAM phrenology takes mendaciousness to breathtaking heights. Rather than considering students as empty receptacles for "knowledge" deposited by a method that can be "measured," perhaps we can start talking about students as agents in their own pedagogical experiences—something that doesn't exist in the current regime of the profitable testing-industrial-complex.
Thanks for your comment. The ASA statement seems to discuss the primary drivers of student test scores, not of student growth. It is well known that there is a strong relationship between students’ achievement (or test scores) and their socioeconomic/demographic background. However, there is typically little or no relationship between students’ growth and their socioeconomic/demographic background.
Another way to see this: the most important predictor of current test scores is prior test scores, and once enough prior test scores are included in the model, the socioeconomic/demographic factors become relatively small or even non-significant, despite enormous sample sizes.
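A toy simulation makes that point concrete. By construction here, socioeconomic status (SES) shifts a student’s level of achievement but not their growth; all names and numbers are illustrative assumptions, not EVAAS’s actual model:

```python
# Toy simulation: the SES coefficient shrinks as prior scores enter.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000
ses = rng.normal(size=n)                           # socioeconomic index
level = rng.normal(size=n) + ses                   # stable level, tied to SES
prior1 = level + rng.normal(scale=0.5, size=n)     # score two years ago
prior2 = level + rng.normal(scale=0.5, size=n)     # score last year
current = level + 0.3 + rng.normal(scale=0.5, size=n)  # this year (+growth)

for label, cols in [("SES only",             [ses]),
                    ("SES + 1 prior score",  [ses, prior2]),
                    ("SES + 2 prior scores", [ses, prior1, prior2])]:
    fit = sm.OLS(current, sm.add_constant(np.column_stack(cols))).fit()
    print(f"{label:22s} SES coefficient: {fit.params[1]: .3f}")
```

In this setup the SES coefficient starts near 1.0 with no prior scores and collapses toward zero as prior scores are added, because the prior scores already carry the level information that SES was proxying for.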
That said, and to your concern about considering students in the context of their own experiences: more sophisticated value-added/growth models, like EVAAS, can follow the progress of individual students over time, so that each student serves as his or her own control.
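For readers who want the “own control” idea in symbols, here is a sketch under simplified assumptions (a single stable student component; this is not the EVAAS layered model itself):

```latex
% alpha_i absorbs every stable student characteristic, including
% socioeconomic/demographic background.
y_{it} = \alpha_i + \gamma_t + \theta_{j(i,t)} + \varepsilon_{it}
\quad\Longrightarrow\quad
y_{it} - y_{i,t-1} =
  (\gamma_t - \gamma_{t-1}) + \bigl(\theta_{j(i,t)} - \theta_{j(i,t-1)}\bigr)
  + (\varepsilon_{it} - \varepsilon_{i,t-1})
```

The gain contains no α_i: whatever is constant about a student cancels out of his or her own growth, which is the sense in which each student serves as his or her own control.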