Value-added data informing and improving TN Schools of Education


With the rapid changes in our education systems regarding new standards, assessments, accountability, and evaluation, teachers understandably feel the pressure of being underprepared. The majority of teachers were not trained or certified with these rigorous systems in place. Recognizing that higher education institutions need to play an active role in the continuous improvement process, President Obama released a reauthorization plan for the Higher Education Act with a specific focus on reforming teacher preparation programs. The plan highlighted Tennessee and Louisiana as national leaders on this front, with North Carolina, New York, and a handful of other states following suit.

I recently spent some time talking to the TN Higher Education Commission (THEC) to learn exactly what they are doing to lead the pack. THEC collects K-12 value-added data to evaluate the effectiveness of teacher preparation programs. While TN’s Report Card on the Effectiveness of Teacher Training Programs is a model for other states to follow, the state doesn’t stop there. Tennessee’s higher ed institutions also educate pre-service teachers and administrators on value-added data literacy to improve student and school outcomes. Pre-service teachers are teachers in undergraduate, training, or alternative certification programs who will soon join the ranks of teachers in the classroom. In short, TN takes a two-pronged approach: accountability and support.

Accountability: Emily Carter, THEC’s Higher Education Program Coordinator, uses teacher effect scores to evaluate how well different programs are preparing teachers and to identify areas for improvement. Once all of the information (academic background, placement and retention, etc.) on completers for a given cohort year has been collected, SAS provides a value-added score for each teacher preparation program. Who were TN’s top performers in 2011? Three teacher prep programs were actually able to produce teachers with higher student achievement gains than veteran teachers – Teach for America Memphis, Teach for America Nashville, and Lipscomb University. Read more in the 2011 Report Card.

Carter also discussed planned improvements to provide more detailed information that is disaggregated for each program (math vs. English/language arts vs. science, etc.). Additionally, TN will develop new growth measures for teachers in traditionally non-tested grades and subjects, to be incorporated into the Report Card soon. With more data come additional research opportunities. For example, is a pre-service teacher’s SAT/ACT score a reliable predictor of their future effectiveness? Does their college GPA or choice of major affect their teaching effectiveness?

Support: If a goal of the Report Card is to improve the services offered to future educators, then additional support is critical. Katrina Miller directs THEC’s First to the Top office, which manages several programs to improve the teacher pipeline in higher education institutions. Katrina worked with SAS to develop a modular data literacy curriculum that is integrated into pre-service coursework. This eight-hour package of modules teaches future educators and leaders how to use value-added data to differentiate instruction in order to meet the needs of all students.

Dr. Deborah Boyd, a Professor of Education and Associate Dean for Lipscomb University’s College of Education, currently uses this curriculum and expands upon it. The college's coursework in research discusses the different types of data that teachers will receive about their students, instruction, and their own practice. The college also incorporates value-added reports into case study assessments for graduate students. Perhaps this is a contributing factor to Lipscomb’s superior preparation of teachers, as reflected in the Report Card.

What is your state doing to measure the effectiveness of teacher prep programs and to support them in producing 21st Century educators?


About Author

Nadja Young

Senior Manager, Education Consulting

Hi, I’m Nadja Young. I’m a wife and mother of two who loves to dance, cook, and travel. As SAS’ Senior Manager for Education Industry Consulting, I strive to help education agencies turn data into actionable information to better serve children and families. I aim to bridge the gaps between analysts, practitioners, and policy makers to put data to better use to improve student outcomes. Prior to joining SAS, I spent seven years as a high school Career and Technical Education teacher certified by the National Board of Professional Teaching Standards. I taught in Colorado’s Douglas County School District, in North Carolina’s Wake County Public School System, and contracted with the NC Department of Public Instruction to write curriculum and assessments. I’m thrilled to be able to combine my Bachelor of Science degree in Marketing Management and Master of Arts degree in Secondary Education to improve schools across the country.

2 Comments

  1. John Creatura

    This is more of a request for information than a comment. The term Value Added is classical inasmuch as it has been defined throughout the eons (especially by economists) as the "growth" from one point to another point. It seems to have a different connotation in the movement to evaluate teachers based on student performance. I have read much of what is provided by EVAAS, SAS, and the Ohio Department of Education. I am having a difficult time understanding the full connotation of Value Added in this realm. I would appreciate a clear definition, if possible.

  2. Thank you for your comment, Mr. Creatura. I think that in the education space, you can still conceptually think of value-added analyses as measuring growth from one point to another for groups of students. However, when working with student assessment data, we know that there is often a large amount of measurement error around any given test score. So if you are just measuring growth between two points (i.e., a pre-test and a post-test), the effects of that measurement error can be quite large in the resulting estimates. The power of value-added is in the ability to measure growth across all tested grades and subjects simultaneously to minimize the effects of that measurement error and provide a more accurate measure of schooling effectiveness. (The small simulation sketch after this reply illustrates the point.)

    In terms of teacher evaluations, some think that value-added estimates are supposed to be fully causal. SAS takes the stance that EVAAS is descriptive, not fully causal, when making inferences about teacher, school, and district effectiveness. Additionally, value-added is only one component of EVAAS. Alongside any value-added analyses, teachers need complementary diagnostic reporting tools to guide them in improving their practice. EVAAS also provides individual student projections, which report the probability of student success on a variety of academic milestones. These are used by educators to provide earlier RTI, differentiate instruction, guide course placement, etc.

    Overall, educators need more than a single value-added estimate to improve their practice; they need a comprehensive set of (reflective and predictive) reporting tools to guide their instructional practice and professional growth. This is what we seek to provide with EVAAS, and I'm happy to provide more detailed information if you are interested.
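To make the measurement-error point above concrete, here is a minimal simulation sketch. It is purely illustrative and is not the EVAAS model: every quantity in it (true growth, measurement error, class size, number of tests) is a hypothetical assumption chosen only to show why a gain score based on two noisy test scores is less stable than an estimate that pools growth evidence across several tests.

```python
# Toy illustration (not the EVAAS methodology): compare the spread of a simple
# pre-test/post-test gain estimate with an estimate that averages growth
# evidence across several tests, when every test score carries measurement error.
import numpy as np

rng = np.random.default_rng(0)

true_growth = 10.0      # hypothetical "true" average growth for a classroom
measurement_sd = 15.0   # hypothetical measurement error on any single test score
n_students = 25         # hypothetical classroom size
n_tests = 6             # hypothetical number of tested grades/subjects pooled
n_sims = 10_000

simple_gain_estimates = []
pooled_estimates = []

for _ in range(n_sims):
    # Two-point gain: post-test minus pre-test, each observed with error.
    pre = rng.normal(0.0, measurement_sd, n_students)
    post = rng.normal(true_growth, measurement_sd, n_students)
    simple_gain_estimates.append(np.mean(post - pre))

    # Pooled estimate: observed growth on each of several tests (each growth
    # value carries error from two noisy scores, hence sd * sqrt(2)),
    # averaged over students and then over tests so errors partially cancel.
    growth_by_test = rng.normal(
        true_growth, measurement_sd * np.sqrt(2), (n_tests, n_students)
    ).mean(axis=1)
    pooled_estimates.append(growth_by_test.mean())

print("SD of two-point gain estimate:     ", np.std(simple_gain_estimates))
print("SD of pooled multi-test estimate:  ", np.std(pooled_estimates))
```

Under these made-up numbers, the two-point gain estimate typically comes out roughly two to three times noisier than the pooled estimate, which is the intuition behind drawing on all tested grades and subjects rather than a single pre/post comparison.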

