Accountability and teacher preparation: How states have led the way

As teacher preparation programs send educators out into the workforce, how can we make sure they're effective in the classroom? Image by Flickr user Visha Angelova

Teacher preparation programs have received some pretty harsh criticism in recent years. For example…

“If there was any piece of legislation that I could pass it would be to blow up colleges of education.” –Reid Lyon, National Institutes of Health

“By almost any standard, many if not most of the nation’s 1,450 schools, colleges, and departments of education are doing a mediocre job of preparing teachers for the realities of the 21st-century classroom. America’s university-based teacher preparation programs need revolutionary change, not evolutionary thinking.” –Arne Duncan, U.S. Secretary of Education

In response to such criticisms, the US Department of Education (ED) undertook a lengthy and arduous revision of the teacher preparation accountability regulations under the Higher Education Act.

Released in 2016, the final regulations require that states report annually on a variety of indicators for all teacher preparation programs (TPPs), including traditional programs in colleges and universities, alternative programs such as Teach for America, and transitional programs aimed at filling vacancies within schools. The final document contains almost 700 pages of regulations, which the ED press office represented with the broad set of indicators listed below:

  • Placement and retention rates of graduates in their first three years of teaching, including placement and retention in high-need schools;
  • Feedback from graduates and their employers on the effectiveness of program preparation;
  • Student learning outcomes measured by novice teachers' student growth, teacher evaluation results, and/or another state-determined measure that is relevant to students' outcomes, including academic performance, and meaningfully differentiates among teachers; and
  • Other program characteristics, including assurances that the program has specialized accreditation or graduates candidates with content and pedagogical knowledge, and quality clinical preparation, who have met rigorous exit requirements.

Previous state accountability structures for teacher preparation relied mainly on input measures such as enrollment, graduation rates, and course curriculum. These new regulations will require a seismic shift for most states, moving from input-based measures (primarily captured by the teacher preparation program itself) to output-based measures (based on performance in the field). Let’s look at the approaches of Tennessee and Louisiana, two states with a long history of providing this type of analysis on their TPPs.

Tennessee Teacher Preparation Report Card

Since 2007, Tennessee has produced a Report Card on the Effectiveness of Teacher Training Programs as a joint effort among the Tennessee Department of Education, Tennessee State Board of Education (SBE), and the Tennessee Higher Education Commission. State law required that these agencies report on the effectiveness of TPPs as defined by three measures:

  • Placement and retention rates of program completers;
  • Pass rates on Praxis teacher certification exams; and
  • Tennessee Value-Added Assessment System (TVAAS) scores of program completers.

In addition to these measures, the state has added other indicators such as demographic information and longitudinal analysis throughout various revisions of the report.

Tennessee was one of the first states to link the classroom performance of graduates, as measured by TVAAS (the statewide student growth measure), to the programs where they were trained. The state differentiated performance by program completers of traditional and alternative programs as well as by subject area and grade span. This allowed programs to evaluate their impact on student learning once their graduates were in the field. The state worked with programs to refine the analysis each year to provide more detailed and useful information back to the programs for continuous improvement.

Always looking to improve the depth of information, the state is redesigning reports to include performance levels and more detailed information about each program.

Louisiana Teacher Preparation Data Dashboard and Fact Book

Louisiana is another state with a longstanding tradition of providing publicly available information on the effectiveness of TPPs. The Louisiana Board of Regents produced the Louisiana Teacher Preparation Fact Book for almost a decade. Now known as the Louisiana Teacher Preparation Data Dashboard, this report provides program information on enrollment, demographics of program enrollees and completers, placement and retention rates of program graduates, and the impact of completers on student learning as measured by value-added and other evaluation metrics.

Both Louisiana and Tennessee have the benefit of longstanding, longitudinal data systems linking teachers and students, as well as connecting data from K-12 to teacher preparation programs. Many states do not have such data systems or have systems still in development.

States must also grapple with the measures used to evaluate program effectiveness. Both Louisiana and Tennessee have robust value-added measures that provide meaningful differentiation among completers and programs. The complexity of these measures provides a level of confidence that simpler measures do not.
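To make the idea of "value-added" concrete, here is a minimal sketch of the arithmetic behind a simple growth-based measure: predict each student's current score from their prior score, then average the residuals (actual minus expected) for the students taught by each program's completers. This is illustrative only; the data, program names, and single-predictor model are invented for the example, and actual systems such as TVAAS use far more sophisticated multivariate mixed models.

```python
# Illustrative value-added sketch (NOT the TVAAS methodology).
# Hypothetical data: (program, prior_score, current_score) per student.
from statistics import mean

students = [
    ("Program A", 50, 58), ("Program A", 70, 75),
    ("Program B", 50, 52), ("Program B", 70, 71),
]

# Fit a simple least-squares line predicting current score from prior score.
xs = [s[1] for s in students]
ys = [s[2] for s in students]
x_bar, y_bar = mean(xs), mean(ys)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

# A program's "value added" is the mean residual (actual minus expected
# score) across students taught by its completers.
residuals = {}
for program, prior, current in students:
    expected = intercept + slope * prior
    residuals.setdefault(program, []).append(current - expected)

value_added = {p: mean(r) for p, r in residuals.items()}
print(value_added)  # positive = students grew more than expected
```

The key design point this sketch captures is why such measures meaningfully differentiate programs: comparing students to an expectation based on their own prior performance separates a program's contribution from the incoming achievement level of the students its graduates happen to teach.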

For states beginning this work, the implementation of the finalized ED regulations may require the development of a new data system, the collection of new elements, and work across agencies and organizations that may have never worked together before. Over the next few blog posts, we will delve into the area of teacher preparation accountability systems to learn more about what leading states have done in this area, pose questions to consider for states as they build these systems, and take a look at some research on what makes teacher preparation effective.


About Author

Katrina Miller

Education Industry Consultant, SAS State & Local Government Practice

Katrina utilizes her experience in K-12 and higher education policy to support SAS EVAAS for K-12 in the educational community. She previously worked with the Council of Chief State School Officers and the Tennessee Higher Education Commission, where she supported state efforts to use data to improve teacher and leader preparation for all students, including students with disabilities. Katrina holds a bachelor’s degree in communication and master’s degree in higher education administration from the University of Tennessee, Knoxville.
