Dashboard vs. Scorecard: Measuring the Impact of Educator Preparation


With educator preparation programs (EPPs) under fire, states must make difficult decisions about how to hold EPPs accountable, provide information for program improvement, and offer the public consumer information on EPP efficacy. In conversations I’ve had with state leaders grappling with this issue, I have seen a debate arise over the best way to provide this information: dashboard versus scorecard.

Should states simply provide a variety of information on programs, or should that information be weighted and programs ranked or assigned an effectiveness value? Below we explore the approaches two states, North Carolina and Delaware, have taken in developing these systems.

EPP Dashboards: The University of North Carolina System

Example report from the UNC System’s Educator Quality Dashboard comparing GPAs of students in UNC education programs to those of other UNC students, broken out by university.

Advocates for the dashboard approach suggest that states provide a variety of information on EPPs, ranging from demographic information, licensure passage rates, and placement and retention data to student outcome information. This information can be used for consumer information, program improvement, or accountability. All selected indicators are presented as equally important information about the program; there is no weighting system for individual indicators and no ranking system across programs or providers.

The University of North Carolina system developed a dashboard-type system and continues to add metrics annually. Currently, the UNC Educator Quality Dashboard contains information on enrollment trends, teacher productivity, time to degree, and student growth data for recent completers. The system is also working to add information on the clinical experiences candidates have prior to completion.

EPP Scorecards: Delaware

Scorecard for the University of Delaware’s elementary teacher education program. Performance is evaluated in six domains.

States choosing a scorecard approach select a variety of metrics and apply a weighting or ranking system to them. For instance, the Delaware Educator Preparation Program Reports include information on six scored domains: recruitment, candidate performance, placement, retention, graduate performance, and perceptions. Each of these domains is assigned a score, and a summative performance rating then places each program in a performance tier. Systems such as Delaware’s allow states to place a value on specific characteristics of programs, such as candidate performance in the classroom or candidate diversity.

Each of these systems has strengths and weaknesses. States will need to weigh not only which indicators to include in educator preparation reporting, but also the purpose of that reporting and the value they will assign to each metric.



About the Author

Katrina Miller

Education Industry Consultant, SAS State & Local Government Practice

Katrina utilizes her experience in K-12 and higher education policy to support SAS EVAAS for K-12 in the educational community. She previously worked with the Council of Chief State School Officers and the Tennessee Higher Education Commission, where she supported state efforts to use data to improve teacher and leader preparation for all students, including students with disabilities. Katrina holds a bachelor’s degree in communication and master’s degree in higher education administration from the University of Tennessee, Knoxville.
