NC teachers’ voices regarding use of student growth in educator evaluations


In 2011-2012, North Carolina became one of many states to restructure its educator evaluation system to incorporate student growth. The NC Department of Public Instruction commissioned the external expertise of WestEd to evaluate various growth models and recommend a value-added technology that would best support its mission of using meaningful evaluation to increase the effectiveness of teachers and leaders. As announced today, WestEd chose SAS EVAAS (Educational Value-Added Assessment System) for K-12.

As a former public high-school teacher in NC, I wanted to gather thoughts and reflections from two former colleagues who currently teach high school ELA and math. The following interview Q&A reflects these teachers’ personal opinions about NC’s new evaluation system, specifically the inclusion of student growth measures. Both teachers are from Leesville Road High School in the Wake County Public School System.

Teacher’s Name, Grade, and Subject Taught:

Angela Stephenson (English Language Arts; Grades 9, 10, & 12)

Beth Duckett (Pre-Algebra, Foundations of Algebra, Algebra I; Grade 9)

What types of students do you teach?

Angela: All achievement levels (I/II and III/IV); both non-honors and honors placement students

Beth: Low-achieving, at-risk students 100% of the time

Teacher’s background:

Angela: 25 years teaching in FL and NC; MEd, Master’s in Curriculum & Instruction; National Board Certified Teacher, ELA; NC Association of Educators (NCAE) union member

Beth: 9 years teaching in NC; MSA, Master’s in School Administration; holds a Principal License
How do you feel about EVAAS reporting being used in teacher and leader evaluations?
I am an advocate for using data to motivate and enhance teaching and learning. I am interested in the fact that the available data can be reviewed and interpreted from many different angles and for various subgroups. Hopefully some of the information will support teacher placements that allow for the most growth for students. Many times, students who are “below grade-level” need the most experienced teachers. Unless there is an effort to ensure that placement practice is followed, newer teachers (who are quite capable but still growing in their strategies) may be placed where their frame of reference does not serve those students as well as a more experienced teacher would. My EVAAS diagnostic reports show me that I am most highly effective with low-achieving students, and I work with those students in my year-long CTE/English I team-taught paired course. I had anxiety at first in hearing about this new metric because my students come to me 2-3 years below grade level, and I initially wondered whether it would measure my effectiveness fairly given their low achievement levels. When it was clearly explained to me that EVAAS measures growth, not achievement, I realized I could use it as an improvement tool. Sometimes I think I am growing kids, but maybe I am not. And likewise, maybe I don’t feel as if kids are growing, but they actually are. Now I will know and can adjust my instruction appropriately.
What insights can you gain about your own practice from the value-added reporting outside of educator evaluation?
This wide-ranging data is what allows for differentiated instruction: combining EVAAS statistical data with my own anecdotal and personal observations as I get to know my students. We don’t track students, but I can learn about their strengths and weaknesses more quickly, rather than through the passage of valuable time, and can recognize and work with the patterns students demonstrate sooner. How teachers interpret and react to the data is what’s important. Ultimately, I want this additional feedback (through EVAAS) to see if my differentiated instruction has a significant impact on student learning and performance. Was it worth it? I see now that in my year-long paired class, it was worth it! Finding a way to discover the strengths that a school, department, or teacher possesses, as demonstrated by student results, can lead to a discussion of best practices that can be used and enhanced by all stakeholders. Where it really helps me is in setting the personal goals for my Professional Development Plan (PDP). Each year, we set a school-level and department-level goal to work toward. Teachers also have to set a personal goal, and after nine years, it has become challenging to determine a new and meaningful goal each year. This data tool (EVAAS) gives me a snapshot of last year and tells me what I need to work on this year, so I know my personal PDP goal is on target.
What concerns do you have for fairness or implementation of the system?
The state is diverse regarding funding, access to technology, and receipt of timely information. All public institutions need an evaluation system, and public schools are no different. I have heard people voice concerns because they still need a clear understanding of how the tools will be helpful to public education. I expect there to be reactions to any educator evaluation and interpretation of results. As long as it is used correctly across schools and districts, I am not concerned.
What type of training or communications have you received? What are your thoughts on the gradual implementation of the new evaluation system?
I think we are all becoming more familiar, not only with Standard 6 (EVAAS) but with the entire teacher evaluation system and its changes over the last few years. I anticipate more training in the fall. The reports are very well organized and include graphs, numbers, and comparisons, but I am not exactly sure how to use this information to inform my lesson design and changes to my skill focus in order to support student growth. I look forward to developing my understanding of this information and to the discussions that will develop in my PLT in the fall. I think that implementing it in pieces was very beneficial. It could have been overwhelming if teachers had had to learn the new observation rubric and EVAAS at the same time. But at my school, we had a year to get familiar with the observation rubric and collect student artifacts. We then had a year to work with EVAAS before it was a weighted component of the overall evaluation.
Final thoughts on the multiple measures included in the evaluation system? As a math teacher, what are your thoughts on NC DPI’s decision to use a 3-year weighted average for the student growth measure (EVAAS)?
We now have two distinct ways of learning about our practice: the objective (EVAAS) and the subjective (observation rubric). And then teachers can provide their own input with student artifacts. So, overall, it seems like a very well thought-out and balanced approach. It is definitely fairer to teachers. The 3-year average compensates for an adjustment year in which my EVAAS estimate might be slightly lower due to teaching a new grade level, working with a new curriculum or assessments, etc.
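The idea of a multi-year weighted average can be sketched numerically. The weights EVAAS actually applies are not specified in this post; weighting each year's growth estimate by the number of student scores behind it is only one plausible scheme, shown here purely for illustration.

```python
# Illustrative sketch of a 3-year weighted average of annual growth
# estimates. The weighting scheme (by student count) is an assumption
# for illustration only, not the documented EVAAS methodology.

def weighted_growth_average(estimates, weights):
    """Combine annual growth estimates into a single weighted average."""
    total_weight = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total_weight

# Hypothetical teacher: growth estimates for the last three years,
# weighted by how many students contributed scores each year. Note how
# a low "adjustment year" (year 1) is softened by the stronger years.
annual_estimates = [-0.4, 1.1, 2.0]   # year 1 (adjustment year) .. year 3
student_counts = [60, 85, 90]

print(round(weighted_growth_average(annual_estimates, student_counts), 2))  # 1.06
```

The single-year estimate of -0.4 would look weak on its own, which is the "adjustment year" scenario the teacher describes; the pooled estimate of roughly 1.06 reflects the full three-year record.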

I thank these two respected colleagues and educators for their candid feedback and input. The entire state looks forward to seeing a positive impact on educator performance now that multiple measures and student growth are instrumental parts of NC’s teacher and leader evaluation system.


About Author

Nadja Young

Senior Manager, Education Consulting

Hi, I’m Nadja Young. I’m a wife and mother of two who loves to dance, cook, and travel. As SAS’ Senior Manager for Education Industry Consulting, I strive to help education agencies turn data into actionable information to better serve children and families. I aim to bridge the gaps between analysts, practitioners, and policy makers to put data to better use to improve student outcomes. Prior to joining SAS, I spent seven years as a high school Career and Technical Education teacher certified by the National Board for Professional Teaching Standards. I taught in Colorado’s Douglas County School District and in North Carolina’s Wake County Public School System, and contracted with the NC Department of Public Instruction to write curriculum and assessments. I’m thrilled to be able to combine my Bachelor of Science degree in Marketing Management and Master of Arts degree in Secondary Education to improve schools across the country.


  1. Ms. Young, I know I have told you over the phone, but I wanted to put it in writing how much I appreciate your efforts here to keep us up to date with EVAAS-related information. I was wondering if you could please comment on how teachers can specifically use EVAAS data to improve instruction and outcomes. More specifically, I am easily able to assess the likelihood that my students will reach the required proficiency level on a future state standardized test (EOC) with a teacher-made test or exam (or any other form of assessment), and feel fairly confident about their ability to perform come EOC time (based on their performance on my assessments). I can enact remediation if necessary, because I know that if a student is not performing at proficient levels on my assessments, they more than likely will not perform at proficient levels on the EOC. As I've said, this gives me a great starting point as to which students need remediation in order to attain proficiency. At what point do I know which students may not achieve "growth" so I can provide after-school remediation? Please comment on our ability to look at a student's projected growth score (based on their 3 past years of scores - which I'm not even sure we would be able to view in full), or any other data from EVAAS, and be able to measure their progress in terms of growth using our classroom assessments. How can I translate a student's projected growth score into a workable diagnostic piece of data that I can measure my assessments against in order to know whether a student is or is not on track to "grow"? Or is it fair to simply assume that if a student is performing at high levels on my classroom assessment pieces, they will in fact show growth - which may get confusing if the student performed well in their last class, continuously performed at proficient levels for their prior teacher, and maybe even scored high on their last standardized test? How do I know they will "grow" for me?
I am failing to see the connection, and these questions stem from the teacher above who stated the following: "Sometimes I think I am growing kids, but maybe I am not. And likewise, maybe I don’t feel as if kids are growing, but they actually are. Now I will know and can adjust my instruction appropriately." How can we know which kids are, or are not, on track to show growth in order that we may "adjust...instruction appropriately", especially as we do not receive VAM results for our students until well after they have exited our class? If you could, please shape your suggestions around the data that teachers will have at their disposal; I am fairly certain we would at least be receiving projected growth scores for our students before classes started. I am not sure what other information would be available to us here in NC, but I know I have read in the literature that TN teachers can log on to their own unique website that offers them projected scores for their students. What other information would be available to us, and will we have our own website here in NC? Thanks again for your hard work here on this blog.

    • Thank you for your comment, Mr. Wydo. I will try to answer each question briefly here; feel free to email me for additional discussion.

      1. To understand how teachers can specifically use EVAAS data to improve instruction and outcomes, I would recommend reading the paper "Improving Student Outcomes with Advanced Analytics." It specifically addresses how EVAAS might be used differently by district & school administrators, teachers, counselors, etc.

      2. Regarding your question about teacher-made assessments:
      Those tests are not used in the EVAAS analyses. However, there are many different types of formative assessments that teachers typically administer throughout the school year in order to track student achievement and make appropriate adjustments to their instruction. Formative assessments provide achievement data, whereas EVAAS provides student progress or growth data. These two sets of tools, while measuring different things, complement each other when used together.

      3. To answer your question, "Is it fair to simply assume that if a student is performing at high levels on my classroom assessment pieces, they will in fact show growth?":
      This will vary depending on whether or not those teacher-made/classroom assessments are aligned to the same standards and objectives that are being measured in the state assessments. Formative assessment practices may vary from teacher-to-teacher, just as curriculum and instruction practices vary from teacher-to-teacher.

      4. To answer your question, "How can we know which kids are, or are not, on track to show growth in order that we may "adjust...instruction appropriately", especially as we do not receive VAM results for our students until well after they have exited our class?":
      This is where teachers will want to use the individual student projection reports at the beginning of the school year/semester to inform the following practices: differentiation of instruction, Response to Interventions, small group placements, seating charts, etc. You are correct that the value-added reporting is reflective, looking at the progress you made with your most recent group of students. However, the individual student projection reports are available to you throughout the school year and make projections to each student's future probability of success on different academic milestones (such as your End-of-Grade, or End-of-Course test). Having this information at the start of the school year can help you appropriately personalize the teaching and learning experience for each student's needs.

      5. Availability and Access to reports:
      For the first time this year, NC is going to provide web-enabled reporting down to the teacher level. So, as in TN, you will now be able to log in to the EVAAS website and see your value-added, diagnostic, and student projection reports from any computer. Once you have time to get hands-on with your reporting, SAS has a team of Educator Support Specialists who can answer any questions you may have. You can access this support team through the EVAAS website.

      I hope that answered your questions, but feel free to contact the EVAAS Educator Support Services team with additional thoughts or needs. Thank you!


  3. Bill Sorenson:

    Ms. Stephenson mentions a VERY real problem in NC schools. Per-pupil funding varies widely, and in CTE classes, which are vendor driven, different LEAs have different resources. While the raw material (young, open, and capable minds) is essentially equal, the teaching tools to mold them vary. A GLARING example is in the Computer Engineering Technology curriculum. The state test is provided by ONE of several vendors used throughout the state. Naturally, those schools that utilize this specific vendor's instructional program are at an advantage. Until ALL disparity in resources and funding is made equal, I find this whole system degrading and unreliable. There are also other factors that play into this, such as school size. In a very small school setting, course offerings are limited; therefore many students end up in classes they are not necessarily interested in. Frankly, I find growth models the right direction, but until all courses and all educators in a specific state are playing on a level field, this process will cause effective and capable educators to leave the profession.

  4. Hello Mr. Sorenson,

    Thank you for your comment. You raise some logical points. Are positive value-added estimates reflective of instruction, curriculum, programs, policies, etc.?

    Our research indicates that student progress is certainly impacted by district- and school-led initiatives (like curriculum choices, interventions, programs, policies, etc.). However, teacher quality far outweighs those district- or school-level influences. Using EVAAS models on statewide data in Tennessee in 2001, Dr. William Sanders found that approximately 5% of student progress was influenced by district factors, 35% by school factors, and 65% by classroom/teacher factors. Other researchers have since reported similar findings.

    Equity in school funding and resources is certainly an issue across the country that teachers have to grapple with. However, we see teachers in very affluent districts/schools with both high and low value-added estimates. Conversely, we see teachers in very low-income districts/schools with both high and low value-added estimates. You can find some empirical evidence of this in the 2nd scatterplot in my recent blog.
