Last week I went through my first end-of-year evaluation before re-signing my teaching contract for the upcoming school year. At my charter school, we already use an evaluation method that many of us think could go a long way toward improving our nation’s schools if adopted broadly and consistently: using student test scores to help determine teacher performance ratings.

Student performance-based teacher evaluations are becoming a reform trend across the country. In 2011, Minnesota passed a K-12 education bill requiring that student performance count for 35 percent of teacher evaluations, starting in the 2014-2015 school year. Keeping this in mind, I was curious to find out exactly how student data would be incorporated into my own evaluation. Though I received a performance rubric at the start of the school year, I didn’t know how it was scored, and I didn’t give it another thought until the morning of my evaluation.

Student achievement is one of five factors on the rubric, along with personal attendance, parent communication, lesson plans and classroom management. Scoring in the other four categories was fairly straightforward, while the student achievement category was more complex. To determine my score, my students were grouped into quartiles based on their test performance at the start of the year. I received points on the rubric based on the percentage of each quartile that passed the Minnesota Comprehensive Assessments, and the percentage that made significant progress on the NWEA tests. Finally, the points earned for each test were averaged to determine my final rubric score for the student achievement category.
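For readers curious about the mechanics, here is a minimal sketch of how a score like that might be computed. The point values, thresholds and function names are hypothetical (the rubric's exact numbers aren't specified here); the sketch only illustrates the structure: each quartile's MCA pass rate and NWEA growth rate earn points, and the two assessments' points are averaged.

```python
# Hypothetical sketch of the student achievement rubric score described above.
# The point scale and percentage thresholds are assumptions, not the school's
# actual rubric values.

from statistics import mean

def quartile_points(pct):
    """Map a quartile's percentage (0-100) to hypothetical rubric points."""
    if pct >= 80:
        return 4
    if pct >= 60:
        return 3
    if pct >= 40:
        return 2
    return 1

def student_achievement_score(mca_pass_pct_by_quartile, nwea_growth_pct_by_quartile):
    """Average the points earned on each assessment across the quartiles."""
    mca_points = mean(quartile_points(p) for p in mca_pass_pct_by_quartile)
    nwea_points = mean(quartile_points(p) for p in nwea_growth_pct_by_quartile)
    return mean([mca_points, nwea_points])

# Example: percentage of each quartile passing the MCAs and making
# significant progress on the NWEA tests (made-up numbers).
print(student_achievement_score([85, 70, 55, 40], [90, 75, 60, 50]))
```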

I completely support the idea that schools should evaluate teachers on the basis of their students’ progress. As for my school’s approach, it’s a complicated but fair way to incorporate student data into my evaluation. For instance, we use two different assessments, and by grouping students we account for the fact that they start the year at different levels of ability.

However, my experience in the classroom has made me realize how difficult it can be to use student data accurately. I know that some of my students’ test scores do not truly reflect their academic growth, whether because of problems taking the assessment or simply lucky guesses. Based on my experience, I think that using student data to evaluate teachers nationwide is a worthwhile goal, but lawmakers need to ensure that it’s used fairly and in conjunction with other measures of teacher quality.

As part of a research project, I’ve also recently spoken with many teachers and principals in Illinois about teacher evaluations in their state, where student test scores will factor into all teacher evaluations by 2016. The teachers and administrators there have concerns that make a lot of sense.

For example, what if a student’s beginning-of-year data wasn’t accurate, or was missing? (This happened to several of my students.) What if students didn’t take the test seriously and their scores didn’t truly reflect their learning? What about students who enrolled at our school in January or later? Are we responsible for their progress this year, or is their previous school? What about students who switched into or out of my classroom at various points in the year, for various reasons? (Only half of my class roster remained the same from September to June.) What about students who are on my official roster but actually leave my classroom for special education services for the majority of the day? Or those who missed too many school days to adequately catch up?

I’m the first to admit that I don’t have answers to all of these questions. That doesn’t change my belief that teacher evaluations should include student performance data, but it does mean that we need to carefully develop our methods for using that data. Incorporating test scores can’t be a magical fix for our struggling schools. As Minnesota and other states begin to include student performance in their teacher evaluations, we need to make sure that the data is used accurately, effectively and consistently, to the best of our ability. Ultimately, I look forward to the recommendations that the Teacher Evaluation Work Group puts forward later this year on the finer details of Minnesota’s new teacher evaluation system. I also hope that our state remains flexible enough to refine the evaluation system when needed.

The point of using student data to evaluate teachers is to take advantage of all of the information we have so that we can ensure that all students have access to a quality education. When we use data effectively and consistently, we all win – teachers and students alike.

Christina Salter is a MinnCAN School Reform Blogging Fellow.
