Findings of widely-cited Gates Foundation study on teacher effectiveness called into question

Late in 2010, the Bill & Melinda Gates Foundation, which has sponsored much research into teacher effectiveness, released the Measures of Effective Teaching report. The report’s analysis was developed using a statistical methodology called value-added modeling, which attempts to quantify the impact an individual teacher has on students’ learning outcomes by examining student standardized test scores over time and across different classes. A number of education reformers, including U.S. Secretary of Education Arne Duncan, have touted value-added modeling in general and the Gates Foundation report in particular as important new tools for understanding teacher effectiveness.
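To make the idea concrete, here is a minimal sketch of a value-added calculation on simulated data. It is not the MET project’s actual model; the regression specification, the numbers, and all variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a simplified value-added estimate on simulated
# data, not the MET project's specification.
rng = np.random.default_rng(0)

n_teachers, n_students = 20, 25
teacher_effect = rng.normal(0, 0.2, n_teachers)            # simulated "true" teacher effects
teacher_id = np.repeat(np.arange(n_teachers), n_students)  # which teacher each student has

prior_score = rng.normal(0, 1, n_teachers * n_students)    # last year's test score
current_score = (0.7 * prior_score                         # persistence of prior learning
                 + teacher_effect[teacher_id]              # teacher's contribution
                 + rng.normal(0, 0.5, n_teachers * n_students))  # noise

# Step 1: regress current scores on prior scores (simple OLS).
X = np.column_stack([np.ones_like(prior_score), prior_score])
beta, *_ = np.linalg.lstsq(X, current_score, rcond=None)

# Step 2: a teacher's "value-added" is the mean residual of her students,
# i.e. how much better (or worse) they did than prior scores predicted.
residuals = current_score - X @ beta
value_added = np.array([residuals[teacher_id == t].mean()
                        for t in range(n_teachers)])

# In this simulation the estimates track the simulated effects closely.
print(np.corrcoef(value_added, teacher_effect)[0, 1])
```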

Now, an analysis by a University of California, Berkeley economist challenges the report’s conclusions:

The Bill & Melinda Gates Foundation’s “Measures of Effective Teaching” (MET) Project seeks to validate the use of a teacher’s estimated “value-added”—computed from the year-on-year test score gains of her students—as a measure of teaching effectiveness. Using data from six school districts, the initial report examines correlations between student survey responses and value-added scores computed both from state tests and from higher-order tests of conceptual understanding. The study finds that the measures are related, but only modestly. The report interprets this as support for the use of value-added as the basis for teacher evaluations. This conclusion is unsupported, as the data in fact indicate that a teacher’s value-added for the state test is not strongly related to her effectiveness in a broader sense. Most notably, value-added for state assessments is correlated 0.5 or less with that for the alternative assessments, meaning that many teachers whose value-added for one test is low are in fact quite effective when judged by the other. As there is every reason to think that the problems with value-added measures apparent in the MET data would be worse in a high-stakes environment, the MET results are sobering about the value of student achievement data as a significant component of teacher evaluations.
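As a rough illustration of the correlation argument in the passage above, the sketch below simulates two value-added measures correlated at 0.5 (simulated data, not the MET data) and estimates how often a teacher ranked in the bottom quartile on one measure is nonetheless above the median on the other.

```python
import numpy as np

# Hedged illustration of the 0.5-correlation point, using simulated data.
rng = np.random.default_rng(0)

n = 100_000
cov = [[1.0, 0.5], [0.5, 1.0]]    # two value-added measures correlated at 0.5
state_va, alt_va = rng.multivariate_normal([0, 0], cov, size=n).T

bottom_quartile = state_va < np.quantile(state_va, 0.25)  # "low" on the state test
above_median = alt_va > np.median(alt_va)                 # "effective" on the alternative test

share = (bottom_quartile & above_median).mean() / bottom_quartile.mean()
print(f"{share:.0%} of teachers in the bottom quartile on the state test "
      f"score above the median on the alternative test")
```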

