BY HANK REICHMAN
I grew up in the New York suburb of Great Neck, where last week a fourth-grade teacher won a landmark case against New York State’s “value-added modeling” (VAM) formula for teacher evaluation, an assessment system that was developed when John King, the current U.S. Secretary of Education, was the New York State education commissioner. Sheri Lederman had filed a suit against state education officials over their controversial method of evaluating her — and, by extension, other N.Y. teachers.
New York Supreme Court Judge Roger McDonough said in his decision that he could not rule beyond Lederman’s individual case because regulations around the evaluation system have been changed, but he said she had proved that the controversial method that King developed and administered in New York was “indisputably arbitrary and capricious” and therefore provided her with an unfair evaluation.
According to Washington Post education writer Valerie Strauss, VAM
purports to be able to use student standardized test scores to determine the “value” of a teacher while factoring out every other influence on a student (including, for example, hunger, sickness, and stress). One way it works is by predicting, through a complicated computer model, how students with similar characteristics are supposed to perform on the exams, and teachers are then evaluated on how well their students measure up to the theoretical students. New York is just one of the many states where VAM is a key component of teacher assessment. Evaluation experts have warned policymakers that this method is not reliable for evaluating teachers, but VAM became popular among school reformers as a “data-driven” evaluation solution.
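The mechanics Strauss describes can be sketched in a few lines of code. This is a toy illustration only, not the actual New York State formula, which is far more complex; the prediction model and its coefficients below are invented placeholders standing in for the statistical model a state would fit across all its students.

```python
# Toy sketch of value-added scoring: each student gets a predicted
# score based on prior performance, and the teacher's "value added"
# is the average gap between actual and predicted scores.

def predict_score(prior_score, cohort_slope=0.9, cohort_intercept=8.0):
    """Predicted score for a hypothetical 'similar' student.
    The slope and intercept are made-up placeholders for the model
    the state would fit to all students with similar characteristics."""
    return cohort_slope * prior_score + cohort_intercept

def value_added(students):
    """students: list of (prior_score, actual_score) pairs
    for one teacher's class."""
    residuals = [actual - predict_score(prior) for prior, actual in students]
    return sum(residuals) / len(residuals)

# A teacher whose students beat their predicted scores gets a
# positive value-added score; one whose students fall short gets
# a negative one.
classroom = [(70, 75), (85, 88), (92, 90)]
print(round(value_added(classroom), 2))  # → 2.23
```

The teacher is thus graded not on what her students did, but on how they compare to the model's "theoretical" students, which is why critics call the approach a black box: the prediction step is invisible to the person being rated.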
Lederman’s suit against state education officials, including King, challenged the rationality of the VAM model, alleging that the New York State Growth Measures “actually punishes excellence in education through a statistical black box which no rational educator or fact finder could see as fair, accurate or reliable.”
Here’s what happened to Lederman: In 2012-13, 68.75 percent of her students met or exceeded state standards in both English and math. She was labeled “effective” that year. In 2013-14, her students’ test results were very similar, but she was rated “ineffective.” Meanwhile, her district superintendent, Thomas Dolan, declared that Lederman, whose students’ standardized math and English Language Arts scores were consistently higher than the state average, had a “flawless record.”
During the trial, Bruce Lederman, Sheri’s lawyer and husband, described production of the score as a “black box” system that spit out predictions comparing his wife’s students to “avatar students.” He noted that “the magic of numbers brings a suspension of common sense.”
In his ruling, McDonough cited affidavits submitted by Linda Darling-Hammond of Stanford University, Aaron Pallas of Columbia University, Audrey Amrein-Beardsley of Arizona State University, Sean Corcoran of New York University, Jesse Rothstein of the University of California at Berkeley, clinical school psychologist Brad Lindell, and Carol Burris, the executive director of the Network for Public Education. Each used research and data to demonstrate that the VAM system was indeed arbitrary and capricious, and therefore an abuse of discretion by the New York State Education Department. The judge characterized that evidence as “overwhelming.”
The defendant in the case was King, the former New York State education commissioner and current U.S. Education Secretary, who did not appear in court to defend the system he had commissioned and defended as valid, reliable, and fair while working in New York. Instead, an affidavit submitted by Assistant Commissioner Ira Schwartz claimed that the New York system is rational and fair. But McDonough rejected that argument, basing his decision on five factors:
- the convincing and detailed evidence of VAM bias against teachers at both ends of the spectrum (e.g., those with high-performing or low-performing students);
- the disproportionate effect of Lederman’s small class size and relatively large percentage of high-performing students;
- the functional inability of high-performing students to demonstrate growth akin to lower-performing students;
- the wholly unexplained swing in petitioner’s growth score from 14 to 1 despite the presence of statistically similar scoring students in her respective classes;
- the strict imposition of rating constraints in the form of a “bell curve” that places teachers in four categories via pre-determined percentages regardless of whether the performance of students dramatically rose or dramatically fell from the previous year.
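The fifth factor, the forced “bell curve,” is easy to demonstrate. The sketch below is a hypothetical illustration: the four category names are New York’s HEDI ratings, but the percentile cut points are assumptions chosen for the example, not the state’s actual ones.

```python
# Toy sketch of a forced-distribution rating scheme: teachers are
# ranked by growth score, then assigned to four rating buckets by
# pre-determined percentages. Note that the buckets are filled by
# rank alone, regardless of whether student performance rose or
# fell overall -- the point the judge found arbitrary.

def rate_teachers(scores, cuts=(0.10, 0.30, 0.90)):
    """scores: dict mapping teacher name -> growth score.
    Bottom 10% 'Ineffective', next 20% 'Developing', next 60%
    'Effective', top 10% 'Highly Effective' (cut points are
    illustrative assumptions, not NYSED's actual percentages)."""
    ranked = sorted(scores, key=scores.get)  # lowest score first
    n = len(ranked)
    ratings = {}
    for rank, name in enumerate(ranked):
        pctile = rank / n
        if pctile < cuts[0]:
            ratings[name] = "Ineffective"
        elif pctile < cuts[1]:
            ratings[name] = "Developing"
        elif pctile < cuts[2]:
            ratings[name] = "Effective"
        else:
            ratings[name] = "Highly Effective"
    return ratings

# Even if every teacher's students improved, someone must land in
# the bottom bucket:
scores = {f"t{i}": 50 + i for i in range(10)}  # t0 lowest ... t9 highest
ratings = rate_teachers(scores)
print(ratings["t0"], ratings["t9"])  # prints: Ineffective Highly Effective
```

Under any such scheme, a fixed share of teachers is declared “ineffective” every year by construction, which is exactly the rating constraint McDonough singled out.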
Commenting on the decision’s potential impact, Burris wrote:
There are thousands of teachers like Sheri Lederman all across this nation who suffer in silence when they receive a VAM score labeling them ineffective. They and all teachers and principals owe the Ledermans a great debt. Sheri was willing to be publicly identified as “ineffective” while her attorney husband spent countless hours preparing meticulous briefs and cajoling experts to write affidavits in support.
The Ledermans knew they were fighting against the testocracy that is destroying the schools that they love. Across the country, students are laboring over unfair tests that are too long in order to produce enough “data” for a teacher score. News agencies have printed these invalid scores, humiliating teachers across the nation. Politicians, such as New York Gov. Andrew Cuomo, have raised the weight of those ludicrous scores to 50 percent of a teacher’s and principal’s evaluation, and Brian Davison of Loudoun County Schools petitioned the court (and won) to turn this nonsensical data, with teacher names, over to him so he can have the power to publish it on his Facebook page.
It is time for the madness to stop. It is time for other teachers to stand up and legally challenge their scores. And it is past time for taxpayers to stop these silly measures that cost them millions while enriching test companies and the research firms that produce the teacher scores.
Amen! And let’s keep alert to the possibility that this sort of moronic nonsense will arrive soon at colleges and universities striving to become more “data driven” proponents of “student success.”