In the Education Wars, Reason Dies

I've written before about the disturbing trend of "data abuse" we've seen in both Jersey and the nation during our protracted education wars. A few more examples came up this week; let's start with Mike Paarlberg's takedown of the Washington Post's Dylan Matthews:
In a Sept. 14 post, Matthews argued that union seniority rules for teachers (the "last in, first out" rule for layoffs) hurt student achievement. This is a mantra of school-reform proponents, who argue seniority protects bad teachers. Teachers unions see the push to end seniority as a pretext for budget-slashing school boards to get rid of the most experienced teachers, good or bad, since senior teachers earn higher salaries and cost more. 
Matthews cited three studies, none of which shows, or even purports to show, the relationship he alleges. The first comes to the not very earth-shattering conclusion that existing teacher-layoff rules in Washington State are primarily determined by seniority. Crucially, the dependent variable (or outcome) it measures is a teacher’s probability of receiving layoff notices, not student achievement. Even then, the authors themselves caution the relationship they find is correlative and not necessarily causal. “Correlation is not causation” is the first rule of statistics, lest you believe umbrellas cause rain.
The second study is a computer simulation, not an observational study (one that would measure the impact of seniority vs. value-added layoffs in the real world). Matthews doesn't mention this and in fact suggests the opposite—that it was based on observed outcomes: "That paper estimated that the gains due to using effectiveness-based, rather than seniority-based, layoffs improved teachers’ performance by the same amount as is gained when one replaces a teacher with one year of experience with a teacher with five years." In reality, no teachers’ performance actually went up, and there were no actual gains, because it was a theoretical model. The third isn’t really a study at all: it involves no empirical test and no measure of student achievement to speak of. [emphasis mine]
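To see concretely what "a computer simulation, not an observational study" means, here's a toy sketch in Python. It is not the CALDER authors' model; the population, the effectiveness scores, and the layoff rate are all invented for illustration. The point is that any "gain" it reports is generated entirely by the model's own assumptions, with no real teachers or students anywhere in sight.

```python
# Toy illustration only: an invented simulation of layoff rules, not the
# CALDER study's actual model. Every number and distribution is made up.
import random

random.seed(0)

def simulate_layoffs(n_teachers=1000, layoff_rate=0.05, policy="seniority"):
    # Simulated population: each "teacher" gets a made-up true
    # effectiveness score and a random number of years of experience.
    teachers = [
        {"effect": random.gauss(0, 1), "years": random.randint(1, 30)}
        for _ in range(n_teachers)
    ]
    n_cut = int(n_teachers * layoff_rate)
    if policy == "seniority":
        key = lambda t: t["years"]   # "last in, first out"
    else:
        key = lambda t: t["effect"]  # cut the lowest-rated teachers
    # Lay off the n_cut teachers ranked lowest by the chosen rule,
    # then report the average effectiveness of everyone who remains.
    kept = sorted(teachers, key=key)[n_cut:]
    return sum(t["effect"] for t in kept) / len(kept)

# Whatever difference prints here is an artifact of the model's own
# assumptions; no actual teacher's performance changed anywhere.
for policy in ("seniority", "effectiveness"):
    print(policy, round(simulate_layoffs(policy=policy), 3))
```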
A little more on that second study from CALDER: to their credit, the authors try to deal with the error everyone acknowledges exists in using test-based teacher evaluations. Unfortunately, the data they use comes from New York City's administrations of standardized tests from 2005 through 2009: an era the state later admitted was rife with score inflation.

Further, as Bruce Baker points out, it's circular logic to say: "Teachers who get high value-added model (VAM) scores (evaluation ratings) are better because they get high VAM scores!" Researchers have found that the same teacher's VAM score changes substantially when she teaches different classes or grades. They've also found that teacher effectiveness ratings change depending on which VAM model is used.

In other words: a VAM rating is subject to wide variation outside of the teacher's control. Why, then, would Matthews ever argue that using VAM leads to better student outcomes when we know good teachers can get bad VAM scores?
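Here's a minimal sketch, again with invented numbers, of why that matters. If the noise in a single year's VAM estimate is about as large as the true differences between teachers (an assumption made for illustration, though roughly in the spirit of the instability findings above), then a genuinely above-average teacher lands in the bottom quartile of observed scores far more often than you'd hope.

```python
# A minimal sketch of the measurement-error point, with invented numbers:
# each teacher has a fixed "true" effectiveness, but the VAM score we
# observe in a given year adds noise the teacher doesn't control
# (class composition, test conditions, choice of VAM model, etc.).
import random

random.seed(1)

n_teachers = 1000
true_effect = [random.gauss(0, 1) for _ in range(n_teachers)]

def observed_vam(true_val, noise_sd=1.0):
    # Assumed: noise comparable in size to the true signal.
    return true_val + random.gauss(0, noise_sd)

this_year = [observed_vam(t) for t in true_effect]

# How often does a genuinely above-average teacher land in the bottom
# quartile of this year's observed VAM scores?
cutoff = sorted(this_year)[n_teachers // 4]
above_avg = [i for i, t in enumerate(true_effect) if t > 0]
misjudged = sum(1 for i in above_avg if this_year[i] < cutoff)
print(f"{misjudged} of {len(above_avg)} above-average teachers "
      f"landed in the bottom VAM quartile this year")
```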

Speaking of Bruce Baker: he's taking on the data abuse of the NJDOE:
As I’ve written a number of times on this blog, state officials in New Jersey have decided on a specific marketing/messaging plan in order to support current policy initiatives. Those policy initiatives involve:
  1. expanding NJDOE authority to impose desired “reforms” (charter/management takeover, staff replacement, etc.) on specific schools otherwise not under their direct authority.
  2. cutting funding from higher poverty, higher need districts and shifting it toward lower poverty, lower need ones.
  3. expanding charter schooling and promoting other “innovations” in high poverty concentration schools.
The supposed impetus for these reforms is that New Jersey faces a very large achievement gap between low income and non-low income children (one that is largely mis-measured). While it would seem inconsistent to suggest reducing funding in low income districts and shifting it to others, the creative messaging has been that the additional resources are quite possibly the source of the harm… or at the very least those resources are doing no good. Thus, the path to improvement for low income kids is to transfer their resources to others.  What I have found most disturbing about this messaging – other than the ridiculous message itself! – is the flimsy logic and disingenuous presentations of DATA that have been used to advance the argument.
I won't do Bruce's post justice if I try to summarize it; read the whole thing. But I do want to highlight one point he makes with a graph of school classifications and student demographics:
Under the Chris Cerf regime, schools are classified from "bad" to "good" as "Priority," "Focus," "Other," and "Reward." Anyone notice a pattern? Your neighborhood school is far less likely to be classified as "Priority" if it has smaller numbers of black, Hispanic, or poor children. In fact, if your numbers are small enough for these demographics, you're likely in line for a "Reward"!

The reformyists say that the difference between "Priority" and "Reward" schools isn't the circumstances of the lives of these schools' children; it's their teachers. You run the risk of being branded a racist if you dare to suggest otherwise (in the pages of the Washington Post, no less!). At the very least, pointing this out means you are "making excuses."

Lord forbid anyone state clearly that the obvious reason "Priority" schools do "badly" is poverty, racism, and the problems of assimilation. No one wants to hear that - sorry, I mean no one at the top of the food chain wants to hear that. It may disturb their beautiful minds...

And so reason dies once again. Just as it died in the debate on global warming, and the debate on the economy, and the debate on the Iraq War, and the debate on health care, and the debate on campaign financing, and the debate on just about every other area of public policy in this country.

How much longer do you think we're going to survive as a nation if we keep treating reason this way?

Hey, Nero, play another one!