why research sucks

The story of the Oregon Medicaid study is a lovely little example of why I am no longer interested in diving into the field* of social science research. In a nutshell: in the unlikely event that anyone pays attention to your work, they will misinterpret it. There’s lots of blame to sling around for why this happens—a news media with no patience for subtleties, a populace with no education in basic statistics, various groups with various axes to grind—but it’s also due in part to unavoidable professional conventions/constraints in how you describe results.

So here’s the example. Oregon expanded Medicaid to as much of its population as it could afford to (wouldn’t it be nice to live in a state like that?), which created a neat little natural experiment for researchers, who were able to look at health outcomes for similar groups of people, only some of whom had access to health care. So that’s what they did. And after a year or so, they found that most health outcomes for the group with insurance weren’t significantly (key word) better than for the group without. Hence headlines: “Medicaid doesn’t work.” “Health insurance doesn’t make you healthier.” “Obamacare is a sign of the coming apocalypse.”

Measuring differences in health outcomes is more complicated, however, than just stacking people up in piles of “diabetic” and “not diabetic” and seeing which one is taller. If that were the measure, in fact, the results would have looked great for Medicaid. Because a lot of health outcomes were actually (key word) better for people who had it. But—stopping myself from describing how a regression works—it’s a lot more complicated than that. There are rules, fairly conservative professional conventions, about how you describe outcomes, and the vocabulary doesn’t line up completely with plain English.

Plain English says, “there’s some good evidence that people with Medicaid coverage have better health outcomes than those without, but we need more time and more people to be absolutely convinced.” But professional conventions put the focus on that caveat: we can’t say for sure that Medicaid makes a difference, so we say, “there’s no statistically significant difference in outcomes.” And in the game of telephone that is our media, that becomes, “Medicaid doesn’t help.”
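If you want the “actually better but not significantly better” distinction in miniature, here is a toy calculation. The numbers are completely made up and have nothing to do with the actual study; the point is only that a treatment group can look better on the raw rates while a standard two-proportion test still can’t call the gap significant.

```python
# A minimal sketch with hypothetical numbers (not the Oregon study's data or
# methods): the insured group has a better point estimate, but the difference
# doesn't clear the conventional significance threshold.
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1, p2, z, p_value

# Hypothetical outcome: share of people with, say, uncontrolled blood pressure.
# 160 of 1,000 without coverage vs. 140 of 1,000 with coverage.
p_uninsured, p_insured, z, p = two_proportion_ztest(160, 1000, 140, 1000)
print(f"uninsured: {p_uninsured:.1%}, insured: {p_insured:.1%}")  # 16.0% vs 14.0%
print(f"z = {z:.2f}, p = {p:.3f}")  # p is about 0.21, so the convention says
                                    # "no statistically significant difference"
```

Run the same calculation with a few times as many people and that same two-point gap does cross the conventional threshold, which is roughly what “we need more time and more people” means in plain English.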

In my own professional life, one of the lines that’s always made me grind my teeth in conversations about education policy and research is, “Well, you can make statistics say anything.” And when you google “Oregon medicaid study” and scan through the descriptions, it is obviously true that people can say anything about a statistic. It’s also true that an Excel error and some choices about when to start and end measurement can, oops, compromise conventional wisdom about such worldview pillars as how much national debt is a bad thing for the economy. But unless you’re a con man or the Heritage Foundation, no, you can’t make statistics say anything. It’s just that describing methods and results and limitations appropriately is hard to do without sounding like you’re making excuses, or trying to obfuscate what’s obvious. (Perhaps tellingly, given a chance to write about the study in the New York Times, one of my policy school professors didn’t even attempt to make the case for good health outcomes and simply argued for Medicaid on the strength of the study’s unequivocally positive results regarding better financial security.)

I’d like to say it’s even tougher in education research, although I don’t really know enough about the medical field to argue that. But I can say that when you look at programs or school types or curricula or training programs, it’s really, really, really tough to find anything that “works,” in the sense of showing statistically significant and powerful results that are replicated in more than one place. (“Good teachers” is one of the few things we can say makes a real difference, although answers to the useful questions, “What makes a good teacher?” and “How do we get more of them?” are far from clear. Early childhood education is another one with real, measurable, good outcomes, but there’s still plenty to fight about.) It’s all very frustrating for a policy-minded person who just wants to figure out how to make things better.

In grad school, one professor told us bluntly that we all had to pick sides: research or politics. And we protested, like all dewy-eyed university students who climb on barricades, “No, no, it’s about bringing the wisdom of good science into the field, to make good policy!” And then you get picked off by mercenaries, and start considering a career in upholstery.

*I realize this metaphor is a little mixed, but diving into a field is more or less what I imagine it would be like.
