on the subject of sexism

As long as we’re talking about Trump and Clinton, here’s a little dollop of personal testimonial about being a woman in the workplace:

I recently moved from a shared workspace into a nice private office. Asked by a friendly, more-senior male colleague how I liked it, I said it was great, but since it was up front, strangers would often poke their heads in to ask for directions. His response, “Well, if you didn’t look like a secretary…”

Zing! Funny guy.

work

I don’t often mention work here, but this is kind of a cool visual summary of a program I’ve been building for the last couple years. Plus I want to test the embed code. :) So enjoy!

jazz data viz

One of the things I love about my job is the opportunity it affords to dabble in a lotta things I like doing. I’m no graphic designer or statistical wizard, but I do love some good data visualization. So when we wrapped up our fabulous jazz MOOC this spring and no one asked me for any sort of outcomes or overview, I made up my own.*

Conventional in-the-media wisdom pegs MOOC completion rates at 5% and calls the enterprise frivolous. Of course the story is more complicated and interesting. This graphic was my attempt to answer the “what was your completion rate?” question in a way that was still clear but more nuanced. As so many of my plans do, it began with markers.

[image: marker sketch for the data viz]

[image: Jazz Appreciation edX summary results]

*Using a highly sophisticated graphic design program called “PowerPoint.” And then spamming it out to every one of my current colleagues.

why research sucks

The story of the Oregon Medicaid study is a lovely little example of why I am no longer interested in diving into the field* of social science research. In a nutshell: in the unlikely event that anyone pays attention to your work, they will misinterpret it. There’s lots of blame to sling around for why this happens—a news media with no patience for subtleties, a populace with no education in basic statistics, various groups with various axes to grind—but it’s also due in part to unavoidable professional conventions/constraints in how you describe results.

So here’s the example. Oregon expanded Medicaid to as much of their population as they could afford to (wouldn’t it be nice to live in a state like that?), which created a neat little natural experiment for researchers, who were able to look at health outcomes for similar groups of people, only some of whom had access to health care. So that’s what they did.

And after a year or so, they found that most health outcomes for the group with insurance weren’t significantly (key word) better than for the group without. Hence headlines: “Medicaid doesn’t work.” “Health insurance doesn’t make you healthier.” “Obamacare is a sign of the coming apocalypse.”

Measuring differences in health outcomes is more complicated, however, than just stacking people up in piles of “diabetic” and “not diabetic” and seeing which one is taller. If that were the measure, in fact, the results would have looked great for Medicaid, because a lot of health outcomes were actually (key word) better for people who had it. But—stopping myself from describing how a regression works—it’s a lot more complicated than that. There are rules, fairly conservative professional conventions, about how you describe outcomes, and the vocabulary doesn’t line up completely with plain English. Plain English says, “there’s some good evidence that people with Medicaid coverage have better health outcomes than those without, but we need more time and more people to be absolutely convinced.” But professional conventions put the focus on that caveat: we can’t say for sure that Medicaid makes a difference, so we say, “there’s no statistically significant difference in outcomes.” And in the game of telephone that is our media, that becomes, “Medicaid doesn’t help.”
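If you like seeing the arithmetic, here’s a tiny sketch of how “actually better but not significantly better” happens. The numbers are invented for illustration, not from the Oregon study; it’s just a standard two-proportion z-test on made-up groups of 1,000 people each.

```python
import math

# Hypothetical numbers (NOT from the Oregon study): 1,000 people per group,
# and the insured group shows a better outcome rate.
n = 1000
uninsured_good, insured_good = 300, 320  # count with a good outcome in each group

p1 = uninsured_good / n
p2 = insured_good / n
diff = p2 - p1  # +2 percentage points: "actually better"

# Standard two-proportion z-test with a pooled proportion
pooled = (uninsured_good + insured_good) / (2 * n)
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = diff / se

print(f"observed improvement: {diff:+.3f}")
print(f"z = {z:.2f}  (needs |z| > 1.96 for significance at the 5% level)")
```

The insured group did better, but z comes out under 1.96, so the professional verdict is “no statistically significant difference”—which the telephone game then shortens to “doesn’t help.” With several thousand more people per group, the very same 2-point difference would clear the bar; that’s the “more time and more people” caveat in numbers.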

In my own professional life, one of the lines that’s always made me grind my teeth in conversations about education policy and research is, “Well, you can make statistics say anything.” And when you google “Oregon medicaid study” and scan through the descriptions, it is obviously true that people can say anything about a statistic. It’s also true that an Excel error and some choices about when to start and end measurement can, oops, compromise conventional wisdom about such worldview pillars as how much national debt is a bad thing for the economy. But unless you’re a con man or the Heritage Foundation, no, you can’t make statistics say anything. It’s just that describing methods and results and limitations appropriately is hard to do without sounding like you’re making excuses, or trying to obfuscate what’s obvious. (Perhaps tellingly, given a chance to write about the study in the New York Times, one of my policy school professors didn’t even attempt to make the case for good health outcomes and simply argued for Medicaid on the strength of the study’s unequivocally positive results regarding better financial security.)

I’d like to say it’s even tougher in education research, although I don’t really know enough about the medical field to argue that. But I can say that when you look at programs or school types or curricula or training programs, it’s really, really, really tough to find anything that “works,” in the sense of showing statistically significant and powerful results that are replicated in more than one place. (“Good teachers” is one of the few things we can say makes a real difference, although answers to the useful questions, “What makes a good teacher?” and “How do we get more of them?” are far from clear. Early childhood education is another one with real, measurable, good outcomes, but there’s still plenty to fight about.) It’s all very frustrating for a policy-minded person who just wants to figure out how to make things better.

In grad school, one professor told us bluntly that we all had to pick sides: research or politics. And we protested, like all dewy-eyed university students who climb on barricades, “No, no, it’s about bringing the wisdom of good science into the field, to make good policy!” And then you get picked off by mercenaries, and start considering a career in upholstery.

*I realize this metaphor is a little mixed, but diving into a field is more or less what I imagine it would be like.

off home

I leave for Dallas this afternoon and return Christmas night to work 12/26-28. Though it wasn’t a choice so much as the need to hang on to my two hard-won vacation days, I’m looking forward to being in the office during that quiet week between holidays. I expect it will be a ghost town. Unfortunately, I’ve mentally double-booked the time with both uninterrupted stretches to buckle down on neglected projects and days of long lunches, coworker bonding, and early happy hours. But either way should be fun, a modest break in the routine.

It has been a little strange to watch the semester end for friends, Bryan, and the general Berkeley population. It’s the first time I haven’t been on a school schedule in one way or another, and it’s not the vacation days I miss as much as the chance to put things to rest and have everything reset in January. Nothing is ending or beginning. I continue to work on projects that could last for years. It’s easy to imagine how months could just start to click by, unmarked.

Fortunately, regular paychecks snap me right out of that melancholy crap. Whee!

FINE, more CS in schools

I’m a big proponent of more art and music in secondary education, but I’ve never gotten quite as excited about the idea of more, say, programming classes.

So I was at work today and this happened:

=VLOOKUP($B$182&"-"&"All Students"&"-"&"Below Basic"&"-"&$B$185&"-"&$B186&"-"&D$9,DataRange,2,FALSE)

It’s just basic Excel. But still. Somehow I ended up being responsible for such things. So much for a liberal arts education.
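For the curious, here’s roughly what that formula is doing, sketched in Python. The cell values and key pieces below are invented placeholders (I have no idea what’s actually in $B$182 and friends): it glues several fields into one hyphen-delimited key, then does an exact-match lookup in a keyed table.

```python
# A keyed table standing in for the spreadsheet's DataRange
# (all values here are made up for illustration).
data_range = {
    "TX-All Students-Below Basic-Reading-Grade 4-2013": 12.5,
}

# Build the composite key the same way the formula's &"-"& chain does.
key = "-".join(["TX", "All Students", "Below Basic", "Reading", "Grade 4", "2013"])

# FALSE in VLOOKUP means exact match only, like a plain dict lookup.
value = data_range.get(key)
print(value)
```

Five minutes of string concatenation in a dictionary, or one very long spreadsheet formula: same idea, different costume.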