Double Standards

by: Andreas Broscheid

One of the fun things about being a political scientist is that … Okay, maybe I should say: One of the annoying things about political scientists is that they always (I’m exaggerating here) have to contradict what everyone else thinks they know about politics. You think gerrymandering is to blame for political polarization in the US? Think again! Regular Americans are ideologically polarized? These political scientists, at least, argue that they’re not. Presidential debates tend to decide presidential elections? Uh-uh. And the list goes on.

I am pretty sure political scientists aren’t the only ones contesting the common wisdom about their field of inquiry. It’s not that common wisdom is necessarily wrong. (This guy, for example, has data to show that regular Americans are in fact ideologically polarized.) What happens is that experts are not content to accept just any plausible hypothesis; they want to see evidence, test arguments for internal consistency, check whether a hypothesis is consistent with other hypotheses they believe to be true, investigate whether there are additional sources that might contradict it, and so on. And at the end of this process, many hypotheses turn out to be highly questionable or barely on life support.

Considering the intellectual rigor that we display in our academic disciplines, I find it startling that we apply rather lackadaisical standards when it comes to checking whether our teaching works. We typically check at the end of the semester whether our students found their classes useful and enjoyable; we check our grade distributions; otherwise, most of us rely on our well-developed and sophisticated sense of observation to determine whether something worked: the students looked alert; I managed to get through everything I planned to do in class; nobody left ten minutes before the class ended; the class was really fun for me. These aren’t necessarily bad criteria (and, as I said, maybe regular Americans really are ideologically polarized), but shouldn’t we be dissatisfied with this lack of rigor?

Of course, I’m painting with a broad brush here, burning a few straw men, and mixing inappropriate metaphors. At least I know that what I’m saying is true of most of my own work. (And I know that it’s not true for many others on campus.)

What should we do? The literature on post-secondary education proposes scholarly teaching as a step beyond being an intuitive and (hopefully) excellent teacher. Basically, this approach uses the scholarly process as a model for preparing and teaching a class and for evaluating its learning outcomes. Of course, the scholarly process differs across disciplines. In my own discipline, it starts with literature research (there is the Journal of Political Science Education, but also plenty of other helpful journals indexed in EBSCO’s Education Research Complete, which my employer luckily subscribes to). As a quantitative political scientist, I find it natural to view my classes as natural experiments in which I manipulate the learning environment in a way that hopefully improves learning outcomes. Obviously, students are not randomly assigned to my classes (although some first-year students in my gen-ed class have effectively been assigned to it at random by their advisors), so it’s not a true experiment. But at least I can control, to some extent, what “my” students are exposed to. Class surveys are for me the obvious choice for figuring out whether my teaching works – whether students know what I want them to know, whether their perceptions of the subject matter change, whether their attitudes become more sophisticated, and the like. I can look at whether students’ perceptions of their learning and attitudes change, but I can also use direct knowledge tests to figure out what happens in my class. Heck, maybe I can even design class assignments that provide me with useful feedback on this. And if I am nice to my colleagues (and if they teach comparable classes), I may be able to compare outcomes in my class to outcomes in theirs.
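To make that concrete, here is a minimal sketch of what such a comparison might look like, assuming pre- and post-semester knowledge-test scores from my own section and end-of-semester scores from a colleague’s comparable section; the numbers are placeholders, and the choice of t-tests is just one plausible way to analyze the data, not anything the scholarly teaching literature prescribes.

```python
# Hypothetical sketch: did knowledge-test scores improve within my section,
# and how does my section compare to a colleague's at the end of the semester?
# All scores below are made-up placeholders, not real student data.
from scipy import stats

# Pre- and post-semester scores for the same students in my section (paired design)
pre_scores  = [52, 61, 48, 70, 55, 63, 59, 66]
post_scores = [68, 74, 60, 81, 62, 75, 70, 77]

# Within-class change over the semester (paired t-test)
paired = stats.ttest_rel(post_scores, pre_scores)
print(f"Pre/post change: t = {paired.statistic:.2f}, p = {paired.pvalue:.3f}")

# End-of-semester scores in a colleague's comparable section (independent samples)
colleague_scores = [64, 70, 58, 72, 61, 69, 65]

# Between-section comparison (Welch's t-test, since class sizes and variances differ)
between = stats.ttest_ind(post_scores, colleague_scores, equal_var=False)
print(f"Between sections: t = {between.statistic:.2f}, p = {between.pvalue:.3f}")
```

Since students are not randomly assigned to sections, any difference between my class and my colleague’s is only suggestive – which is exactly why this is a natural experiment rather than a true one.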

Obviously, people in other disciplines will want to follow other scholarly strategies in their teaching. Somebody in history or in literature may want to analyze student writing; a qualitative social scientist might be more interested in in-depth interviews with students. I know that several psychologists at JMU apply the sophisticated strategies their field has developed to randomize teaching “treatments.” Different disciplines know different ways to be rigorous, so why not use them for our teaching?
