I was recently asked to serve on a panel for graduate students on applying for grants and fellowships. It was pretty fun, and I distributed useful arcana that no one else will tell you. (For instance: the left margin on the department letterhead is 0.65625 inches. The more you know!)
But here's the thing about useful arcana that no one will tell you: it freaks people out when somebody decides that, hey, maybe somebody should tell you--somebody like your peers. Not about the margins on the department letterhead; no one cares about that. I'm talking about two oddly powerful sources of fear and moral panic in our profession:
Rate My Professors and the wiki.
I get the impression that in the minds of many academics, these--as much as, if not more than, the budget cuts and casualization of academic labor that are endemic everywhere--pose a threat to the integrity of the university. What we have to fear, many insist, are underinformed undergraduates and underinformed job-seeking junior scholars. In other words, people who are almost completely powerless. The puzzling thing is why anybody would give a crap about either site.
Now, it would be foolhardy to take the contents of either Rate My Professors or the wiki purely at face value. (This is also true of anything on the internet, on the air, or in print, as trained researchers know perfectly well.) But let's look a little more closely at each of these websites, both aimed at allowing a relatively underinformed, institutionally marginalized, almost universally young population to share impressions, opinions, and (yes, sometimes unverified) information. What's so threatening about them?
Let's look at Rate My Professors first.
The thing that strikes me about it at first glance is how boring it is. It's not salacious; it's not scandalous; it's not gossipy; it's not even informative. It's boring in the way that most Amazon reviews are boring. Some students post with more or less vitriol or fan-like devotion, but basically it's "yup, this class was pretty cool" or "dude, this prof is boring." Much as a grade on an academic essay tells you next to nothing about its contents or style, a rating tells you almost nothing about what a professor's actual teaching is like. (Also, yes, the chili peppers are sexist, as are many things on the internet.) It's not a useful site. So why does it inspire fear and loathing?
Well, there's its badness. From a social science perspective, it's bad sampling; the students who post on RMP aren't likely to accurately represent the total student population or the students of any given professor. So, okay, professors are being misrepresented. And I get the sense that a lot of people think that the students posting are acting thoughtlessly or maliciously, which may be disproportionately the case given the sample. But if most students were really posting specifically to spread vicious gossip, then you'd think there'd be some funnier stuff on there.
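Just to make the sampling point concrete, here's a minimal sketch in Python with entirely made-up numbers. The self-selection model (the aggrieved post 40% of the time, devoted fans 25%, the contented middle 5%) is purely an assumption for illustration, not a measured fact about RMP.

```python
import random

random.seed(42)

# Hypothetical "true" opinions of all 200 students in one course,
# on RMP's 1-5 scale, clustered around a middling 3.4.
true_opinions = [min(5.0, max(1.0, random.gauss(3.4, 0.8))) for _ in range(200)]

def posts_review(opinion):
    """Assumed self-selection model: strong feelings drive posting."""
    if opinion < 3:
        p = 0.40   # the aggrieved are likeliest to vent
    elif opinion > 4:
        p = 0.25   # devoted fans post too, somewhat less often
    else:
        p = 0.05   # the satisfied middle mostly stays silent
    return random.random() < p

posted = [op for op in true_opinions if posts_review(op)]

print(f"True class average:  {sum(true_opinions) / len(true_opinions):.2f}")
print(f"Posted-only average: {sum(posted) / len(posted):.2f} ({len(posted)} of 200 students)")
```

Under these assumptions the posted-only average tends to come out noticeably lower, and far more polarized, than the true class average; rerun it without the fixed seed and you'll also see how much a small self-selected sample swings from draw to draw.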
But here's what you really see on RMP: very, very dull, vague evaluations on the basis of criteria that seem to utterly miss the point of taking classes in the first place, given by students who believe that they are performing a public service by making their unincisive opinions known.
What's threatening about that? Well, in it we see the things that really do threaten higher ed: students' apathy; their misapprehension of the academic mission; the degree to which our pedagogical choices seem arbitrary and opaque to students; the degree to which college has been sold to students as either a vocational school or a luxury cruise, or sometimes, oddly, both; the fact that high school graduates whom we have accepted to our institution of "higher learning" almost universally cannot wield a comma worth a damn. That students feel and write this way--or share those writings--isn't the real threat. It's merely a symptom of the far more serious problem: that we as a profession have utterly failed to make the case for learning per se not only to the public but to the very people who are supposed to be engaged in it.
Now you might object that to read RMP as an accurate, and damning, symptom of the structural problems in higher ed is to assume that, if we were really such good professors, students would always be on board with the project of learning and say nice things about us. And that's clearly not true.
But that may be the most unnerving thing of all. For despite the sampling problem, and chili peppers aside, RMP very closely mirrors our own internal teaching assessment tools, the ones we use for hiring, promotions, and teaching awards. The piety that the graduate teaching center at Cal preaches is that student evaluations are a wonderful, accurate assessment tool that we should take very, very seriously. Evaluate early and often, and learn deeply from every one!
But a multitude of studies (just a few of which are cited below) suggest that a broad variety of confounding factors affects teaching evaluations, including gender, race, student expectations of gender norms, the "course effect" (professors teaching unpopular required courses are likely to receive lower evaluations), and grade inflation. And let's not forget the infamous chocolate study--the one that found that evaluations were higher among students who were offered chocolate before filling out the evaluation. The evaluations on Rate My Professors are sadly familiar: the lack of specificity, the contradictions, the complaints that there was homework, the recommendations that the class not be scheduled so early in the morning, or that the prof bring in doughnuts. Let's face it: sometimes our course evaluations are quite useful, and sometimes they really aren't.
Does this mean that evaluations are worthless? Of course not, but they have to be considered with care. Our own internal teaching assessments are not much more methodologically sound than RMP. In other words, you never get an "out" from evaluating your sources, and you certainly can't just rely on a numerical score. It's no more reasonable to trust course evaluations unthinkingly than it is to trust RMP unthinkingly. The need for critical thinking is just a little more obvious with one of them, because it advertises its lack of methodological care with chili peppers and smiley faces.
So is RMP a threat? I think it's nothing compared to the threats it reveals within the structure of the profession itself. It's depressing that such a significant percentage of students are in an academic institution for non-academic reasons, and it's chilling that one of the principal ways we assess a central component of our jobs is a relatively inaccurate, little-understood tool.
* * *
In any case, the RMP panic is old news. You know what really sends (some) faculty into fits of terror? "The wiki," as it's known. Kind of like "the whale."
Here's the story. Some years ago, some enterprising junior scholars made a free wiki for posting information on ongoing academic job searches. As lore has it, the idea was to share information among job-seekers, in solidarity in the face of a brutal market with an enormous power imbalance.
The way I often hear faculty talk about the wiki, it's a viper's nest of gossip, misinformation, and drama, something that will poison your mind and destroy your soul. And I guess it could get to be that way if one were to read or check it obsessively. But then, if you're checking the wiki obsessively, you've already got a problem, and it wasn't caused by the wiki. Since many faculty also freely admit that the academic job market itself has the capacity to poison your mind and destroy your soul--and that also goes for those who are on search committees--one suspects that we have another case of shooting the messenger on our hands.
I don't post to the wiki, and I don't read it with any frequency, but I have it bookmarked. Why? Because nobody is more likely to ferret out obscure fellowships than a large group of desperate, underemployed junior scholars. And seeking out fellowships for which to apply is a normal part of every research academic's life, so while doing one's usual fellowship trawl, why wouldn't one check a lengthy, highly inclusive list crowd-sourced by people in a position to care? It's far more comprehensive and up-to-date than the fellowship lists published by the MLA, for instance (sorry, MLA). No one list of fellowships is ever going to suit you perfectly or be highly accurate, so if you're going to have to double check all the information anyway (and believe me: YOU DO), you may as well avail yourself of others' obsessive information-gathering.
Is the wiki unreliable? Of course--like everything on the internet, on the air, and in print. But is it a viper's nest of gossip, misinformation, and drama?
Here's a screenshot of one of the "notes and queries" (heh) sections, for a position at Baker University (Baldwin City, KS):
SCANDAL! There's a ... clarification of the job posting, and a note about the department structure. Oh.
Unlike RMP, this does seem like actual information, which one could check up on if one wanted--probably pretty easily. What's so threatening about the wiki?
Perhaps what's threatening about it is that it isn't a viper's nest of misinformation and rumor--that it holds a mirror up to the irrationalities of the academic labor situation and gives the lie to the notion that there is anything we could reasonably call a "market" with some kind of regulating invisible hand. Let me be clear: I'm not complaining about my personal situation (I currently have the best postdoc I could possibly want), nor am I claiming that the wiki is a thoroughly accurate source of information. (I mean: it's a wiki.)
But I do believe that, in the aggregate, the wiki is a testament to the lack of transparency and the unreasonable burdens of the process, and to the large number of junior scholars who suffer for it. I happen to know it's not a walk in the park for the hiring committees, either; it's neither fun nor actually useful to have to evaluate five hundred applications, and the most freakish part of the tight job market is that so many searches actually fail: after reading all those files and interviewing all those people, the committee finds no suitable candidate. The very existence of the wiki is a symptom of a much bigger problem in academic labor, one that affects tenured professors as well as ABDs, postdocs, and other junior scholars (and of course the perpetually lamented, rarely actually helped adjuncts). But it's much easier to condemn the wiki and run yet another search, having the already overworked faculty stay up nights exhaustedly skimming five hundred abjectly well-crafted cover letters, than it is to try to effect systemic change in the academic labor situation--a task so monumental as to be nearly unthinkable.
I'm not saying anything new or even, I think, controversial about the structural problems in higher ed. The job "market" barely functions and inflicts a lot of collateral damage, and undergraduate teaching faces a significant crisis of legitimacy because the public is neither clear on what learning is nor convinced that undergraduates actually ought to do it. What the moral panic around the Websites of the Frustrated (that's my new umbrella term for RMP and the wiki) tells us is that we'll do anything to avoid addressing either problem at its root.
Algozzine, Bob et al. “Student Evaluation of College Teaching: A Practice in Search of Principles.” College Teaching 52.4 (2004): 134-141. Print.
Baldwin, Tamara, and Nancy Blattner. “Guarding against Potential Bias in Student Evaluations: What Every Faculty Member Needs to Know.” College Teaching 51.1 (2003): 27-32. Print.
Chism, Nancy Van Note. “Teaching Awards: What Do They Award?” The Journal of Higher Education 77.4 (2006): 589-617. Print.
Eiszler, Charles F. “College Students' Evaluations of Teaching and Grade Inflation.” Research in Higher Education 43.4 (2002): 483-501. Print.
Laube, Heather et al. “The Impact of Gender on the Evaluation of Teaching: What We Know and What We Can Do.” NWSA Journal 19.3 (2007): 87-104. Print.
Romney, David. “Course Effect vs. Teacher Effect on Students' Ratings of Teaching Competence.” Research in Higher Education 5.4 (1976): 345-350. Print.
Youmans, Robert J., and Benjamin D. Jee. “Fudging the Numbers: Distributing Chocolate Influences Student Evaluations of an Undergraduate Course.” Teaching of Psychology 34.4 (2007): 245. Web.