Showing posts with label NYT.

Sunday, September 25, 2011

It's not "the job market"; it's the profession (and it's your problem too)

I enjoyed Kathleen Fitzpatrick's recent piece in the Chronicle* on risk-taking and the responsibility of mentors to back up those junior scholars who are doing nontraditional work. The piece's key insight is that it's one thing to urge people to "innovate" and quite another to create the institutional frameworks that make innovation not only possible but consequential.**

Kathleen's observation comports with some ideas that have been floating around in my head lately, especially around "digital humanities." I and my Fox Center colleague Bart Brinkman were recently called upon to define digital humanities for the other fellows in residence, and in the process of talking it over with Bart, and during the discussion at the CHI, I've come to realize that I have some real pet peeves around the notion of the "job market" that come into relief specifically around the field of digital humanities.

It boils down to this: peeps, we're all connected.

The recent rise to prominence of digital humanities is indistinguishable from its new importance in "the job market" (I insist on those scare quotes); after all, digital humanities and its predecessor, humanities computing, have been active fields for decades. What's happening now is that they are institutionalizing in new ways. So when we talk about "digital humanities and the 'job market,'" we are not just talking about a young scholar's problem (or opportunity, depending on how you see it). We are talking about a shift in the institutional structures of the profession. And, senior scholars, this is not something that is happening to you. You are, after all, the ones on the hiring and t&p committees. It is a thing you are making—through choices that you make, and through choices that you decline to make.

There's something a little strange about the way that digital humanities gets promoted from the top down; it gets a lot of buzz in the New York Times; it's well known as dean-candy and so gets tacked onto requests for hires; digital humanities grant money seems to pour in (thanks, NEH!) even as philosophy departments across the country are getting shut down; university libraries start up initiatives to promote digital humanities among their faculty. I am waiting for the day when administrators and librarians descend upon the natural sciences faculty to promote history of science. No, I really am.

So it seems quite natural that there should be wariness and resistance to the growing presence of digital humanities. Perhaps there is some bitterness that you might get your new Americanist only on condition that her work involves a Google Maps mashup, because it was easy to persuade people that your department needed a new "digital humanist," whatever the hell that is, and it was not easy to persuade people that you needed somebody to teach Faulkner.

The situation is not improved by the confrontational attitudes of certain factions of the digital humanities establishment (such as it is), which are occasionally prone to snotty comments about how innovative DH is and how tired and intellectually bankrupt everybody else's work is. (Not so often, I find—but even a little is enough to be a problem.) Under those circumstances, DH seems clubby and not liberating; not a way of advocating for the humanities but an attack on them, and specifically on the worth of that Faulkner seminar that you teach, and that non-digital research that you do. Why, an established scholar might reasonably ask, should I even deal with this "digital humanities" nonsense? Shouldn't I just keep teaching my Faulkner seminar, because somebody ought to do it, for Christ's sake?

Well, whatever else DH is, it is highly political, and it has political consequences. So, in short, no.

I'm persuaded that the widespread appeal of DH has much to do with the leveling fantasy it offers, a fantasy of meritocracy that is increasingly belied elsewhere in the professional humanities. As Tom Scheinfeldt points out in his useful "Stuff Digital Humanists Like,"
Innovation in digital humanities frequently comes from the edges of the scholarly community rather than from its center—small institutions and even individual actors with few resources are able to make important innovations. Institutions like George Mason, the University of Mary Washington, and CUNY and their staff members play totally out-sized roles in digital humanities when compared to their roles in higher ed more generally, and the community of digital humanities makes room for and values these contributions from the nodes.
This is true. Those involved in digital humanities have also seen the ways that THATCamps, blogs, and Twitter allow junior scholars and scholars at non-R1 institutions to cut geodesics across the profession, spreading their ideas, collaborating, and achieving a certain prominence that would have been impossible through traditional channels. I'm convinced that real possibilities lie here.

And as traditional scholarly publishing becomes more and more constricted and humanities department budgets are slashed, the fiction of academic meritocracy becomes harder and harder to sustain. Perhaps on the web, we think, through lean DIY publishing and postprint review, meritocracy (or its semblance) can return to the academy. It seems at once a way forward and a way to return to a (fabled) time when people cared about scholarship for the sake of scholarship—not because they needed X number of well-placed articles or a line on the cv or a connection at Y institution without which their careers would disappear. Perhaps DH offers us a way out of the increasingly rationalized death-spiral of "impact scores" and credential inflation. Perhaps it will let us out-quantify the quantifiers, or sidestep them altogether.

Of course, the web always comes with liberatory rhetoric that usually turns out to mean little more than "what the market will bear," and the ostensible meritocracy of digital humanities in the present moment is really no more than a misalignment between its alternative (and potentially even more aggressively capitalistic) value systems and those of the institutionalized humanities more generally. It can be disturbingly easy for the genuinely progressive intentions of digital humanists to become assimilated to the vague libertarianisms of "information wants to be free" and "DIY U," and from there to Google Books and charter schools and the privatization of knowledge—an enclosure of the digital commons ironically in the name of openness. At the same time, the naming of the "alt-ac" "track" (it is generally not a track, of course, by definition) seems to provide new opportunities for young scholars even as it raises research expectations for staff and requires those on the "track" to subordinate their research interests to those of the institutional structure that employs them. Digital forms are exceptionally good at occluding labor. How to navigate those waters thoughtfully—to realize the real promise of DH—is a question to which we must all apply ourselves.

So you see what I mean when I say that "digital humanities and 'the job market'" as it now manifests isn't a narrow, merely administrative sliver of life of interest solely to junior academics who are still gravely listening to advice about how to "tailor" the teaching paragraphs in their cover letters. Digital humanities has become important to "the job market" exactly insofar as it is causing major shifts in the institutions of the profession. These shifts are political. And if you are in my profession, then they are your concern.

*I know, "enjoyed" and "Chronicle" in one sentence... mirabile dictu.

**As we all know, I have a complex relationship with the word "innovation" and do not consider it an unqualified good, nor a transhistorical value. For today, however, we will leave that particular word a black box.

Thanks to Bart and Colleen for sitting through a less-worked-out live version of this rant last week.


Sunday, December 19, 2010

A Supposedly Fun Thing: Text-Mining and the Amusement/Knowledge System; or, the Epistemological Sentimentalists

If we could text-mine the internets of the last few days for the correlation between the words "n-gram" and "fun," I'm sure we'd get a nontrivial number. One of the most striking things about the reception of the Google Books Ngrams, largely in the form of the web tool, is the giddy delight with which people have announced how much fun it is. Exhibit A is the bit I quoted yesterday from Patricia Cohen at the New York Times:
The intended audience is scholarly, but a simple online tool allows anyone with a computer to plug in a string of up to five words and see a graph that charts the phrase’s use over time — a diversion that can quickly become as addictive as the habit-forming game Angry Birds.
But that's just one example--the fun of the Google Books Ngrams tool is almost universally noted. See, for instance, "Fun With Google's Ngram Viewer" (Mother Jones), "Fun with Google NGram Viewer" (WSJ), and "BRB, Can't Stop NGraming" (The Awl). And Dorothea Salo tweets,
What I like about the GBooks n-grams is seeing all kinds of people playing with it. Just playing. THAT, friends, is how one learns.

The prevalence of this language of play raises two questions.

1. What rhetorical work is this move (calling Google Books Ngrams a fun toy) doing?

2. What experiential dimension of Google Books Ngrams does this rhetorical move describe, and what does it tell us about the tool's epistemic significance?

Answering the first question feeds into answering the second. To call the Google Books Ngrams web tool (henceforth "GBN") a fun toy is to hedge one's bets, to express approval without necessarily venturing into the higher-stakes terrain of approving it as a research method. Any assessment of the tool's epistemic value is channeled through an expression of pleasure (or, as Patricia Cohen and The Awl's Choire Sicha rather interestingly suggest, compulsion). Play can of course be a form of learning, and a very important one--that's what Dorothea Salo's tweet indicates. But play is a good learning environment precisely because the stakes are low and mistakes can be made safely, as a comment by Bill Flesch suggests: "I played around with it for about half an hour. Now I'm bored." New toy, please! With respect to knowledge, the language of play is deeply ambivalent.

As I read it, the universal declaration of fun that has surrounded the release of GBN is as much about guilt as about pleasure. Those who are compulsively "ngraming," as Sicha so amusingly puts it, are often all too aware of GBN's limitations, which have been blogged extensively, all the way down to what Natalie Binder points out, in her much-retweeted post, has to underlie the whole operation: inevitably imperfect OCR.*

Why does the GBN web tool even exist? Not to advance knowledge, I don't think, or at least not directly, but rather because it's fun. Because it directs interest toward the more substantive element of the project, the downloadable data set that relatively few people are actually going to download.

There are huge problems with using GBN (and throughout I'm alluding to the web tool/toy that everybody is saying is so much fun) as any sort of meaningful index of culture, and everyone knows it. And yet.

I would argue that the universal declaration of fun is a form of confession: I am deriving epistemological satisfaction from this unsound tool, with its built-in Words for Snowism. It's a guilty pleasure, epistemic candy: the sensation of knowledge, lacking in any nutritional value.

But the guilt goes rather deeper than the simple tension between GBN's unreliability for actual research and the "gee whiz!" quality of the graphs: GBN is fun because it is so limited.

That great scholar of nineteenth-century culture, Walter Benjamin, described a mode of writing that he called "information."
Villemessant, the founder of Le Figaro, characterized the nature of information in a famous formulation. 'To my readers,' he used to say, 'an attic fire in the Latin Quarter [Paris] is more important than a revolution in Madrid.' This makes strikingly clear that what gets the readiest hearing is no longer intelligence coming from afar, but the information which supplies a handle for what is nearest. Intelligence that came from afar--whether over spatial distance (from foreign countries) or temporal (from tradition)--possessed an authority which gave it validity, even when it was not subject to verification. Information, however, lays claim to prompt verifiability. The prime requirement is that it appear 'understandable in itself.' (147, emphasis added)
What GBN delivers is information in this sense. It is near at hand, easy to use, and puts out a nice visualization that appears "understandable in itself." It's easy to deliver, in that way, not unlike a pizza. It's no good to point out, as Mark Davies does, that the Corpus of Historical American English (COHA) allows one to look at specific syntactic forms, or include related words, or track usages by the genre of the source. Such capacities only raise anxieties. (What gets tagged as "nonfiction," for instance? Where do autobiographies go? I once, to my astonishment, saw The Autobiography of Alice B. Toklas in the nonfiction section of a bookstore--along with Three Lives! But I digress.)

As soon as we raise such questions, the graph stops being "understandable in itself," stops being information. Conversely, when you aren't given the choice to sort by genre, how genres are defined necessarily stops being a question. It's the very fact that the toy is a black box and a blunt instrument that makes it feel immediate and incontrovertible and, in that very satisfying way, obvious. We get the epistemic satisfaction of information, and the thing that gives it to us is precisely that information's lack of nuance.
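(A concrete aside, for anyone who wants to open the black box rather than click: the sketch below--mine, not Google's--is one way to ask a question the web tool forecloses, namely how a word's counts split across its casings in the raw 1-gram files, since GBN treats differently capitalized forms as distinct ngrams. The tab-separated column order is an assumption on my part; the dataset's README, not this sketch, is the authority, and the filename in the final comment is made up for illustration.)

# A rough sketch (not Google's code) of one question the web toy hides:
# how a word's raw counts divide among its case variants. It assumes the
# downloaded 1-gram shards have been extracted to plain text, with each
# line tab-separated as roughly (ngram, year, match_count, ...); check
# the dataset's README and adjust the column indices if yours differs.
from collections import Counter

NGRAM_COL, MATCH_COL = 0, 2  # assumed layout: (ngram, year, match_count, ...)

def case_variants(shard_path, word):
    """Total the raw match counts for every casing of `word` in one shard."""
    totals = Counter()
    with open(shard_path, encoding="utf-8") as fh:
        for line in fh:
            row = line.rstrip("\n").split("\t")
            if row[NGRAM_COL].lower() == word.lower():
                totals[row[NGRAM_COL]] += int(row[MATCH_COL])
    return totals

# e.g. case_variants("1gram-shard.tsv", "women") -- the filename is
# hypothetical -- shows how much signal a case-sensitive query leaves out.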

Yesterday I used the word "cheap" to describe the kind of historical narratives GBN suggests. There is indeed a kind of economic dimension to the satisfaction that GBN delivers. Of Oscar Wilde's many quotable lines, I am reminded of this one:
The fact is that you were, and are I suppose still, a typical sentimentalist. For a sentimentalist is simply one who desires to have the luxury of an emotion without paying for it. (768)
Feeling, Wilde suggests, has to be earned.** Bracketing the question of whether this is a good description of sentimentalism, it's a good analogue for the epistemic candy of GBN. One receives the apparent solidity of research--the nice graph that summarizes and visualizes what might otherwise be years of labor in the making--without actually having done any research. This is only a cheap thrill, "fun," when it is actually cheap--that is, when we don't inquire into how the corpus was prepared, or what effects GBN's case-sensitivity is having on our results.

The analogy to sentimentalism is useful not only because it gives us a model for understanding the economy of feeling here, but also because it allows us to recognize that there is an element of feeling in the way that we encounter information. We are likely to find it ethically reprehensible when our emotions or what we believe we know are manipulated. And yet there are times when we want the cheap thrill. Most people I know will freely cop to liking a good emotionally manipulative movie or novel, whether a thriller or a romance or one of those movies where the dog dies. As the fun of ngrams demonstrates, we like a little intellectual manipulation too.

(I know, I know, it doesn't tell you anything conclusively, but...try Foucault versus Habermas!)

What does it mean, this liking it?

I mentioned Bill Brown's term, the "amusement/knowledge system," in my title above because it's another, perhaps more explicit way of describing the close interweaving of knowledge and fun at the end of the nineteenth century that so fascinated Benjamin (208). In my own work I have tried to make a case for taking seriously both the knowledge and the amusement in that system, notably in naturalist fiction, because it's often in such liminal places that the terms of what counts as knowledge are most at stake. Part of the reason experimental literature seems to be here to stay is that the amusement/knowledge system is, too.

The point is not to condemn fun as something that has no place in knowledge--far from it. Fun is central to how we vet knowledge--just think of how important it is that research be "interesting"! It is our highest (and also most common) praise.*** Indeed, play lies at the heart of our most cherished models of intellectual inquiry--a nonutilitarian curiosity to "see what happens." As I quoted Dorothea Salo at the beginning of this post: "THAT, friends, is how one learns."

So condemning fun is not at all on my agenda. Rather, I want to draw attention to the emotional content of the way we talk about knowledge, and to the ambivalence that intellectual "fun" signifies. Ours is an age of "news junkies" (again with the pleasure bordering on unpleasurable compulsion, à la the "addictive" ngrams) and "armchair policy wonks" and people who read voraciously, but only in the proverbial dubiously defined "nonfiction" category. Nate Silver and the Freakonomics dudes are minor celebrities. Lies, damned lies, and statistics are our idea of fun, as powerfully as a Victorian melodrama was ever considered fun. Which means we need to think much more about how fun operates, and why, and what that means for knowledge. And just as crucially: what knowledge means for pleasure.


*In fairness, Ben Schmidt argues that GBN's OCR is pretty accurate, given the state of the field, and also that "No one is in a position to be holier-than-thou about metadata. We all live in a sub-development of glass houses." But there's a big difference between "this is really good, for OCR" and "this degree of accuracy is good enough for supplying evidence for X kinds of claims."

**Taken out of context, Wilde appears here to be describing sentimentalism through an economic metaphor. In fact, it's rather the reverse, or at the very least something more confused than that: most of the surrounding text is taken up with Wilde chastising Douglas for his financial mooching.

***As Sianne Ngai points out, the "interesting," like the language of play, has a hedging quality, bridging epistemological and aesthetic domains.

Benjamin, Walter. "The Storyteller: Observations on the Works of Nikolai Leskov." Trans. Harry Zohn. Selected Writings: Volume 3, 1935-1938. Ed. Howard Eiland and Michael W. Jennings. Cambridge, Mass.: Belknap-Harvard UP, 2002. Print.

Brown, Bill. The Material Unconscious: American Amusement, Stephen Crane, and the Economies of Play. Cambridge, Mass.: Harvard UP, 1996. Print.

Ngai, Sianne. "Merely Interesting." Critical Inquiry 34.4 (Summer 2008): 777-817. Print.

Wilde, Oscar. "To Alfred Douglas." Jan.-Mar. 1897. The Complete Letters of Oscar Wilde. Ed. Merlin Holland and Rupert Hart-Davis. New York: Henry Holt, 2000. Print.

Previously on text-mining:
Google Books Ngrams and the number of words for "snow"
Dec. 16, 2010
Dec. 14, 2010
Google's automatic writing and the gendering of birds

Friday, December 17, 2010

Google Books Ngrams and the number of words for "snow"

As I mentioned yesterday, Google has put out a big data set (downloadable) and a handy interface for tracking the incidence of words and phrases. As many have pointed out, one can do a lot more with the raw data set than with the handy, handy online tool, but it's the latter that the New York Times called
a diversion that can quickly become as addictive as the habit-forming game Angry Birds.
(I've never heard of Angry Birds, but that's the kind of thing I'm likely to be out of the loop on, so okay.)
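(Since the distinction between the toy and the data keeps coming up, here is a hedged sketch of what "a lot more with the raw data set" can look like in practice: reading the downloaded 1-gram files yourself and normalizing the counts by each year's totals, so that the denominators are at least your own choice. The column layout is my assumption, not Google's specification; follow the README of whichever version of the files you download, and note that the filenames in the last comment are placeholders.)

# A minimal sketch of working with the raw, downloadable ngram files rather
# than the web interface. Assumes the shards are extracted to plain text and
# tab-separated as roughly (ngram, year, match_count, ...); dataset versions
# differ, so adjust the indices below to match the README.
from collections import defaultdict

NGRAM_COL, YEAR_COL, MATCH_COL = 0, 1, 2  # assumed layout

def yearly_counts(shard_paths, targets, fold_case=True):
    """Sum raw match counts per (word, year) for a set of target words."""
    norm = (lambda w: w.lower()) if fold_case else (lambda w: w)
    wanted = {norm(t) for t in targets}
    counts = defaultdict(lambda: defaultdict(int))
    for path in shard_paths:
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                row = line.rstrip("\n").split("\t")
                word = norm(row[NGRAM_COL])
                if word in wanted:
                    counts[word][int(row[YEAR_COL])] += int(row[MATCH_COL])
    return counts

def relative_frequencies(counts, totals_by_year):
    """Divide each year's count by that year's total tokens (taken from the
    dataset's total-counts file) to get comparable relative frequencies."""
    return {word: {year: n / totals_by_year[year]
                   for year, n in by_year.items() if year in totals_by_year}
            for word, by_year in counts.items()}

# e.g. relative_frequencies(yearly_counts(["1gram-a.tsv"], {"men", "women"}),
#                           totals)  # filenames and `totals` are placeholders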

I said yesterday that Google Books Ngrams was a lot more sophisticated than Googlefight, and it is. But I'm troubled by the model of cheap history that's presented in the NYT article--as if to suggest that if you want to do cultural studies now, all you need to do is Google (Books Ngram) it:
With a click you can see that “women,” in comparison with “men,” is rarely mentioned until the early 1970s, when feminism gained a foothold. The lines eventually cross paths about 1986.

You can also learn that Mickey Mouse and Marilyn Monroe don’t get nearly as much attention in print as Jimmy Carter; compare the many more references in English than in Chinese to “Tiananmen Square” after 1989; or follow the ascent of “grilling” from the late 1990s until it outpaced “roasting” and “frying” in 2004.

“The goal is to give an 8-year-old the ability to browse cultural trends throughout history, as recorded in books,” said Erez Lieberman Aiden, a junior fellow at the Society of Fellows at Harvard.
I will concede that newspaper articles are necessarily glib, but it's easy to see how the fallacy that this article promotes would be broadly accepted. The first quoted paragraph above correlates the incidence of words with known historical events; the second moves on to suggest the ngrams' predictive capacity. There's a narrative implicit in each statement of "just the facts," only the assumptions that go into them are effaced.

Let's look at the first of these reports: "With a click you can see that “women,” in comparison with “men,” is rarely mentioned until the early 1970s, when feminism gained a foothold."

The implicit narrative is that nobody even bothered to talk about women until second-wave feminism came along. In fact, if you go by the incidence of the words "men" and "women" in the Google Books Ngrams data set, sure, you might be tempted to really believe that the 1970s was the time "when feminism gained a foothold." I can imagine the suffragists who fought for and won the franchise that I, as a woman, get to exercise annually, asking: "what are we, chopped liver?"

What distinguishes the feminist movement of the 1970s, for the purposes of this data set, is its renewed attention to language. The suffragists wanted a policy change: they wanted the vote (and the freedoms that the vote could give them). The second-wave feminists wanted policy changes too (still working on that wage gap, people!) but they also wanted a deeper change: they wanted to change the way we thought about women and--here's the kicker--spoke about women. The 1970s is when it became broadly recognized as problematic to treat "man" as a synonym for "person," and I suspect that a significant percentage of the uses of "men" were and remain the "universal" usage. That's a nuance that the online Ngrams tool can't give you ("with a click").

Likewise, if you got your understanding of history through Google Books Ngrams, you wouldn't expect to hear this from 1929:
Have you any notion of how many books are written about women in the course of one year? Have you any notion how many are written by men? Are you aware that you are, perhaps, the most discussed animal in the universe? Here had I come with a notebook and a pencil proposing to spend a morning reading, supposing that at the end of the morning I should have transferred the truth to my notebook. But I should need to be a herd of elephants, I thought, and a wilderness of spiders, desperately referring to the animals that are reputed longest lived and most multitudinously eyed, to cope with all this. I should need claws of steel and beak of brass even to penetrate the husk. How shall I ever find the grains of truth embedded in all this mass of paper, I asked myself, and in despair began running my eye up and down the long list of titles. Even the names of the books gave me food for thought. Sex and its nature might well attract doctors and biologists; but what was surprising and difficult of explanation was the fact that sex--woman, that is to say--also attracts agreeable essayists, light-fingered novelists, young men who have taken the M.A. degree; men who have taken no degree; men who have no apparent qualification save that they are not women. (27)
That's Virginia Woolf, of course, giving a fictionalized, subjective encounter with the British Library. Yes, it's a bit longer than a sentence, and you have to read it; you can't just click! But it gives you much more women's history than does the Google Books Ngrams example cited by the NYT.

Google Books Ngrams is a fun tool (as everyone keeps pointing out) and, if you download the data set, even a useful one. But it can only get you so far, and uncontextualized, it encourages assumptions that it does not announce. I mention the number of words for "snow" in my title above because it's a famous fallacy--the notion that Inuit has [insert high number here] words for snow, always with the implicit suggestion that having a lot of words for something means that something is extremely important to the culture. Language Log uses this as their go-to example of stupid assertions about language widely believed by the public; it's a cheap Whorfism, claiming broad cultural significance for something incidental. We have a widely accepted term for a magical being that flies by night and runs a clandestine cash-for-baby-teeth operation. That doesn't make it central to American culture. ("Mom, is the Tooth Fairy real?" "Yes! Check Google Books Ngrams if you don't believe me!")

There's a certain Words For Snowism in the online Google Books Ngrams tool, the suggestion that the more frequently a word is used, the more important it is in a collective unconscious of which the Google Books data set serves as a convenient index. This importance is not the same thing as significance, in the sense of significant digits or statistical significance; it's not the difference that makes a difference, but rather a psychologized importance--attachment, cathexis. Which is really kind of garbage.

The web interface is, as my friend Will says, a toy. For the serious scholar, there's much more to be done with ngrams, and one can be careful as well as lazy with the conclusions one draws. But the toy has a "boom! proven with statistics!" quality, a reality-effect that's enormously pleasurable, even, as Patricia Cohen writes for the NYT, "addictive." (That's the point of toys, isn't it?) That's why I'm inclined to agree with Jen Howard, who writes that her "skepticism is mostly directed at how people will use it and what kinds of conclusions they will jump to on dubious evidence." That sort of jumping is practically built into the ngrams tool.


Woolf, Virginia. A Room of One's Own. Annot. and introd. Susan Gubar. 1929; Orlando: Harcourt, 2005. Print.

Previously on text-mining:
Dec. 16, 2010
Dec. 14, 2010
Google's automatic writing and the gendering of birds

Friday, September 17, 2010

In a recent review, David A. Bell compares Mark C. Taylor to Thomas Friedman -- it's cold, but apt. Taylor isn't a public intellectual so much as a public anti-intellectual, an outlier who can't possibly represent higher ed in general but whose voice is disproportionately amplified by the mainstream media. In fact, it may be Taylor's own tunnel vision that's the most powerful argument against (a segment of) the academic system that he decries, for the problems he rails against, when they are real problems at all, only exist in his highly idiosyncratic situation. As Bell puts it:
He dismisses—in a few sentences—the idea that [tenure] might protect academic freedom, noting that he has never personally seen it under threat, and that in forty years of teaching he has never met a professor “who was more willing to express his or her views after tenure than before.”

On the second of these points, I can only conclude that Taylor and I know a very different set of academics. As to the first of them, well, Taylor’s personal experience came at Williams College and Columbia University. Perhaps he should think for a moment of what it might be like to teach at a large public university in a state where Tea Party members increasingly dominate the legislature, denouncing “radical professors” and calling for the further slashing of university budgets. Would he feel entirely free, at such an institution, to start a research project on, say, homoeroticism in American poetry? The evolution of dinosaurs? The history of racial discrimination in American evangelical churches? Corruption in the state senate? Lifetime tenure, for all its problems, still provides a very real safeguard for the advancement of unpopular ideas.

It's like Tina Fey told us: if you can't see outside it, then being in a bubble causes you to become a little bit dumb. Historiann has recently written about the failure of the national "conversation" on higher ed to meaningfully acknowledge state universities; Dean Dad similarly points out that such polemics never even consider the single most powerful component of public education: community colleges. The discussions of higher ed that are being privileged in the mainstream press totally ignore the vast majority of higher ed, at no risk to the people making their pronouncements but to the great detriment of most of higher ed and the students and public it serves. Mark C. Taylor, his book contract, and his repeat invitations to the New York Times are exhibit A.

Sunday, August 22, 2010

You might be thinking, "Hey, Natalia, are you still getting google hits for that Duns Scotus citation thing?"

The answer to that would be yes.

I hope that anonymous grad student has a book contract, is all I'm saying.

Saturday, July 24, 2010

Attention and Length

In the midst of a satisfying rant about the recent NYT faux-forum on tenure, Aaron notes:
But the real problem is simply that this was never going to be a real discussion anyway; in 350 words, not much can be said about a complicated issue, and so it’s hardly surprising that not much was said. The NY Times’ decision to limit these contributions to such a microscopically small word count — in a virtual forum whose space is virtually infinite — illustrates that they were far more interested in the pretense of debate than an actual discussion (the same way grabbing onto a reliably orthodox leftist and two reliably orthodox conservatives demonstrates an interest in the pretense of balance, rather than the reality of actual discussion). Which is why, as irritating as this non-discussion is, it’s totally unsurprising.
Aaron complains about the shortness of the pieces, none of which respond to the others, because it prevents any depth of discussion. (Uh, MLA roundtable, anybody?)

What caught my eye in Aaron's statement was the point he makes about the cheapness of space on the web. The available space is, if not limitless, much more than anyone could possibly need. Digital humanists make this point all the time, and it is just a true fact: space is cheap on the web. The capacity to store large texts is there.

Yet there's also a contrary notion, namely that, despite arbitrarily expandable space, the web is not the natural home of the long form but rather a "shallows," a place of soundbites and snippets and Hollywood movies illegally uploaded to YouTube in nine-minute chunks.

This is something less than a true fact, but something more than just a rumor. There is certainly a culture of the internet that privileges the short form, and culture is very, very strong.

Moreover, the web is not only virtual but also material, and while virtual space may be infinite, the ability of my wrists to withstand trackpad scrolling is not. Perhaps iPads and Kindles are more ergonomically sound than is my trusty MacBook (not perhaps: definitely), but there's still a physical limit to the amount of on-screen reading one can do. I don't think the internet makes people stupid, but I also don't think it's especially accommodating of long-form reading, at least not yet.

The web has two great strengths that lie in tension. One is the aforementioned availability of space. The other is ease of linkage: the web makes it very easy to travel around this vast space. (It's less good at marking stable places, keeping the ground from shifting.) So it's possible to stay in one place for a long, long time, because there are no technical obstacles to storing War and Peace online. But to do so militates against the other strong impulse of the web, transit -- what Anne Friedberg has pointed to as an arcade-evoking virtual motion through interconnected, visually captivating spaces.



The your-brain-on-teh-internets debates are very much reminiscent of modernist debates about distraction; there's that same fear that attention and the moral rectitude that it implies have been replaced by superficial and trashy pleasures, as Jonathan Crary has so persuasively documented. And yet, I recently had the pleasure of hearing my fourteen-year-old brother narrate the ins and outs of his most recent internet RP in excruciating detail, and it was I, the Ph.D., whose attention wandered (A LOT).* The internet narrative bested my attention span.

Is the internet the future of long-form publications, as the digital humanists would have it (because paper publishing is in the throes of death)? Or is it culturally and materially inimical to longer forms?

Perhaps on the internet the attention-distraction dialectic that Crary describes is simply operating in a way we're not yet used to discussing, offering us a new way to experience old anxieties about where an idle mind might go.

*While the RP itself bores me to tears, I absolutely love that my brother wanted to tell me all about it.

Crary, Jonathan. Suspensions of Perception: Attention, Spectacle, and Modern Culture. Cambridge, Mass.: MITP, 1999. Print.

Image: Passage Jouffroy, Paris. Wikimedia.

Tuesday, September 8, 2009

Teaching "grammar" versus teaching rhetoric

It always makes me nervous to say that I agree with Stanley Fish, but he makes some good points in his recent column on teaching composition. (Standard Fish-related disclaimer: I deeply disagree with some things he's said previously on the subject.)
“If we teach standardized, handbook grammar as if it is the only ‘correct’ form of grammar, we are teaching in cooperation with a discriminatory power system” (Patricia A. Dunn and Kenneth Lindblom, English Journal, January, 2003).

Statements like this one issue from the mistake of importing a sociological/political analysis of a craft into the teaching of it. It may be true that the standard language is an instrument of power and a device for protecting the status quo, but that very truth is a reason for teaching it to students who are being prepared for entry into the world as it now is rather than the world as it might be in some utopian imagination — all dialects equal, all habit of speech and writing equally rewarded.
Of course, Dunn and Lindblom are completely correct when it comes to the moral status of different sociolects: none is inherently better than any other. You'll get no argument from me there.

But Fish is right to point out the problem with importing the concerns of one discipline wholesale into another. That's what happens when linguists (or, on occasion, people who took one linguistics class in undergrad) make it a personal crusade to eradicate "prescriptivism" not only within their discipline, where that label is meaningful, but in the entire wide world, where it is less so. (Please note: this is not a description of all linguists by any means.)

Fish's point is related to one of my fundamental convictions about teaching writing, which is that it's not about teaching morals (good grief) or about language-as-it-exists-in-the-world (as in linguistics, where "prescriptivism" versus "descriptivism" is a meaningful matter of methodology). Rather, it's about teaching rhetoric. And rhetoric means manipulating language in all its plasticity, not observing it like a creature in the wild. That involves mastering particular stylized linguistic patterns, sometimes informally known by the name of "grammar," no Chomskian implications intended.

I also quite like the exercises Fish proposes:
I have devised a number of exercises designed to reinforce and extend the basic insight. These include (1) asking students to make a sentence out of a random list of words, and then explain what they did; (2) asking students to turn a three-word sentence like “Jane likes cake” into a 100-word sentence without losing control of the basic structure and then explain, word-by-word, clause-by-clause, what they did; (3) asking students to replace the nonsense words in the first stanza of Lewis Carroll’s “Jabberwocky” with ordinary English words in a way that makes coherent (if silly) sense, and then explain what they did, and how they knew what kind of word to put into each “slot.” (The answer is that even in the absence of sense or content, the stanza’s formal structure tells them what to do and what not to do.)

Notice that the exercises always come in two parts. In the first part students are asked to do something they can do easily. In the second part they are asked to analyze their own performance. The second part is the hard one; it requires students to raise to a level of analytical conscience the operations they must perform if they are to write sentences that hang together.
"Jabberwocky," by the way, is God's gift to teaching. I used it in a History of the English Language lecture last year. I can't tell you how my heart swelled with delight when a student proposed, based on the stem vowel, that "outgrabe" was a past-tense strong verb.

Listen up, NYT! More smart discussions of humanities pedagogy, please! Maybe someday if you work at it you'll even make it to humanities research...

Sunday, July 26, 2009

You know, I think this sci-fi novel has been written.



(Link.)

If you get any cognitive dissonance from the following sentence, then "man" is not gender-neutral: "Like other mammals, man breastfeeds his young."

Sunday, July 12, 2009

UC Budget cuts, yet again

UPDATE: Petition to the UC Regents

Mark Yudof released the proposed UC budget plan [pdf] on Friday.

The New York Times has an article that appears to be basically a press release from the UC Regents. No one is quoted who is not a UC administrator. Nice work, NYT!

A good blog on the issue is Remaking the University. There are also a number of documents up at the UCB English blog and at a site hosted by the faculty of UCLA.

The Chronicle also has an article [paywall], which laudably addresses both UC and Cal State, but gives little detail.

I'm wondering whether we'll be seeing bigger classes this fall. I sincerely hope not; students are not widgets, and it does not behoove us to cut corners in educating them.

This concerns me:
At the briefing, the current chairman, Russell Gould, announced creation of a new University of California Commission on the Future, which he and Mr. Yudof will head. The commission will consider how to maintain access, quality and affordability in a tough economic climate, what delivery models for higher education make the most sense, how big the university should be, and how to maximize traditional and alternative revenue streams.

“We’re going to have to change the way we do business,” Mr. Yudof said.

..."Delivery models"? What, we now "deliver" education, like it's shrimp lo mein? And here I thought we did these things called "research" and "teaching." Huh.


Friday, May 1, 2009

"The design of digital tools for scholarship is an intellectual responsibility, not a technical task."

Skg sent me this link to a very interesting article by Johanna Drucker. Drucker points out that even though humanities research depends on libraries and archives, faculty tend to take a bizarrely hands-off attitude toward the development of the library. Even as they are aware that print culture has changed and continues to change, they believe that librarians and other information specialists will take care of it -- and take care of it in a way that will suit scholars' needs.
The design of new environments for performing scholarly work cannot be left to the technical staff and to library professionals. The library is a crucial partner in planning and envisioning the future of preserving, using, even creating scholarly resources. So are the technology professionals. But in an analogy with building construction, they are the architects and the contractors. The creation of archives, analytic tools, and statistical analyses of aggregate data in the humanities (and in some other scholarly fields) requires the combined expertise of technical, professional, and scholarly personnel.
Just as problematic is the hand-waving approach so often taken toward technology: technology is the future; technology is magical; technology will fix things. It's as if the issue were whether to be pro-technology or anti-technology. But, as Drucker observes, different technologies do different things in different ways. The question is which technologies we will use, how, and to what ends.

Robin G. Schulze makes a similar argument about textual editing in her bluntly titled essay "How Not to Edit: The Case of Marianne Moore" [Muse]. On one level, the article is a detailed critique of Grace Schulman's edition of Moore's poems (Viking, 2003). The edition has some virtues, but logic and accuracy are not among them. But Schulze is also sounding the alarm about the assumption that editing will take care of itself, that it is a merely technical task to be farmed out to publishing houses (even commercial houses like Penguin/Viking) rather than a scholarly intellectual task. Such an attitude has real practical consequences for literary criticism and teaching.
Scholars ... need to speak out and up on the matter of bad editions — the louder the better — because if they don't, bad things can happen. Indeed, Moore scholars now face a serious danger. In October of 2003, Publishers Weekly forecast that Schulman's edition would "supplant Moore's 1967 collection for course assignments, making for steady sales over the long run" (79). Thankfully, this prediction has not yet come to pass, in part because Moore scholars have been reluctant to bring the book into the classroom. Penguin, however, is the publisher of both the Complete Poems [Clive Driver and Patricia C. Willis's "final authorial intention" edition] and Schulman's edition. Eager to boost sales of the Schulman edition, Penguin could well discontinue the Complete Poems of Marianne Moore. If that happens, scholars will literally have no reliable classroom edition of Moore's poems from which to work—a sad day indeed for Moore's literary reputation.

Similarly, Drucker argues, if we fail to value, support, and require the intellectual labor of designing tools for digital scholarship, we risk making all our work more difficult, or worse, conforming our work to tools designed to meet the entirely different demands of the corporate world. We're all familiar with something like this already, with Microsoft Word's (irritatingly persistent) defaults apparently set explicitly to baffle MLA or Chicago style.
Many humanities principles developed in hard-fought critical battles of the last decades are absent in the design of digital contexts. Here is a short list: the subjectivity of interpretation, theoretical conceptions of texts as events (not things), cross-cultural perspectives that reveal the ideological workings of power, recognition of the fundamentally social nature of knowledge production, an intersubjective, mediated model of knowledge as something constituted, not just transmitted. For too long, the digital humanities, the advanced research arm of humanistic scholarly dialogue with computational methods, has taken its rules and cues from digital exigencies.

[...]

Unless scholars in the humanities help design and model the environments in which they will work, they will not be able to use them. Tools developed for PlayStation and PowerPoint, Word, and Excel will be as appropriate to our intellectual labors as a Playskool workbench is to the chores of a real plumber.


I think Drucker and Schulze are both right about the need to value (and promote, and fund) the intellectual work that must go into making texts accessible. Both seem to propose -- rightly, I think -- that the way to do it is for academic departments to take over some of this work. But what's difficult about that solution is that departments have fewer and fewer tenure-track/tenured faculty, and thus barely have the resources to cover their current undergraduate courses. In other words, these articles could be misinterpreted as a call to shift resources away from existing specialties toward the editing and dissemination of text, when really departments need to hire -- and be authorized to hire -- scholars who specialize in these areas.

* * * * *

Another addendum to recent posts: Marc Bousquet's response to Mark Taylor is excellent:
But sure, you’re right. The problem is that we need to end tenure. When we end tenure, the market will insure that these folks are paid fairly, that persons with Ph.D.’s will be able to work for those wages.

Oh, crap, wait. As anyone actually paying attention has observed, we’ve ALREADY ended tenure. With the overwhelming majority of faculty off the tenure track, and most of teaching work being done by them, by students, and professional staff, tenured appointments are basically the privilege of a) a retiring generation b) grant-getters and c) the candidate pool for administration.

How’s that working out? Well, gee, we’re graduating a very poor percentage of students. Various literacies are kinda low. We don’t have a racially diverse faculty, and women, especially women with children, are far more likely to have the low-paying low-status faculty jobs.

Nice! Let’s get more of that!

Wednesday, April 29, 2009

A brief addendum, in case anyone was wondering whether mocking a grad student's diss in the NYT could really be problematic.

In the last two days this (not especially popular) blog has received hits from the following search strings:

medieval theologian Duns Scotu [sic] citations

duns scotus use of citation

Duns Scotus citations thesis

duns scotus colombia [sic] footnotes

duns scotus citations (five times)

how duns scotus used citations

how the medieval theologian Duns Scotus used citations

dissertation medieval theologian Duns Scotus used citations

duns scotus citations dissertation (twice)

duns scotus citations taylor "new york times"

duns scotus citation citations

duns scotus citations thesis

duns scotus used citations

duns scotus citation columbia

dissertation duns scotus citations

columbia dissertation Duns Scotus citation ph.d.

So now I'm really curious. How did Duns Scotus use citations? I seriously kind of want to read this dissertation. I hope somebody publishes a monograph on this topic in the near future. Listen up, Oxford UP: there is public interest.

Incidentally, I am currently writing something on how Marianne Moore cites Duns Scotus. No lie.

Monday, April 27, 2009

Citations: Useless and Boring

Mark Taylor wants to end the university as we know it. I was thinking we were about due for another call for radical change from a tenured professor. I hadn't heard one since MLA. Taylor's essay has the unusual virtue, at least, of wanting to see the restructuring happen from the top down, rather than placing the burden of transforming the profession on disenfranchised grad students. But Taylor's proposals are incoherent.

He wants interdisciplinarity (radical!), but he confuses interdisciplinarity with adisciplinarity.

He wants to avoid the pitfalls of "overspecialization," but simultaneously advocates for hyperspecialization (one university specializing in French studies, another in German).

He acknowledges that universities could never get along without cheap grad student teaching, but also argues that there is "no market" and "no demand" for new PhDs. This is clearly incoherent. Given the choice -- and this is reflected in college ranking algorithms everywhere -- having real, tenured professors do the teaching is always preferred. The point of going to a four-year school in the first place is that it's considered best to learn from people who are active in the field, who are institutionally supported in doing research.

The fact is that there is a huge demand for trained scholars, but like the WalMart nation we are, we want them as cheaply as possible. The difference is that nobody kids themselves that WalMart's cheap plastic goods are equivalent to artisan-made products, whereas universities insist that farming their teaching out to inexperienced-by-design, officeless, unsupported grad students is the intellectual equivalent of giving students small seminars with professors.

But I'm even more bothered by Taylor's cavalier treatment of research being done by young scholars in his own field. So far as I can tell, it's a byproduct of a deep suspicion of specialized knowledge and possibly scholarship itself. Taylor writes,
Unfortunately this mass-production university model [proposed by Kant] has led to separation where there ought to be collaboration and to ever-increasing specialization. In my own religion department, for example, we have 10 faculty members, working in eight subfields, with little overlap. And as departments fragment, research and publication become more and more about less and less. Each academic becomes the trustee not of a branch of the sciences, but of limited knowledge that all too often is irrelevant for genuinely important problems. A colleague recently boasted to me that his best student was doing his dissertation on how the medieval theologian Duns Scotus used citations.
Taylor represents specialization as per se tending to uselessness -- as if knowledge were not an end in itself. So Taylor decides to make a joke out of some young scholar's work, using his position as a tenured professor at Columbia -- the kind who can get Op-Ed pieces into the NYT -- to ridicule his own field rather than attempt to inform the public about what it really does. Taylor pretends to be sympathetic to grad students, but to ridicule someone's dissertation in the pages of the New York Times is nothing but hostile. And, if he is talking about a real colleague's real student's real dissertation, unethical.

Taylor plunks the description down there as if it were supposed to be obvious that a dissertation on Duns Scotus's use of citations is trivial. But is it that there is something inherently trivial about this student's dissertation, or is it that Taylor himself is in the wrong gig? When a professor of religion cannot imagine why a dissertation on Duns Scotus might be important or useful, and when a scholar thinks that citations are necessarily trivial, then there's trouble. I'm not a specialist in medieval theology, but offhand, and as someone who studies modernism, I know that citation practices involve questions of authority and deference, intertextuality, bibliographic/genetic information about the author's sources, and philosophical positions on presence/absence that are probably fairly relevant to theology. I don't know enough about the project to evaluate it, but there's nothing about that description that should give anyone license to dismiss the project out of hand, unless that person is already hostile to the idea of specialized knowledge per se.

I get that impression, too, from another shot Taylor takes at citation in his recommendation that we abolish traditional dissertations in favor of multimedia projects for the post-print era. "[T]here is no longer a market," he writes, "for books modeled on the medieval dissertation, with more footnotes than text." Actually, there never was such a market, which is why we have university presses. The idea is that scholarly value might not be the same thing as market value.

But Taylor's evidence that academic books are arcane and useless is apparently that they come with a scholarly citational apparatus, which Taylor gently exaggerates for maximum mock-value: "more footnotes than text." Can he be serious? Citations are part of academic standards of intellectual honesty (quite a different thing from intellectual "property"). Citations acknowledge the research that one has done, and help others do their own research. Citations refer other scholars to one's sources and expand on points of contention in the field. Citations are fundamental to good research, but Taylor only alludes to citations as jokey stand-ins for what he sees as the problem with the university as we know it. There seems to be something deeper at work here than the usual discontent with How Things Are.

Calls to reform academia are fairly common. I've made some myself. Inherent in the genre is a tension between wanting to do away with large and entrenched structures and an awareness that much of what is good, and indeed foundational, about academia has been produced by those same structures (like, say, disciplinary methodologies). Taylor registers those tensions in moments like his proposal for interdisciplinary problem-based research groups.
A Water program would bring together people in the humanities, arts, social and natural sciences with representatives from professional schools like medicine, law, business, engineering, social work, theology and architecture. Through the intersection of multiple perspectives and approaches, new theoretical insights will develop and unexpected practical solutions will emerge.
A Water program sounds like an interesting, nay, even exciting idea. But as Taylor himself acknowledges, this would hinge on bringing together multiple disciplinary perspectives, not eradicating disciplinarity. That means you couldn't simply "[a]bolish permanent departments," as Taylor proposes at the beginning of the section, because methodological perspectives do not happen in a vacuum. I suspect more and more that the Humanities Are Dead (TM) essay cannot be coherent.

It is, on the other hand, perfectly possible to write such an essay without mocking a colleague's grad student in the pages of the New York Times.

Sunday, November 30, 2008

Communicable diseases

I once heard a most intriguing talk by Jim Mussell on "The 'Very Proteus of Disease': Media, Materiality, and the Flu in 1890s London." He tracked the reception of influenza epidemics and of the newspaper accounts of those epidemics, the one causing the other and (allegedly) vice-versa, as people eager to be the first to know the news developed psychosomatic runny noses.

Now I read in the NYT that
a few weeks ago, Google deployed an early-warning service for spotting flu trends, based on search queries for flu-related symptoms.
Doubtless this will be extremely useful for those of us who like to pre-emptively stock up on orange juice. The article is at least ostensibly about privacy, and a researcher adds,
“The new information tools symbolized [sic] by the Internet are radically changing the possibility of how we can organize large-scale human efforts,” said Thomas W. Malone, director of the M.I.T. Center for Collective Intelligence.

“For most of human history, people have lived in small tribes where everything they did was known by everyone they knew,” Dr. Malone said. “In some sense we’re becoming a global village. Privacy may turn out to have become an anomaly.”

We can use our brand-new media to track the flu, and are turning into what you might call a "global village." In other words, all your McLuhan are belong to us.

Tuesday, November 25, 2008

New BAM/PFA

There's an article in the NYT just now on the proposed Berkeley Art Museum. I quite like the new design, although I don't see what the author, Nicolai Ouroussoff, has against the current building (apart from the fact that it's falling down). He writes,
Standing on a rough commercial strip at the campus’s southern edge, the old building is still marred by the big steel columns that were installed after the quake to support its cantilevered floors. Its rough, angular concrete forms and oddly shaped galleries are awkward settings for art.
I beg to differ: its oddly shaped galleries are awesome. I'm also not sure what Ouroussoff means by "a rough commercial strip." Surely he doesn't think Bancroft and Bowditch is a rough neighborhood. But then, he does come out with things like
On a local level, the museum could help break down the divide between the ivory tower at the top of the hill and the gritty neighborhood at the bottom.
Gritty, gritty Shattuck Ave.

One thing I think Ouroussoff did get right is this, the very first line of the article:
I have no idea whether, in this dismal economic climate, the University of California will find the money to build its new art museum here.

Good question. Since, you know, we've fired the lecturers.

Tuesday, November 18, 2008

Apocalyptic narratives about narrative

NYT: MIT's Media Lab Will Study Film Narrative in Center for Future Storytelling.

In a telephone interview last week, Mr. Kirkpatrick said he might take a cue from Al Gore, who used a documentary film, “An Inconvenient Truth,” to heighten concern about global warming. Mr. Kirkpatrick is now considering an alarm-bell documentary of his own, he said.

Its tentative title: “A World Without Story.”


As Arcadia put it when she emailed me the link, "NYT or Onion?"

Thursday, November 13, 2008

Austin Grossman has a very interesting review of Maze of Bones.
It’s a story about people born into the most privileged family in the world, who then set out to become the most important people in history. Whatever happened to just owning your own chocolate factory?