Thursday, October 10, 2013
Won't somebody let this child into the cage?
Cross-posted to the course blog for my junior seminar Modernism and Childhood.
-----
Another one for the "government and cuteness" theme:
Via @Reddit, the absolute saddest #shutdown photo you will see pic.twitter.com/6KhLdZeBOs
— Alex Fitzpatrick (@AlexJamesFitz) October 10, 2013
Think about this photo again when we read Curious George.
What does the tweeter—journalist Alex Fitzpatrick—seem to think is the rhetorical force of this photo?
It's "sad"; the toddler is sad; the toddler loves animals, as evidenced by her or his indeterminate animal-ears hood, and wants into the zoo; the toddler can't go in because of the government shutdown.
Of course, it's completely plausible to think that a toddler loves animals. You should see my niece looking at a turtle; she could not be more psyched.
But back up. Why would wearing an animal-ear hood translate into evidence of loving animals? After all, the toddler didn't wobble down to Baby Gap and pick it out him- or herself. It was an adult who decided that this child's love of animals should be manifested as an identification with the animal.
The child is trying to get into the zoo. To see animals? Or to be an animal?
The striking iconography of metal bars here makes the child look caged, citing what we know a zoo to be: a place where animals are kept in cages. The cages are carefully designed and controlled environments meant to emulate the animals' natural habitats and keep them happy, but they are cages all the same. The child is dressed as an animal. The child wants in, and the bars are keeping her or him out. The child cannot read the sign, prominent on the right, that explains why. For that matter, the child cannot vote for members of Congress.
The sadness of this image is the same as its cuteness: the child's desire is frustrated by the same adult forces that iconographically stage her helplessness and her kinship with the animals she is trying to see.
Friday, September 20, 2013
This is a far too long response to a post by Adeline Koh.
--
I agree with Ted's point that DH is a social category more than anything else, but, as he acknowledges, such social categories are consequential. I've argued before that the search for "the most digital digital" is basically intellectually doomed. But I don't think that's the question Adeline was asking—I think she was already asking the social question. (Matt Kirschenbaum gave a social answer.)
In most cases in the humanities, there isn't that much disciplinary boundary-policing; people usually care more about whether the work is good (for what it is) than whether it's "modernist" or "eighteenth century" (a century reputed to be quite long!) or, to use Ted's examples, Marxist or New-Critical. Thus "[t]he ideal PMLA essay exemplifies the best of its kind, whatever the kind." To the question of "what kind of scholarship is this?" PMLA literally says "whatever"!*
Weirdly, though, when it comes to digital humanities, the digitalness (how digital?) matters a lot. In some quite consequential institutional settings (hiring, fellowships and grants, tenure), what kind? matters for things marked "digital" where, for other things, the operative question would be how good? (for widely varying definitions of "good," of course).** It's nice to say, "focus on the scholarship, not on whether it's DH" (#4 above). But there's a reason people focus on whether it's DH: largely through the urging of digital humanists themselves, digital work has come to be seen as warranting an entirely different evaluative system from "traditional" scholarship, so that the question of how good? depends on the question of what kind? in the first place. So in practical terms, "how digital?"—philosophically incoherent as the question is—often serves as a proxy for "how good?," and even if we think it shouldn't matter we've set it up so that it does (and not entirely without reason).
Matt's social answer to the "how digital?" question—tautological or recursive, depending on how we prefer to read it—is that "It is DH if it assumes value within a community of practice that 'does' DH."
But Adeline's question was posed in the specific context of putting together an introduction to DH for people who need one—who have heard of this "digital humanities" thing, do not [think they] do it, and would like to. If they're in "a community of practice that 'does' DH," they're not aware of it. Adeline's task is to inform them of how they might create or join such a community of practice. Under what circumstances would creating an online journal constitute such a thing?
So I think Ted's right; it's a social question. But it's a social question that matters for social reasons that can't, I think, be disavowed without abdicating responsibility for the institutionalization that was so ardently fought for (resulting in an "eternal September" or "DH II" that long-time practitioners are now declaring uncomfortable—y'all, what did you think was going to happen?).
Is an online journal "DH"?
I think Matt's social answer to this social question probably comes closest to the mark. But I also think what he's describing is unfortunate. It would be better, I think, to examine why the social question—how digital?—keeps mattering, so we can figure out how to make it matter less.
--
* But: as far as I know, PMLA is not set up to publish a database.
** I'm glossing over some local circumstances of boundary-policing—like, we all know that C19 has a vision of American lit scholarship that's different from ASA's or ALA's.
--
Tuesday, September 17, 2013
Wasting time on the internet: a syllabus
This is a syllabus in progress, imagined as part writing workshop, part American studies course on aesthetics. Comments and suggestions are welcome.
What I Did For Love: Taste, Evaluation, and Aesthetics in American Culture
“I don’t know art, but I know what I like,” goes the disclaimer. In this writing-intensive part-workshop, part-seminar, we will seek to unpack the relationship between “art” and “what I like” by examining a variety of cultural objects together with accounts of “taste.” What are the uses of an art that nobody likes? Could “annoyance” be an aesthetic principle? What is the role of money in taste? What are the ethics of aesthetics? Under what circumstances is an aesthetic pleasure “guilty”? When should the appreciation of art works be a matter of disinterested judgment, and when a matter of passionate engagement? Does “love” blind? What is the difference between a “fan” and a “critic”? What are the affordances and limits of the “formulaic” and the “generic”?
Four weeks of this course will be devoted to workshopping students’ critical writing, examining the roles of description, praise, blame, analysis, and enthusiasm in writing about culture. Students will also maintain a course blog. For the final assignment, students are encouraged to pitch their writing to an appropriately chosen publication.
Week 1 | Introduction: Aesthetics
John Keats, “Ode to a Nightingale”
Robert Frost, “Stopping by Woods on a Snowy Evening”
Peter Coviello, “Talk, Talk”

Week 2 | Beauty and the sublime
Immanuel Kant, from Critique of the Power of Judgment
Thomas Nagel, from The View from Nowhere
Short exercise: choose a cultural object to describe as plainly as possible. About 500 words.

Week 3 | Taste and class
Clement Greenberg, “Avant-Garde and Kitsch”
Pierre Bourdieu, from Distinction
Thorstein Veblen, from The Theory of the Leisure Class
Barbara Herrnstein Smith, from Contingencies of Value
T. S. Eliot, The Waste Land; selections from Old Possum’s Book of Practical Cats
Andrew Lloyd Webber et al., selections from CATS

Week 4 | Essay 1: Describe some piece of culture (novel, film, painting, poem, music video, etc.) that you love, and that you also think is good. (These are two different things.) Explain why it is that you love the piece, what it is that makes it good, and how you can tell the difference (and under what circumstances you can’t). Be sure to explain what it is that makes art good in general—you don’t need to advance a fully developed theory of aesthetics, but you do need to unpack your assumptions as much as you can. Have an argument. This should be around 3,000 words.

Week 5 | Difficulty
William Butler Yeats, “The Fascination of What’s Difficult”
Josef Albers, Homage to the Square: Dissolving/vanishing (1951)
Marianne Moore, “An Octopus”
Sianne Ngai, “Merely Interesting”
Leonard Diepeveen, from The Difficulties of Modernism
Lawrence Levine, from Highbrow/Lowbrow: The Emergence of Cultural Hierarchy in America
Rosalind Krauss, “Grids”

Week 6 | “Guilty pleasures,” pop culture, and authenticity
Céline Dion, Let’s Talk About Love (1997)
Carl Wilson, Let’s Talk About Love: A Journey to the End of Taste
Sarah Blackwood, “Dance Dance Revelation: On So You Think You Can Dance”
“I’m Not Here To Make Friends” supercut [YouTube video]
Mallory Ortberg, “Oscar Wilde and Walt Whitman Probably Had Sex Once”
Abigail De Kosnik, “Fandom as Free Labor,” in The Internet as Playground and Factory, ed. Scholz

Week 7 | Popular culture, popular criticism
Walter Benjamin, “The Work of Art in the Age of Its Technological Reproducibility”
Theodor Adorno and Max Horkheimer, “The Culture Industry”
Caleb Smith, “Say Hello to My Little Friend”
Mary Oliver, selected poems
Short exercise: write a piece of fanfiction, about 1,000 words, in the setting of your choice.

Week 8 | Gender and “the popular”
Andreas Huyssen, “Mass Culture as Woman”
Rebecca Black, “Friday” [YouTube video]
Dana Vachon, “Arms So Freezy: Rebecca Black’s ‘Friday’ as Radical Text”
Rae Armantrout, “Why Don’t Women Write Language-Centered Poetry?”
Eve Kosofsky Sedgwick, “Queer and Now”; “Jane Austen and the Masturbating Girl”
Eve Kosofsky, “Curl Up and Read” (Seventeen, 1964)
Short exercise: Make the case that some cultural object is a “remake” of another, earlier one (for example, that Pixar’s Toy Story is a remake of Disney’s Pinocchio). Be honest about the ways in which the claim does not hold up. In addition to noting similarities or lines of influence, you should explain what we gain from understanding the later object as a remake of the earlier one. 500–1,000 words.

Week 9 | “Formulaic”
Edgar Allan Poe, “The Philosophy of Composition”; “The Raven”
Mark Twain, “Fenimore Cooper’s Literary Offenses”
Janice Radway, from Reading the Romance
Smart Bitches, Trashy Books reviews: “The RealDeal by Lucy Monroe”; “Tell Me Lies by Jennifer Crusie”; “Skies of Gold by Zoe Archer”
Tvtropes.org, “Elves versus Dwarves”; “As You Know”
Lili Loofbourow, “Just Another Princess Movie” [rev. of Brave]
Christian Bök, Eunoia

Week 10 | Essay 2: Choose a piece of art and viciously pan it. Your critique should be utterly devastating, which is to say that you should be able to persuade your reader that this piece is a blight on humanity, and not merely that you are a mean-spirited person. This will be more effective if you resist choosing an easy target. 2,000–3,000 words.

Week 11 | Cuteness and commodification
Sianne Ngai, “The Cuteness of the Avant-Garde”
Gary Cross, from The Cute and the Cool
“Many too small boxes and Maru” [YouTube video]
“Nyan Cat [original]” [YouTube video]

Week 12 | Essay 3: Review some piece of culture that was recently produced—say, since January 2012. Give your reader a fairly thickly textured sense of what this piece is like, and explain what its successes and failures are. Once again, be sure to unpack what it means for something to “succeed” (in any register). What is the historical, cultural, or aesthetic milieu in which this piece is ideally legible? Make a point. This should be around 3,000 words.

Week 13 | Cool
William Gibson, Pattern Recognition
Alan Liu, from The Laws of Cool
Michael Szalay, from Hip Figures
Janelle Monáe, “Tightrope” [YouTube video]

Week 14 | Inappropriate/appropriative
Chinua Achebe, “An Image of Africa: Racism in Conrad’s Heart of Darkness”
Justine Larbalestier, “Ain’t That a Shame”
Fanlore Wiki: “Race and Fandom”
Mitali Perkins, “Straight Talk on Race: Challenging the Stereotypes in Kids’ Books”
Malcolm Harris, “The White Market”
Nancy Sommers, “Revision Strategies of Student Writers and Experienced Adult Writers”

Week 15 | Conclusions
Essay 4: Revise your review for publication in a venue of your choice. It may be print or online. When you submit this assignment to me, you should also submit a copy of the submission guidelines for this venue (to which your revised review should adhere) and a rationale (about 500 words) for choosing this publication. You are encouraged to actually submit the review to the publication you have chosen. (You might be interested in this.)
Tuesday, September 10, 2013
On DHThis and critiques thereof
This was turning into The Infinite Comment, so I am posting here instead of on Whitney Trettien's post on DHThis, which is very worth reading. The bulk of this post ends up being about what "self-promotion" means under neoliberalism's compulsory self-commodification, which is a complete tangent, so I guess that's another reason not to dump it at Whitney's.
-----
I am in general agreement with Whitney's main point that a reddit-like system has the potential for serious problems, and that reliance on "the community" to self-regulate has not worked out terribly well in the past. Stephen Ramsay and Trevor Owens asked on Twitter why DHThis didn't just use the existing DH subreddit. The obvious answer is that reddit is a terrifying cesspool of misogyny, racism, and assholery, and that is a very good reason.
But the obvious question that raises is: what structural safeguards will prevent DHThis from becoming a terrifying cesspool of misogyny, racism, and assholery? I have great faith in Adeline and Roopsi and the DHThis team as stewards of the project, but it's an important question. When I had my own dismaying interaction with JDH (which was, to be clear, in many ways also a very good interaction), the problem was precisely an overreliance on crowdsourcing. As Matthew Ciszek recently tweeted, "Crowdsourcing selection kills diversity. More diverse materials typically less popular." I'll look forward to seeing what procedures DHThis implements to maintain a safe, productive, and genuinely diverse space. It's early days, and there's time for this project to develop.
I would offer a few points of disagreement as well.
First, I disagree with the suggestion that recent debates have been "petty quarrels." They have stakes for someone, and deciding which quarrel is petty and which is substantive depends on one's sense of security vis-à-vis the point of contention.
And second, I question the "self promotion" description, for three reasons.
1. I really don't see how this project is any more self-promoting than any other project rollout—say, One Week One Tool. Even the inclusion of a DHPoco category doesn't seem heavily self-promoting to me. Maybe it should have been called "Postcolonialism" instead?
2. Supposing we were to grant that the style of rollout was self-promoting (rather than project-promoting), what bearing would that have on the quality of the project? This, to me, is unclear. As a general rule, I think the question of intentions hinders evaluating effects.
3. There are a lot of mixed messages about self-promotion under neoliberalism, and women and people of color get them most of all. Like makeup ads that urge you to cover your face in allergenic foundation to get that "natural glow," social media—which I think many people will agree have been central to recent DH formations—exhort an engagement that is "genuine," but which will also "get your voice out there"; ideally your internet presence will promote you through the effacement of its own promotional aspect. Merely having an internet presence is a form of "self-promotion"; yet it is also, importantly, a place of genuine (not just "genuine") engagement, a part of people's lives, and in many cases, not optional.
This critique has precisely been leveled at DH in recent years: that its webcentricity renders it "cliquish," even though blogs and Twitter are (mostly) public. Even for practitioners at the center of DH, the "second shift" of social media can be burdensome. The counterargument—not an empty one—is that these media offer a horizontal means of (genuine, not "genuine") engagement that cuts across existing hierarchies. Blogs and social media are currently central to DH, in part for the very good reason that digital publishing and pedagogy, through precisely some of these media (Tumblr and Twitter, but also Omeka and CommentPress), are a brave new terrain for DH (Stephen Ramsay's and David Golumbia's "DH II"), and have facilitated its recent expansion in all manner of ways. JDH and DHNow rely centrally on blogs and social media, which is why they never caught wind of #transformdh's important ASA panel on embodiment.
So who is "self-promoting"? Everyone probably remembers how, every time VIDA issues its count, editors from mainstream pubs wring their hands and say that women just don't pitch to them often enough; what can they doooo? Famously, the editors at Seal Press, a small feminist press, performed the same shopworn handwringing ritual about authors of color a few years ago. It was not impressive. Women and people of color are constantly admonished for failing to "put themselves out there" often enough. But when they do, all too often they are told that they are unbecomingly "self-promoting," and nobody should reward that! You kind of can't win.
I don't at all want to suggest that Whitney is proposing a double standard here, or singling the DHThis team out—I think most of us are turned off by what seems to be obvious self-promotion, wherever our thresholds for detecting it may lie. I myself have been known to zing people on the self-promotion front. But I do think that the question of self-promotion, in addition to being a language of intention that tends to confuse the issue (see 2 above), is a constantly moving target. For that reason, I don't think it's nearly as important a criterion as the central objection Whitney raised about the reddit-like voting structure of DHThis.
I recognize the irony of spending an outsize amount of space on one of Whitney's avowedly lesser points, only to conclude that it is a lesser point! In a way, it's completely derailing of me to even bring it up. And yet, I also wanted to unpack the substance of my reservations about "self-promotion." Somehow its unimportance seems important.
I look forward to seeing how DHThis works, and how it will be shaped in the future by concerns like the ones Whitney raised.
Saturday, August 10, 2013
Tacit
One thing I wish to observe about the UVa Scholars' Lab's upcoming "Speaking in Code" symposium is this.
A call for diverse participation rings hollow when the lineup of invited speakers is 100% white and cis male. I can think of some things besides "impostor syndrome" that might keep a developer from an underrepresented group from applying.
It is doubly problematic when "tacit knowledge" has been used in DH (idiosyncratically; see Collins and Polanyi) to represent software development as a minority culture imperiled by "dominant, extravagantly vocal and individualist verbal expressions." This is an ideological reversal of the fact that software development is a prestige domain both within DH and in contemporary U.S. culture at large and that, far from being a marginalized culture, it is marginalizing, insofar as it is structurally exclusionary of women and racial, ethnic, and sexual minorities.
Saying "you are welcome here" (as a student or participant but not as a leader or invited speaker) may ameliorate this structural exclusion, but not much.
I see the demystification of "tacit knowledge" as a salutary project, and I wish this symposium all success. But this is not a model for inclusivity. We can and should do better.
-----
Collins, Harry M. Tacit and Explicit Knowledge. Chicago: University of Chicago Press, 2010.
Polanyi, Michael. Personal Knowledge: Towards a Post-Critical Philosophy. Chicago: University of Chicago Press, 1958.
———. The Tacit Dimension. Chicago: University of Chicago Press, 1966.
Friday, June 21, 2013
What I know about Sis Willner
I'd never heard of Sis Willner (a.k.a. S. W. Philbin, a.k.a. Dorothy Dearborn), a Chicago poet, lyricist, socialite, and gossip columnist, until yesterday, when Paul Gehl gave me and a bunch of other C20ers a whirlwind tour of the modern print collection at the Newberry Library.
Paul put Willner's first two books, A Lady Thinks (Black Archer, 1930) and A Gentleman Decides (Black Archer, 1931), out for us as examples of early C20 Chicago small press printing. Or rather, A Gentleman Decides was placed in a series of Sandburgiana for its (very weird) preface by Carl Sandburg. (He calls her "a hard-boiled virgin." What?) Paul intimated that Willner was "justly forgotten," and I can see a way that that's true (in the sense that any account of the period will leave people out, and better Sis Willner than, say, Langston Hughes). But paging through the books left me snorting with laughter—it's funny, sarcastic, sometimes embarrassing middlebrow light verse. Despite not caring for the comparison, Willner writes in a breezy Dorothyparkeresque vein, often about gender and romance.
This is the kind of encounter that would leave anyone a-googling.
In his memoir The Right Time, the Right Place, random meeter of famous people Charles Wohlstetter describes Willner thus:
During the [Second World] war, whenever people traveled from coast to coast, there was a seven-hour stopover when the Twentieth Century Limited arrived in Chicago. While waiting for the Super Chief to take them the rest of the way to Los Angeles, regular tripsters would meet in the Pump Room of the Ambassador West Hotel. The doyenne of that table was Sis Willner, a breezy society columnist and a dear friend of mine; she wrote under the nom de plume Dorothy Dearborn. The attendees at her table included famous directors and producers, playwrights and novelists. (122)
Wohlstetter goes on to describe her marriage to an apparently pretty flaky financier named Phil Philbin, who, briefly jailed for crossing the SEC, got in the habit of cheerfully greeting new acquaintances with, "Hello, I'm Phil Philbin. I've been in the can" (Wohlstetter 123). After their marriage they moved from Chicago to Beverly Hills.
I also cheated a little and looked on the online Newspaper Archive (thanks, library VPN!). My Firefox was being fussy, so I didn't check very thoroughly, but Willner definitely had a fan in Jefferson, MO journalist Margaret Morris Pinet, who wrote about her first two books of poems in the Daily Capital News and Post-Tribune.
In a November 22, 1931 article, Pinet describes the "Middlewest Society Girl Noted as Modernist Poet":
A daughter of a well known and wealthy Chicago family, undoubtedly one of the youthful sophisticates that have distressed an older generation, Sis Willner scrutinizes life of the present day and cpatures [sic] its laughter, tears, and purpose. Those who read the poetry from her facile pen will agree that "this girl who writes vividly" [quoting Carl Sandburg —N.C.] indeed faces a future as one of the "best light verse queens of the U.S. A." With more honors to the great middlewest which claims her as a daughter!
An earlier, July 5, 1931 notice, in a Pinet column on various Chicago doings, reveals some further personal information:
Sis is a person of charm and many ideas that are her own. She loves the colors of turquoise and black. And she wears the combination almost exclusively. Her apartment at the Shoreland is done throughout in this blue[.] Furnishings are black and the effect is startling. A young modernist whose verses show great promise. Sis may look to a future filled with literary success.
If the Shoreland Pinet alludes to is the building I'm thinking of (and I imagine it is), then Sis Willner lived at the Shoreland Hotel back in its fancy hotel days (it has subsequently been a pretty crusty/weird University of Chicago dorm and, most recently, an apartment complex).
The final Sis Willner fact my cursory investigation turned up is that she wrote the lyrics for a song sung by Frank Sinatra and (wait for it) the Modernaires, titled "Why Remind Me."
WorldCat locates copies of the sheet music at the University of North Texas and... the British Library. Go figure.
***
Select Bibliography
Pinet, Margaret Morris. “Margaret Morris Pinet Writes About Chicago.” Daily Capital News and Post-Tribune. July 5, 1931.
———. “Middlewest Society Girl Noted as Modernist Poet.” Daily Capital News and Post-Tribune. November 22, 1931.
Sinatra, Frank, and Modernaires (Musical group). "Why Remind Me." U.S.A.: Columbia, 1949.
Willner, Sis. A Gentleman Decides. Chicago: Black Archer Press, 1931.
———. A Lady Thinks. Chicago: The Black Archer Press, 1930.
———. The Morning After. Chicago: Black Archer Press, 1933.
Wohlstetter, Charles. The Right Time, the Right Place. Hal Leonard Corporation, 1997.
Friday, April 26, 2013
Goldmines
One of the really wonderful things about the Beinecke Library's Beyond the Text symposium this weekend is the way in which it weaves together archival and poetic concerns. Tara McPherson was here earlier this week to give a talk in WGSS, and I found it so refreshing when she talked about the great students in the practice-based Ph.D. program at USC: "we cheat a little," she said, "because most of them come in with MFAs, so they have certain skills." It's so rare to hear that kind of training spoken of as a good thing. But it is a good thing. This morning Lori Emerson cited studying the Emily Dickinson archive with Susan Howe as her primary training for the media archaeology work she now does at Boulder.
In the second panel, on sound archives (Al Filreis, Jason Camlot, and Steve Evans), some conversation emerged—some spoken, some in the Twitter backchannel—around labor. It began with Jason Camlot and Al Filreis's discussions of workflows, which were largely "DIY" (I get the feeling Al spends a lot of time digitizin' away) and/or supplemented by grad or staff labor. (Steve Evans ribbed Al: "You guys aren't purely DIY—come on!")
These issues emerged more explicitly in the Q&A. One librarian noted that libraries' slowness often has to do with the cost of digitization, not in terms of equipment but of labor (because unlike the less formal structures Al had in place, libraries pay for this work). Lori noted that "DIY" often meant there was no one to whom to pass the torch when one person needed to step down, and that had much to do with the fact that this labor was uncompensated. As Jason rightly observed, this is a sustainability problem. Clearly this work is a labor of love for many people (in Al's case, visibly and wonderfully), but that does not then render it not labor.
The general consensus seemed to be that poetry and sound archives necessarily run on uncompensated labor, and that the basic question is how to get more of it. (Crowdsourcing came up a number of times.)
Jason Scott of the Internet Archive pointed out the incredible archival resource that the Internet Archive has been for a long time and continues to be, including for poetry materials (such as Naropa's archives). But this bounty was also framed in terms of volunteer labor, and in particular, the volunteer labor of young people.
This seemed to me to be a very problematic premise, especially the assumption that it's not only fine but a good idea to have young people do unpaid work—that unpaid work is somehow the natural province of youth. It's true that young people are often enthusiastic, want to learn, and have time to contribute. They may very well like doing the work. At the same time, the naturalization of unpaid or underpaid youth labor should be resisted ("it's mostly high school students": the {false} rationalization of a low minimum wage), as should the naturalization of not paying for cultural preservation work. Jason's responses to me are in the Storify, but one of them struck me as particularly interesting.
@anarchivist @elotroalex @ncecire I like the idea of asking people to run a .wav digitizer on a record is like making children mine gold
— Jason Scott (@textfiles) April 26, 2013
Can we take a moment to talk about mining?
Mining is perhaps now thought of as the quintessential poorly paid, dangerous, exploitative labor. There's a long history of labor conflict around mining. Getting the stuff out of the ground is a really unsavory process, but then, we also really want the stuff.
Mining is also the go-to metaphor for another often unsavory yet much-desired practice, the transmutation of "data" into usable "information." Mainly we have algorithms, overseen by humans, do the labor. Sometimes that's too hard, and you'd be better off having a human inside that machine; in that case you use Mechanical Turk. And sometimes the only thing of value that you need is labor, in which case we "crowdsource," mining the laborers themselves. The crowd's a goldmine. "So the hunter becomes the hunted, migrating from a situation in which users farm for gold, to a situation in which users are being farmed" (Galloway 137).
Is crowdsourcing "like" mining? As always with likeness: in some ways yes, in some ways no. This is less a matter of "exact resemblance to exact resemblance" than of the difference spreading. Is volunteering to digitize poetry sound recordings "like" mining? Not intrinsically—but if neither is paid, or paid sufficiently, then they are "like" in that sense, which is the only relevant sense for the comparison.
The canonical scene of mining strikes is actually not gold mining. The famous labor strikes repressed by Thatcher's government were mounted by coal miners. But gold mining looms large in the digital imagination for another reason: the phenomenon of simulacral primitive accumulation known as "gold farming." In the game World of Warcraft, the low-skill, time-intensive acquisition of "gold," which can then be sold for real currency to wealthier players, is famously associated with exploited Chinese workers, including prisoners.
As Alex Galloway observes in The Interface Effect, the figure of the Chinese gold farmer—and its installation as a racial other in ways that, as Lisa Nakamura has shown, conduce to deeply racialized social formations within the world of the game—is as powerful as ideology as it is problematic as a labor form. It serves to fashion exploitative digital labor as not-us.
And I would suggest that the eager twenty-year-old with a laptop functions similarly; in that way, too, digitizing sound archives is "like" gold mining, or rather "gold farming." Like the hypothetical minimum-wage high schooler whose income serves as pocket money, non-essential and destined for "fun," the youthful volunteer, who may very well intrinsically enjoy the work, authorizes a category of labor exploitation that is not only okay but also okay to take as the norm for the labor of cultural preservation. "I can get you a twenty-year-old!" is, in that sense, not a labor solution but its opposite: a commitment to the norm that this work will be unpaid.
Galloway, Alexander R. The Interface Effect. Cambridge, UK; Malden, MA: Polity, 2012. Print.
See also Galloway, "Does the Whatever Speak?" In Race After the Internet. Ed. Lisa Nakamura and Peter Chow-White. New York: Routledge, 2012. 111-27. Print.
Nakamura, Lisa. "Don't Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft." In Digital Labor: The Internet as Playground and Factory. Ed. Trebor Scholz. New York: Routledge, 2013. Print.
Wednesday, April 10, 2013
"Increasingly large classes"
It is never really worth the time to point out something egregious the New York Times is doing, but it's rainy out and I'm a bit ranty this morning, so.
The recentish NYT article on machine-grading essays ends thus:
Mark D. Shermis, a professor at the University of Akron in Ohio, supervised the Hewlett Foundation’s contest on automated essay scoring and wrote a paper about the experiment. In his view, the technology — though imperfect — has a place in educational settings.
With increasingly large classes, it is impossible for most teachers to give students meaningful feedback on writing assignments, he said. Plus, he noted, critics of the technology have tended to come from the nation’s best universities, where the level of pedagogy is much better than at most schools.
“Often they come from very prestigious institutions where, in fact, they do a much better job of providing feedback than a machine ever could,” Dr. Shermis said. “There seems to be a lack of appreciation of what is actually going on in the real world.”
Three things going on there.
1. "With increasingly large classes." Oh, what is causing those classes to grow? Nature? The seasons? The moon and tides? Or the failure to hire enough professors to meet the size of the student body in the first place? Shermis suggests that "increasingly large classes" are a fact of nature, not a personnel decision. The author of the article, John Markoff, does not correct.
2. Shermis notes that critics often come from "very prestigious" institutions, by which he actually seems to mean "good" ones, because "they do a much better job of providing feedback than a machine ever could." I highly doubt the software gives more useful feedback than do humans at "less prestigious" institutions. (Don't get me started on the offensiveness of his insinuation about faculty at "less prestigious" institutions.) There is a disgusting and invidious ranking implicit in Shermis's remarks, which imply that "the nation's best universities" are "best" purely through Merit and Talent, and that we have no responsibility to try to get all the nation's college students a commensurately high-quality education. That used to be what the University of California was for, but I guess that's gone.
3. In a neat twist, Shermis decides that qualitatively rich teaching and helpful feedback on essays are not real. The actually existing, documentedly and admittedly better solution of hiring enough faculty to teach your students is placed outside "the real world," in a zone of unreality that makes it unemulable, and certainly not a model for broader educational policy. Whereas having software grade your students' essays is totally realistic and a great idea.
Tuesday, April 9, 2013
In fact, when I asked at yesterday’s conference what the university would look like without its current adjuncts, I received this reply: MOOCs coupled with a new student body of global elite students.
A wonderful post about moving beyond the "repressive hypothesis" of the corporate university, by Karen Gregory.
Tuesday, April 2, 2013
Dear Professor James, #YOLO :)
Any Stein scholar would be struck by this story about a Texas high school student, Kyron Birdine, who wrote "YOLO :)" ["you only live once"]* on his standardized test paper and tweeted the photo to school officials. Birdine was punished with four days of in-school suspension.
The first thing this made me think of was this famous story about Gertrude Stein:
It was a very lovely spring day, Gertrude Stein had been going to the opera every night and going also to the opera in the afternoon and had been otherwise engrossed and it was the period of the final examinations, and there was the examination in William James's course. She sat down with the examination paper before her and she just could not. YOLO :) ** she wrote at the top of her paper, and left.
The next day she had a postal card from William James saying, Dear Miss Stein, I understand perfectly how you feel I often feel like that myself. And underneath it he gave her work the highest mark in his course. (Stein, The Autobiography of Alice B. Toklas 740)
The story is often quoted as evidence of Stein's whimsy, of James's good will, or both. Last semester I told the story to my American Lit students, who immediately asked me (as students always do when you tell this story) whether the same trick would avail in my class. ("Ha, ha, you can try." Never let it be said that I am not a loving teacher.)
But of course, nothing will happen to a Yale undergrad who doesn't take a final exam, except that they'll fail the exam (which, in my course, only counted for perhaps 15% of the grade, and so would by no means have kept the student from passing the course—and you know where a C average at Yale can get you).
Nothing was going to happen to Gertrude Stein, either. She didn't even plan on taking a degree at Radcliffe until the very end, when James persuaded her to try medical school. "There were no difficulties except that Gertrude Stein had never passed more than half of her entrance examinations for Radcliffe, having never intended to take a degree. However with considerable struggle and enough tutoring that was accomplished [yes—she accomplished getting into the college she was already attending, which by the way was Harvard] and Gertrude Stein entered Johns Hopkins Medical School" (740). She later flunked out of same and went to Paris and that was that.
Contrast this charming tale of the 1890s with the artisanal home-canned pickle we are now in. Gertrude Stein took an exam when she absolutely had to, and sometimes not even then; that she could even attend Radcliffe was a mark of her privilege. Kids These Days, in contrast, are constantly subjected to high-stakes tests, consequential for them as individuals and for their school districts. Contrast Stein's story with the myriad gymnastics (figurative and, if literal, often Olympic-grade) students now go through to get into Harvard (7.9% admission rate) or into medical school anywhere (we all have our pre-med stories).
To refuse to take a standardized test is to practically refuse both present and future—even if the test doesn't really mean anything. By all accounts standardized testing is even more constant and more emphasized than it was when I was in high school (I was pre-NCLB), and even I remember how frequent the tests were, how arbitrary-seeming, and above all, how boring—the SSATs, the PSATs, the (and oh, did we laugh about it) Virginia Standards of Learning (SOLs).
Kyron Birdine's exceedingly mild rebellion and its consequences suggest, too, that if anything the tests are even more rigidly policed than they were in the 90s. I remember how each student was interpellated into the role of a potential cheater, a potential violator. Make sure you have the right kind of pencils, make sure you have extra, eyes on your own paper, also cover your paper in case someone else might look over, because if someone else cheats off your paper you are then a cheater too. I don't know about cell phones; in Virginia in the 90s they were considered evidence of dealing drugs and banned from public schools.
But the testing is also as arbitrary as it is compulsory. From the Gawker article linked above:
As the Dallas Observer notes, Kyron is being forced to take both the new State of Texas Assessments of Academic Readiness (STAAR) test and the old Texas Assessment of Knowledge and Skills (TAKS) test, even though only the TAKS will count.
"He and any other Texas students who entered ninth grade before the 2011-12 school year are still evaluated on the TAKS test," the Observer explains. "They're still required to take the STAAR, but mainly so the state can get data they can use to tweak the test before it really matters."
"It wasn't for a grade," Kyron told WFAA's News 8. "Colleges don't see it. It didn't benefit my personal life at all."
Students in Birdine's year were, in other words, being used as a data source to help calibrate the new test. I know I had to do this too, on the SAT: I took an analytical reasoning section, but the scores didn't count for anything because it was new; they just collected the data and used it to calibrate the scoring. I'm sure that's as standard a procedure now as it was in the 90s. It disturbs me a little, though, that it's never occurred to me before that standardized testing companies shouldn't get to waste students' time and collect their data for free—let alone compulsorily.
Kyron Birdine staged his mild protest with good humor, as his self-deprecating "#freeKyron" tweets indicate. And although in the grander scheme of things the punishment seems unjust, the four days of suspension probably won't have much impact on his life. Yet he seems aware of the power dynamic surrounding him, and the ease with which school conduces to criminalization.

"Some people are acting like I tweeted nuclear launch codes. I expressed my opinion on red lines. No more, no less," Birdine writes. He's right; it's not a crime.
A friend replies, with the brutal honesty for which we so prize teenagers, that Birdine's protest was used as a cautionary tale in a class called "MYF."
"myf?"
"'mapping your future you know... The fuck around class where we do nothing[.]"

Nothing could be as important for mapping your future as obediently taking whatever test is put in front of you, it's suggested. Follow the rules. Avoid criminalization.
Another friend responds by joking about the danger that Birdine poses to the Arlington school district:

Birdine jokes back, but the joke has an uneasy undertow. "don't say that. Haha. They might see that as a threat."
Haha.
I often feel like that myself.
*****
*It delights me to no end that Birdine not only uses an internet abbreviation but renders the smiley face on his paper as an emoticon, using punctuation, then tweets a photograph of the analog message. Somebody get this kid in a media theory class.
**As given in the original, what Stein wrote on that paper was "Dear Professor James,...I am so sorry but really I do not feel a bit like an examination paper in philosophy to-day."
Stein, Gertrude. The Autobiography of Alice B. Toklas. In Writings, 1903-1932: Q.E.D., Three Lives, Portraits and Other Short Works, The Autobiography of Alice B. Toklas. Ed. Catharine R. Stimpson and Harriet Scott Chessman. New York: Library of America, 1998. Print. The Library of America 99.
Friday, March 8, 2013
Still further beyond the cave
Somehow the other day in the midst of tweeting the morning sessions of the New Directions in Digital Scholarship symposium I found myself once again accidentally having the "coding/building" conversation, this time with Jonathan Stray. Jonathan's an interesting interlocutor in this regard, since he doesn't come from an academic humanities discipline, but rather from computational journalism. But despite the sentiment expressed by all-around upbeat, positive-thinking guy Patrick Murray-John (and bless him for his general good will), I can't say that I found the conversation especially intriguing. In fact, I feel like I've had and seen this conversation a zillion times before, and so has everyone else.
Everyone should learn t̶o̶ ̶c̶o̶d̶e̶ to argue about whether everyone should learn to code. #fixedthatforyou
— Ted Underwood (@Ted_Underwood) March 4, 2013
To put it bluntly, I think the search for the digital essence of the digital humanities is a classic case of an unexamined and openly theological metaphysics of presence, whether described as "building," "craft," or, as Patrick put it, a "creative or experimental" relation to the tools at hand. (Creative and experimental as opposed to what? Conventional and obedient? Do we only count the using of "tools" if the using is against the grain? If so, remind me why we're building these things in the first place? Or, in other words, is the "creative and experimental" distinction anything other than a tautological insistence on the goodness of the good?)
So I really have nothing to offer in the way of furthering the conversation, feeling as I do that the conversation is in many ways a mistake. (The conversation we might have instead could involve looking at great work that's happening now and talking about what makes it interesting.)
Yet here I am, wittering on all the same. Why?
Well, in the course of our conversation Jonathan said something that struck me as illuminating:
@ncecire I feel compelled to make such distinctions partially as a move to raise the bar -- some really shallow work calls itself "digital"
— jonathanstray (@jonathanstray) March 3, 2013
Yes. This is also why digital humanists keep having The Mistaken Conversation, especially in a moment of institutionalization in which identifying standards for evaluation is highly consequential. There is shallow work, and one wants to raise the bar.
But here's the question: why does the desire to evaluate work—judge its quality—manifest as a desire to delimit the boundaries of the (self-consciously interdisciplinary) discipline?
To make an analogy, there's a lot of shallow work done in literary criticism, too, but we don't call it not literary criticism. We call it bad.
(We have our own metaphysics-of-presence hobbyhorses in litcrit too—I think work that fails to engage with literary form strikes many literary critics as "shallow," with all the resonances that a word like "shallow" entails.)
There seems to me to be something particular about maintaining standards by delimiting boundaries, and there's one other place that we see it all the time: science. There are, of course, judgments about "bad science." But what really gets people up in arms is instances of what gets judged to be pseudoscience—the ultimate affront, pretending to be science when you're not. To name something a pseudoscience is to declare it out of bounds, as not even playing the same language game as science.
There's a vast literature in the history of science establishing why the designation of "pseudoscience" is so thorny (Michael Gordin gives a nice rundown of this in the introduction to The Pseudoscience Wars, and there's a sketchy pulled-from-Zotero bibliography below).* And it's absolutely not for want of historians of science trying to find a boundary. For myself, I do not use the word "pseudoscience" at all, because I'm persuaded that when a theory is bad, what's bad about it isn't the fact that it's pretending to be science. I especially resist applying the term "pseudoscience" to the nineteenth-century scientific racism that I frequently encounter in my research. To endorse as real science only those results that we (a) currently accept and (b) find moral, even when this forces us to exclude concepts that were fully accepted by the scientific mainstream (it doesn't get more mainstream than Louis Agassiz), is presentist, and only functions to sentimentally preserve the purity of an idea of "science" that excises the disturbing vagaries of history and contingency that shape inquiry. Scientific racism was science. We don't get to expel it from the discipline(s) just because it makes science look bad, any more than we get to expel the Bohr model of the atom (which remains in high school chemistry textbooks as part of a teleological, progressive march toward the electron cloud model). Scientific racism structured its inquiries around premises that we now find indefensible—but all scientific inquiries are structured around premises of one sort or another, and we shouldn't expel scientific racism from the history of science in order to preserve the fantasy that they aren't. Louis Agassiz was (1) racist and (2) definitely a scientist, a very important one. We have to be able to hold both of these notions in our heads.
Okay, so I've used a pretty contentious example to demonstrate why I find "pseudoscience" to be a problematic term. But why do we want it so, so much? Why does it seem to make so much sense to repudiate ideas not only on logical or methodological grounds but with a charge of fraudulence?—on, in other words, ethical grounds?
And why do digital humanities and computational journalism, in a less charged but quite persistent way, seem to want to do the same thing?
I think the gatekeeping impulse has a great deal to do with a desire to preserve the field (is it a field?) as a site of virtue. This is what I meant when I wrote about "the virtues of digital humanities" in JDH. The discourse of digital humanities is charged through and through with a language of ethics, which makes the language of ethical breach (pseudo, shallow) a logical concomitant. As I tried to show in the JDH piece, we inherit this frame from a long tradition of framing empiricist methods of knowledge-production as sites of virtue—as, indeed, dependent on virtue, or, as Lisa Spiro puts it, "ethos."
Moreover, the ethical norms imputed to digital humanities (as a necessary concomitant of its praxes, I wish to emphasize) are strongly associated with hopes (and fears) about the institutional change that digital humanities might bring about—democratization of access, collaboration, shortened journal pipelines, dismantling of traditional academic hierarchies, more equitable (or magically invisible) labor models, just to name a few of the hopes pinned on this—what: field? methodology? cluster of projects? tendency? It's no wonder "who's in and who's out" continues to feel like a terribly pressing question, even though its answer seems as elusive as the answer to the "who's in and who's out" question in science.
Nothing I say here will much countervail the impulse to "separat[e] the pilgrims from the rakes," as Larry Laudan puts it. But I'd argue that we should stay conscious of why Laudan's tongue-in-cheek nomenclature actually applies with eerie accuracy—why deprecating "shallow work" is a matter of delimiting boundaries. Pilgrim's progress or celestial railroad, the ethical discourses around digital humanities shape how work is done.
* * * * *
*Thanks to Scott Selisker for pointing me to the Steven Shapin review that pointed me to this book.
While I was writing this, Lindsay Thomas tweeted a reference to Kristen Whissel's essay "The Digital Multitude" as one approach to the "what is digital?" question. In my laziness, I'm just going to link it for now [Muse paywall].
Bud, Robert. “‘Applied Science’: A Phrase in Search of a Meaning.” Isis 103.3 (2012): 537–545. JSTOR. Web. 16 Feb. 2013.
Cooter, Roger. The Cultural Meaning of Popular Science: Phrenology and the Organization of Consent in Nineteenth-century Britain. Cambridge [Cambridgeshire]; New York: Cambridge University Press, 1984. Print. Cambridge History of Medicine.
Cooter, Roger, and Stephen Pumfrey. “Separate Spheres and Public Places: Reflections on the History of Science Popularization and Science in Popular Culture.” History of Science 32 (1994): 237–267. Print.
Daum, Andreas W. “Varieties of Popular Science and the Transformations of Public Knowledge: Some Historical Reflections.” Isis 100.2 (2009): 319–332. University of Chicago. Web. 8 July 2009.
Daston, Lorraine, and Peter Galison. Objectivity. New York: Zone Books; Cambridge, Mass.: Distributed by the MIT Press, 2007. Print.
Fyfe, Aileen, and Bernard V. Lightman, eds. Science in the Marketplace: Nineteenth-century Sites and Experiences. Chicago: University of Chicago Press, 2007. Print.
Gordin, Michael D. The Pseudoscience Wars: Immanuel Velikovsky and the Birth of the Modern Fringe. Chicago: The University of Chicago Press, 2012. Print.
Laudan, Larry. “Views of Progress: Separating the Pilgrims from the Rakes.” Philosophy of the Social Sciences 10.3 (1980): 273–286. pos.sagepub.com. Web. 4 Mar. 2013.
Pennock, Robert T., and Michael Ruse, eds. But Is It Science?: The Philosophical Question in the Creation/Evolution Controversy. Amherst, N.Y.: Prometheus Books, 2009. Print.
Secord, James A. “Knowledge in Transit.” Isis 95.4 (2004): 654–672. JSTOR. Web. 11 Feb. 2013.
Spiro, Lisa. “‘This Is Why We Fight’: Defining the Values of the Digital Humanities.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: U of Minnesota P, 2012. 16–35. Print.
Wallis, Roy. “Science and Pseudo-science.” Social Science Information 24.3 (1985): 585–601. Sage Journals Online. Web. 6 Oct. 2009.
Whissel, Kristen. “The Digital Multitude.” Cinema Journal 49.4 (2010): 90–110. Print.