A Thing I’ve Learned About Teaching

One thing I’ve learned about teaching over the years is that in an undergraduate course, it’s usually a mistake to assign the best scholarly works that you otherwise rely on in your field.

There’s some exceptions. I teach an upper-level honors seminar in colonial African history where the point is to expose students to the historiography of a particular specialized field, and so there I do try to teach what I see as the canon in that field. Though even there I throw a lot of idiosyncratic choices in the mix like David Hecht and Maliqalim Simone’s Invisible Governance or Alexandra Fuller’s Don’t Let’s Go to the Dogs Tonight.

Outside of such courses, however, it seems to me it’s usually a mistake to assign works that are the best in conventional scholarly terms. I’m not saying that as a criticism of such work, not at all. However, when you’re teaching, you need something that’s got some rough edges, some openings, something discussable. I think this is especially true in historical scholarship. A great work of scholarship assigned in a small discussion-oriented course is likely to just fall with a thud on the students. If it’s short, it may provide useful empirical information that the class can make use of later. Its usefulness as an object of discussion fades in direct proportion to its scholarly quality. A terrifically good book that is also a properly scholarly book may have a great deal which makes it discussable, but only among other scholars.

It takes a long time to sort out which books or articles may be scholarly, of high quality, and yet also have some kind of adventuresome or speculative argument which readily engages undergraduates who have no long-term or dedicated interest in historiography or scholarly practice. These are not necessarily the same kinds of books and articles that are written by self-conscious popularizers: I would say that David McCullough's 1776 is as undiscussable as a high-quality monograph on some aspect of the Revolutionary period. 1776 may be more readable, but there's not much to talk about within the text itself, only about the relationship between popularizing history and scholarly history as forms of writing.

This observation is one of the reasons that I’ve rarely used the same syllabus twice for the same course. I pretty much rip them up and start again. I actually have a hard time imagining using the same syllabus twice except in my aforementioned honors seminar, where stability from year to year is institutionally important.


Woof, Woof! Zap!

My daughter was intrigued by the ads for Charlie and the Chocolate Factory. So I decided to strike while the iron’s hot.

I haven’t read the book to her yet, (we’re halfway through The Voyage of the Dawn Treader) or told her the story, nor has she seen the Gene Wilder version. But she managed to demonstrate her narrative literacy again when she whispered to me as Violet Beauregarde began to turn blue and inflate, “Are those guys [Oompa-Loompas] going to sing each time a kid does something bad?” and then about a minute later, “I think bad things are going to happen to all the kids. Except for Charlie.” Anyway, she liked it a lot.

Me? Well, see, when I was about eleven we had this dog. It was the dumbest dog we ever owned, and unfortunately she was also a constant barker, with a very irritating bark. I got dispatched to obedience school (two of them!) with the dog. Both trainers pronounced her untrainable, and both of them appeared to be relentlessly cheery optimists otherwise. One finally, in desperation, suggested a special collar that would give her a mild shock when she barked.

That didn’t work either. My parents had some acquaintances who lived out on the high desert with lots and lots of land. They liked the dog and the breed and agreed to take her. Good thing too since our neighbor was about to sue us.

I bring this up because I’d fit Tim Burton with a shock collar like that one, designed to go off whenever he starts to do something terrifically stupid or miscalculated in a film that’s otherwise humming along just fine. Only I don’t think it would help him any more than it helped that dog.

The film is a really dead-on, faithful version of the book right up until Wonka walks out of the factory. Then comes strike one: Johnny Depp's portrayal. Now I like Depp, enormously, but he screws the pooch this time. It's just a miscued performance, and yes, whether he meant to or not, it has a whiff of Michael Jackson in it. Bad, bad idea. Burton's been crowing about how faithful he is to the book, but it's hard to imagine anything more faithless than making Wonka a kind of boy-man aesthete suffering from arrested development. The key to Wonka is that he's a misanthropist (like Dahl) who feels himself the last decent man in the world (with the important addition that he sees creativity and imagination as a key part of being decent). He's delighted to be proved wrong by Charlie and Grandpa Joe. He's eccentric, yes, but it's the wise old version of eccentric, and Dahl's Wonka has the capacity to care deeply about the few people who deserve it, which Depp's Wonka doesn't.

But that’s just strike one. There’s a lot of good swings here, so you’d think Burton might still hit it out of the park. But then he blows the game completely, at least for me, with a god-awful backstory for Wonka. Wonka’s estranged from his candy-hating dentist father and has to seek redemption through loving his father again.

I’ve never seen a filmmaker so capable of getting so many things right and then just colossally miscalculating with horrible plot ideas or stagings that rip the guts out of everything he’s done to that point in the film. I can only think of four films where he doesn’t blow it that way (Pee-Wee, Edward Scissorhands, Ed Wood, Nightmare Before Christmas).


Whoa. Yeah, yeah. Hmm. Yes! No! Whoa.

Or, “My weekend with the new Harry Potter book”.

Spoilers abound, so avert your eyes if you want to remain innocent of the plot details.

At the beginning of the book, Rowling appeared to signal that the status quo in Potterland was about to change, which was something of a relief after the wheel-spinning of the last book. Still, I confess I was thinking she'd probably back off by the end, that Snape's situation would once again be returned to mysterious ambiguity, that Voldemort would receive some minor setback, that Hogwarts would go on as it has. I knew someone was supposed to die, and that the money was on Dumbledore. After the first four chapters or so, I was thinking instead that it would probably be Draco Malfoy.

Fooled me. Rowling really does seem to be building the narrative towards a conclusion, and I am very glad for it. I'm still vaguely nervous that she'll somehow backslide and revert the whole story to the Hogwarts norm, or that Dumbledore will just have had a comic-book death from which he can be resurrected (or a Jedi death, so that he'll be a blue glowie advising Harry from beyond the grave). The latter seems particularly possible, given the mysteries about the false Horcrux that Harry and Dumbledore secured. I hope not.

The most likeable thing about the book is that it carries forward Harry's classically adolescent behavior from the last book, but then has Harry finally break through to a maturity of his own. For the first time, after Dumbledore explains to him how Voldemort has forced the terms of the prophecy to come true, Harry seems to actually be an active protagonist. Up until now, the character has always ridden the plot's rollercoaster, a slave to its conveyances. He's never really done anything, just had things done to him. The predictability of the narrative structure and Harry's passivity within the plot (coupled with his author-manipulated thick-headedness or cluelessness) have been among the many weaknesses of these books. But the clouds break here and the sun comes out. Harry knows enough to act, and has an independent sense of what he has to, and wants to, achieve.

Moreover, assuming that everything about Malfoy and Snape is as it appears in the plot's climax, Harry also knows that his own judgement of the situation is superior to everyone else's, including Dumbledore's. The book sets you up for much of its length to think that Dumbledore, Hermione, and Ron are correct in their skepticism about Harry's suspicions of Malfoy and Snape, and then abruptly reveals that Harry was perfectly right all along. Harry's become the Captain Kirk of the plot, with Hermione his Spock and Ron his McCoy.

I was a little surprised at how simple Snape's actions at the end turned out to be. It really doesn't matter now what we find out about Snape later: this pretty well seals the character's fate and status within the story. He's flipped to being a straightforward antagonist. Which makes all the effort lavished on Snape up to this point a bit odd, as if Rowling changed her mind about him. It's one of the problems with the books in general. They're partly driven by the public-school, Tom Brown's Schooldays setting, which requires at least one sadistic schoolmaster who hates the protagonist. Fine, that's Snape. But up to this point, Dumbledore has also been in the Mr. Chips role as headmaster, and when there's a kindly headmaster who has a deep personal connection to the protagonist, the continued employment of the sadist becomes a bit harder to justify unless the sadist has some kind of hold over the kindly headmaster. In the last book, Rowling finally explained why Dumbledore allowed the Dursleys' torment of Harry to continue so long (and there's a nice, brief, sharp exchange about that in this one as well), but now Dumbledore's faith in Snape simply seems a bit weird, and his tolerance for Snape's abusiveness even more so. I assume there's more to come on this in the next book, but it does feel now as if a good deal of set-up went to waste.

The book ends with the clear sense that the next book will not be set at Hogwarts. I hope Rowling can carry through on that. It would be really annoying to have Harry constantly getting demerits or whatever because he's sneaking off to find Horcruxes. I was hoping a bit also that at least some government officials would finally align usefully alongside Harry: the incompetence routine is getting a bit old. Maybe in the next book.

A final thought. At least to me it seems very possible now that a long-held speculation about Voldemort and Harry is true: they’re related by blood. In fact, my guess is that Voldemort is Harry’s uncle. We know very little about Harry’s mother even now, and I suspect that’s by design. Note that we’ve seen nothing of Voldemort’s father’s family: it would be very easy for Lily to turn out to be Voldemort’s half-sister.


Book Notes: Theory’s Empire

So, over at The Valve, they’re talking about the new anthology Theory’s Empire, and I was asked to join in the fun. Beware of what you ask for: I may have achieved true Holbonian length here, at 3,000 words or so.

—————————-

I’ll start with Jeff Reid’s cartoon “Breakfast Theory: A Morning Methodology”, just as the book starts. I was one of the thousands of academics in graduate school or newly hired in 1989 that cut that cartoon out and put it up on a bulletin board. I remember showing it to my wife, saying it was the funniest thing I’d seen. She read it attentively and smiled politely.

The cartoon stayed funny, but it also started to become an emblem of something else for me, a growing awareness of distress. In 1989, I was well into graduate school. I had actually had a lot of exposure to "critical theory" as an undergraduate major in history and English in the mid-1980s. I had even had a class with Judith Butler on Foucault while she was at Wesleyan. I liked theory, even when I felt I didn't have the faintest idea what was going on, because if nothing else you could sense the energy behind it: the theorists we read were urgently engaged by their work, and the professors who taught the theorists were among the most exciting and skilled teachers at the college. In the backwash of the 1960s and 1970s, many of us had a restless sense that the next intellectual and political step was waiting to be taken, though none of us knew what that might be. Theory made you feel almost like you were in the dream of the Enlightenment again, everyone speaking the same language with disciplines and specializations set aside.

The cartoon was funny for those of us who spent time reading, thinking, speaking theory at a very particular moment in the institutional and intellectual history of American academia. For anyone who didn't, the cartoon is mildly amusing in another way: as a kind of pre-Sokal confirmation that the eggheads in the humanities had gone deep into the swamps of nonsense and pomposity. And this is how the cartoon wormed its way into my head: both as a funny satire of things I did and said, and as a salvage operation dredging up an intellectual self already alienated by the distance between what I found myself doing as an academic-in-training and the underlying desires I'd brought with me when I signed up to get a Ph.D.

Which is still how I feel now about "theory" and its alleged overthrow. I warm to the talk that it was an empire, but I'm equally aware that my sense of it as such is a direct personal consequence of my individual experience of academic careerism. I warm to the various critiques and denunciations of theory in the volume, partly because I get both the insider and outsider versions of them, the same way I could read the cartoon in two idioms; for the same reason, the glee of some contributors can be a bit off-putting. This is why I tend to bristle on one hand at know-nothing denunciations of theory, like E.O. Wilson's in Consilience, but also at circle-the-wagons defenses of it, or even those defenses which argue that the problem with theory was only its occasional excesses and over-zealous acolytes.

The main point, and it is one made again and again throughout the anthology, is that theory was above all a professional consciousness, a way of feeling and being academic that was native to a past time and place (the 1980s and early 1990s). You can’t just separate out some of the chief manifestations of the era of theory, like the star system, as unrelated epiphenomena, or insist that we just talk about the actual texts. (Though at the same time, the volume could really use an ethnographic retelling of a conference or conversation from the late 1980s or early 1990s. Anthony Appiah comes closest in his short essay, and maybe there is nothing that really fits the bill besides a David Lodge novel.)

This is not to say that theory’s moment is done and gone, with no harm to anyone. There was lasting damage done in a variety of ways.

A number of contributors observe that one lasting effect of the theoretical moment on academic writing in general is not so much the feared disfigurations of jargon but the escalating grandiosity of scholarly claims, the overinflation of argument, the Kissinger-joke ramping up of the presumed stakes in scholarly writing and speaking. Theory, particularly but not solely in literary studies, withdrew from an imagined relation to public discourse which apportioned it a mostly modest role, but in exact inverse proportion to that retreat it developed a more and more exaggerated sense of the importance of its own discourse.

You cannot just make this a folly of the theorists, or talk about it in isolation from the economic and institutional changes in the academy itself. Academic literary critics in 1950, like most professors, made poor salaries while working for institutions which were still relatively distant from American mass society. Professors in 1989, particularly those employed by selective colleges and universities, were working for institutions which were relatively wealthy, paid good salaries and offered good benefits, and which were now a familiar component of the American dream. Most research university departments in the humanities and the social sciences at that time also had to confront a seismic shift in the internal budgeting of their institutions: external grants not only kept the sciences going but also funded the whole institution in major ways. The scientists weren't usually being modest about the usefulness of their research in their grant applications, and a good deal of that spilled over as a pressure on the rest of the academy.

This inflation has a lot to do with explaining the relation between the first wave of high theory and its evolution into historicism and identity politics of the race/class/gender variety, much discussed in the anthology. (In many ways, this mutation is the central issue under discussion.) On paper, this relation is hard to explain: it is not an easy or natural evolution of argument from the initial round of continental postmodern or poststructuralist philosophy, much less so from the first wave of the high priests of deconstruction in the United States like Paul de Man. The contributors to the anthology hammer on this point again and again, but it is worth emphasizing: whatever "theory" began as, it quickly metastasized into a much vaguer way of being and acting that could be found in most corners and byways of the academic humanities, one that was often a new and virulent practice of academic warfare which left a lot of casualties and fortifications in its wake.

It is true that a response to the volume that insists on reconfining theory to a properly constrained set of texts and authors has a valid point. If nothing else, it leads to taking the actual content of actual writing seriously, rather than treating it as just a marker of academic sociology. Saussure, Foucault, Derrida, Lacan, and even many of the various American academic superstars who dominated the era of theory like Fish, Jameson, or Spivak had important, substantive arguments to make that cannot just be waved away or ignored. (Nor does this anthology ignore them: it collects some smart, detailed ripostes to the substantive arguments of Derrida and many other theorists.) Still, I agree with many of Theory's Empire's authors: the geist and historical moment of theory is an equally important part of the subject.

Which maybe is best known experientially, by those of us who lived through the sometimes-subtle, sometimes-blatant transformations high theory brought to academic practice and consciousness. Many are right to say that is a perilous claim, not to mention a potentially narcissistic one: it is a short step from that insistence that “I lived it, so I know it”, to blasting everything you don’t like as “postmodernism”, to ignoring the things that made various mutations and permutations of theory attractive and productive, to alienating your present intellectual self from the self that found it all very exciting and generative.

It is also dangerous because you begin to overread the theoretical moment as the causal agent behind every problem of the contemporary academy. Valentine Cunningham, for example, attributes almost every novelty in the vocabulary and practice of humanistic scholarship since 1960 to theory's conquest. There are deeper drivers here, and they not only survive theory, but predate it. Among them is academic careerism itself. Theory sharpened its knives, but aspirant scholars in the humanities and elsewhere must still today present an account of themselves as more brilliant, more original and more important than any others of their cohort while also pledging their fidelity to reigning orthodoxies in their discipline. Theory's overthrow has not changed any of that, nor did theory cause it to happen. Too many talented people chasing too few desirable jobs did. Cunningham argues that "criticism always claims newness", but all humanistic scholarship since modernism or so does, and in this it is only following the lead of literature itself, as Morris Dickstein notes in his essay in the volume.

This is not to underrate the particular forms of self-interest that theory serviced in very particular ways. J.G. Merquior's essay "Theorrhea and Kulturkritik" notes this by commenting, "That a deep cultural crisis is endemic to historical modernity seems to have been more eagerly assumed than properly demonstrated, no doubt because, more often than not, those who generally do the assuming, 'humanist intellectuals', have every interest in being perceived as soul doctors to a sick civilization". In many ways, theory was the ultimate careerist maneuver, because its normal operations conferred upon the theorist a position of epistemologically unimpeachable, self-confirming authority (in part by claiming to abjure authority) while also freeing the theorist from having to know anything but theory in order to exert such authority. I cannot be the only person who was subjected in graduate school (or later) to the peculiar spectacle of a dedicated, philosophically rigorous postmodernist proclaiming that only those who had thoroughly read the entire corpus of a particular theorist's work should be permitted to speak about it. Indeed, such gestures of intellectual hypocrisy, some of them more subtle, some less so, are a particular target of mockery and anger from the authors in Theory's Empire, and with some justification. It was hard not to see Derrida's infamous assertion of conventional authorial rights over his interview on Heidegger as one of many such moments of contradiction.

One of the other oddities of the anthology is that almost no one gives a convincing account of their own survival of colonial domination by theory (including those essays contemporaneous with theory’s rise, which already adopt the posture of defeated defiance). I suppose you could say that some paint themselves as autochthonous survivors who dug themselves into the institutional maquis for a long guerilla struggle and are now celebrating as the colonizer’s regime collapses. Others set themselves up more as members of a lost Stone Age tribe who were never contaminated by the colonizer’s modernity, or as archaeologists digging into layers of criticism that lie below the theory strata. A few are positioned as latter-day nativists reaching back to the precolonial era for renovation, and still others, as nationalists who worked with the empire, have assimilated the colonizer’s ways but are now ready to renounce him and declare independence. (Pretty close to my self-presentation here.)

What’s important in this regard is that because the anthology collects many older essays as well as recent ones, it gives rise to some suspicion that theory’s empire was considerably less imperial than its most strident critics tend to claim, that it was always less influential and powerful than either the lords of theory or their enemies suggested. Perhaps I’m only inclined to think that because that’s what I think about other empires, too, but I think many academics simply amiably went about their business in the era of high theory, borrowing a bit from such work here and there, but hardly worshipping at its altars or angrily burning its fetishes. Certainly that’s the way Foucault was commonly appropriated by many historians, as a practical device for identifying new subjects to research (said historians then, as often as not, debunked Foucault’s concrete historical claims in consequence.)

There’s some other points that emerge along the way in the book that strike me as important. One is the amnesia of theory at its high-water mark, which I think was both a substantive feature of theoretical argument and sociological feature of the reproduction of the humanistic academy in those years. So when John Ellis observes of Stanley Fish’s work that it ignored the past, that in Fish’s work, “philosophy of science begins with Thomas Kuhn, serious questions about the idea of truth and the positivist theory of language begin with Derrida, jurisprudence begins with the radical Critical Legal Studies movement”, I think he’s exactly right, and not just about Fish.

I think this became a feature of how many of us were trained and how we trained ourselves, a part of the ordinary discourse of conferences, reading groups, and so on. Theory began with the last person who was commonly authenticated as its progenitor, and that was good enough, largely because it helped younger academics frame themselves as making original gestures or "interventions" into various debates. I had a senior colleague in anthropology who used to fall into amusing rants every time he and I went to hear a presentation by a young anthropologist, and with some reason, because in the vast majority of such presentations, the author would proclaim, often citing critical theory, that they were beginning for the first time to reflexively consider the role of the anthropologist himself or herself in generating anthropological knowledge. He was right: it is a silly gesture, since such concerns have haunted anthropology all the way back to its origins. The same affliction beset us all across a wide swath of disciplines: we reinvented wheels, fire, alphabets and chortled in satisfaction at our own cleverness. Theory dropped into our midst like commodities drop into a cargo cult, and our reaction was roughly the same, right up to eagerly scanning the skies for the next French thinker to drop down and inventing our own crude substitutes when the interval between drops grew too lengthy.

This makes me think that another issue which gets discussed here and there, but whose importance is underappreciated, is the role of theory in shaping the average or ordinary work of scholarship. Almost all the hue and cry in the essays is either about the foundational or canonical theorists or about various academic superstars. While I think it's true, as I suggested earlier, that many scholars only had a passing and pragmatic relation to theory, I also think theory was a kind of attractor that pulled a wave of "ordinary" scholarship towards it. I remember being paralyzed by one of the first scholarly book reviews I wrote, holding on to it for months, because I found when I had finished that I'd written a very hostile review, largely because a work which might have had some workaday, craftsmanlike value as a monograph about the history of European representations of African bodies had wrapped itself in a rigid Foucauldian straitjacket and used theory as a justification for its chaotic and empirically weak arguments. (I was paralyzed because I felt bad about roughing up the author so much, but I got over it and published it eventually.)

This would be one of my sharpest criticisms of the subspecies of theory that became postcolonialism: the ordinary work of postcolonial scholarship takes the already deeply problematic arguments and style of the dominant superstars like Spivak, Prakash and Bhabha and operationalizes them as yeoman-level banality. There's a kind of missing generation of monographs as a result, an absence of substantive, minutely authoritative, carefully researched and highly specialized knowledge that serves as a foundation for more sweeping syntheses and broadly argued scholarship. As I look over my shelves, I spot numerous works in history, cultural anthropology, critical theory, literary studies, cultural studies whose only major lasting usefulness is as a historical document of a theoretical moment, works that you literally wouldn't consult for any other purpose. As Erin O'Connor notes in her essay, the problem here is in part the dissemination of formulas, of totemic gestures, and more frustratingly, of a scholarship which is consumed by an understanding of its own impossibility, or, as M.H. Abrams says of Hillis Miller, of a deliberate dedication not just to labyrinths but to dead ends within labyrinths.

Though once again, it's important to remember that the deeper driver here is not the boogeyman of theory, but the whole of academic careerism. Our bookshelves still groan with books and articles that need not have been written, but they will continue to be written as long as they are the fetish which proves that the academic apprentice is now a worthy journeyman who can step onto the tenure track. But at least if we must write unnecessary books, it would be nice if those books might add minutely to knowledge of some specialized subject. In fact, one of the good things that came out of the moment of theory was the legitimate expansion of academic subject matter: I was pleasantly surprised to see that the bitching and moaning about cultural studies, popular culture and "trivial subjects" from scholars who superficially call for a return to a high literary canon as the proper subject of literary criticism was kept to a minimum in the volume; indeed, the longest specific criticism of cultural studies, by Stephen Adam Schwartz, never indulges in this vice. (I especially liked Schwartz's observation that cultural studies is actually governed by methodological individualism, and thus is a form of ethnocentrism: my principal answer would be to say that for me that's a feature rather than a bug.)

It is a straightforwardly good thing that historians now write about a whole range of topics that were relatively unstudied in 1965; a straightforwardly good thing that literary critics read and think about a much wider range of texts than they once did. As Morris Dickstein notes, the era of high theory in the 1980s was not the first to discover the problem that there might not be a hell of a lot left to say about literary works that people had been reading and interpreting for centuries. This is why it makes me all the more regretful that theory dragged so much of the workaday business of academic writing towards its own forms of epistemological blockage and vacuity, because there were at least a great many new things to write about.

I suppose if I had one hope from this volume, it's that people who read it and take it seriously won't be the kind of lazy Sokalites that Michael Berube justifiably complains about, because nowhere in the volume does anyone claim that doing literary analysis or humanistic scholarship is easy or straightforward. If this is a roadmap to the future, it does not go from point A to point B, much to its credit.


Flawless Victory

Well, so much for Tribble. Seriously. Whoever he or she is, I hope there's at least some reflection going on there in the wake of some very careful, methodical criticism from, oh, just about every academic blogger in existence.

I didn’t notice that the article actually says that the author is in a humanities department, so that much is pinned down.

The real screamer, which almost everyone picked up on and which I didn't mention, is this passage:

“The content of the blog may be less worrisome than the fact of the blog itself. Several committee members expressed concern that a blogger who joined our staff might air departmental dirty laundry (real or imagined) on the cyber clothesline for the world to see. Past good behavior is no guarantee against future lapses of professional decorum.”

If we’re talking academic standards here, that’s a passage that any scholar ought to be ashamed of penning–unless a pseudonymous piece in the Chronicle is subject to even lower standards than Tribble assigns to blogs. Taken seriously, that’s an argument that should lead to hiring no one.

Though if ever there was a department that needed to be taken to the cleaners, I’m beginning to think it’s Tribble’s.


The Trouble With Tribble

A pseudonymous professor at a small liberal arts college in the Midwest advises academic job candidates not to blog. His remarks are addressed to graduate students and junior professors, but honestly, they apply to anyone who might ever want to move from one institution to another. I’ve long accepted that if I ever did feel a desire to move, this blog would probably be the thing that would put the final nail in any application. (After the dabbling in cultural studies and game studies and my unorthodox attitudes towards my major field of specialization.)

Tribble’s reasoning isn’t entirely about blogging: it reveals a larger and more typical kind of academic parochialism. Yes, there’s certainly a whiff of pure distaste for blogs. But it’s also not blogs as such, but the decomposition of guild controls over what is verified as legitimate scholarship that they potentially represent. It’s the same attitude that lets other scholars justify opposing electronic publication of journals: all in the name of defending the high standards of peer reviewed publication. Tribble doesn’t tell us what discipline he’s interviewing in. If he’s in the humanities (as I suspect he is), defending the normal practice of peer review as being something worth saving is a bigger problem than an attitude towards blogs. Most peer review in the humanities functions less as a way to authenticate the accuracy and originality of a journal article or manuscript and more as a way to confirm that the author has the necessary hierarchical position within academia to publish the type of work they are trying to publish and as a tool for the enforcement of orthodoxy. Tribble’s defense here is about an entire view of academic knowledge to which blogs are only one small challenge.

It’s also evidence that the hiring process is perhaps the only place where faculty are allowed to let personal feelings run wild. Most of us observe very tight constraints on expressing our feelings about colleagues at other evaluative moments, particularly tenure. That’s one of the issues involved in the “collegiality” debate. On one hand, it feels as if it ought to be legitimate to talk about whether someone works well with others or causes lots of organizational havoc; on the other hand, most of us know that those discussions are a chance to let the devil in, to give people with an agenda (political or personal) a tool that they will misuse. Hiring is a different matter: very few people feel the same constraint.

I wouldn’t want them to be constrained formally, either. You should be able to talk about how your subjective, personal reaction to a candidate. The only problem is that such talk reveals as much about the talker as it does the subject of his or her commentary. Tribble’s comments about the bloggers in his search end up giving me a bad feeling about his own character, a sense of small-mindedness and conformism. I’ve got no problem with a colleague commenting to me during a search that a particular candidate seemed unstable or arrogant or weird–unless the person making the remark is unstable or arrogant or weird, which is sometimes the case. It’s like listening to someone who has published almost nothing in their career fret about a candidate’s publication record. It’s only annoying when it seems to lack self-awareness and proportionality. As Tribble’s complaints seem to, in my subjective feeling.

However, there’s two points he makes that I think have some relevance. I think a blog is in the public sphere, and I think as a contribution to the public sphere, it should be selective. Not because you’re thinking about your potential employers, but because you’re thinking about what does and does not belong in the public sphere. I think reputation, the creation of an externalized self who “speaks”, ought to be an important part of blogging. This sets me apart from the bloggers who see what they do as a diary, as self-revelation and self-exploration. It’s an old debate. I think if I were going to publish a diary, I’d consider being anonymous out of fear for my reputation. It’s also a question of purpose: if the felt need to publish is about doing self-work in a public space, about using revelation to clarify questions about one’s own life or attract support and insight from readers, I’m not sure it has any need to be under one’s own name. So if you’re a diaristic, personal blogger, I wouldn’t connect your blog to your academic career or identify it as academic under your own name. Not even necessarily because you’re afraid of the consequences, but simply because I don’t see what the purpose of doing so might be.

The second thing Tribble says that's fair is that in a selection process where 10-30 candidates may be perfectly equal on paper, anything that might be used to peel away candidates is going to be used. Any information you provide that doesn't strictly help or enhance your candidacy is not a good idea. So it's true in this respect that blogging is a bad idea. I think the prudent academic blogger in graduate school or on the job market would want to constrain their writing for the moment to scholarly or pedagogical topics, and to think mindfully of an audience wider than the Usual Suspects, if they wanted to blog under their own name and draw attention to the blog as an academic one. So, for example, the bloggers at Savage Minds could, I think, confidently list that blog on their c.v.–but even there, you'd have to be mindful that all it would take is one person on a search committee with a particular dislike for an academic argument you'd made on your blog. Until someone sees what you think, you're nothing but potential, and almost everyone can tell soothing stories about potential. The moment you make it clear what you think about a scholarly issue of importance, you're inviting the small-minded, the conformists, the self-absorbed defenders of an orthodoxy, the control freaks and so on to scratch you off a list.


Have You Ever…

mistyped a word, noticed it a bit later, gone to fix it, mistyped it again, growled at yourself, then mistyped it again when you tried to fix it?

This means it’s time to stop working on the book for today.


The Funny Pages

OK, it seems we have a little consensus here: whether or not bloggers are thin-skinned, Doonesbury isn't particularly funny or sharply observed any longer, and hasn't been for a while. I can go along with that.

So what do you all read on the standard newspaper funny pages (not webcomics)? Can I confess to actually liking Edge City? (Which, coincidentally, is doing a set of strips on blogging at the moment.) It’s about the only “daily life” strip that seems even vaguely contemporary and non-generic, and at times, it’s actually pretty amusing.

I also read The Boondocks, though it takes itself way too seriously at times and is struggling to say anything new.

I have a guilty taste for Funky Winkerbean (though holy shit, man, it's a bit depressing at times: it would be nice if the strip could go for a week without referencing alcoholism, cancer or post-traumatic stress disorder). I also kind of like the schmaltz of For Better or For Worse at times.

Really guilty secret: I actually read Rex Morgan M.D. You get sucked into these things and then you can't keep your eyes from wandering to those little boxes just to see what's happening. How many people think Buck Foxworth's box that he hid in Rex and June's house actually has drugs in it? But then, I thought he was going to be a deranged grifter who would attack June when he was first introduced…

The Philly Inquirer doesn't carry The Phantom, but I'd read that if it did, simply because it's one of the weirdest strips of all time. Imagine Tarzan in a leotard the color of grapes, running around the jungle. That's only the beginning of the weirdness. It would make a good candidate for a sort of "Space Ghost Coast to Coast" treatment if King Features ever wanted to try to revitalize the strip in some fashion.


Violence and Agency

Caleb McDaniel makes some important observations here and at his own blog, and I’ve been thinking a lot about how to respond.

These are certainly not the kinds of discussions I had in mind when I asked that we put aside the little extremist hobgoblins for just a bit. Mostly I’d just rather we all stay clear of the kind of whiny and banal partisanship where the challenge of terrorism is just more fuel for the spin-meisters, more occasion for subpar blogger imitations of the punditocracy.

Caleb’s thoughts are quite the opposite, deep and challenging.

To begin, I simply disagree with his elevation of peace to a social aspiration equivalent to justice or freedom, and with the proposition that peace is the necessary precondition of either.

It may be true that civility is intrinsically a peaceful state, that to practice it constrains us not just from physical violence but even from totalizing verbal or cultural aggression against an opponent. The ideal democratic civil society is a game, which for me is anything but a trivializing or dismissive metaphor. This is a utilitarian claim: political conflict produces the most generative results for the whole of a society when it takes place within constraints or rules. A game is a topography of conflict, a map. You can't go off the edge of the map: there you will find monsters, or the edge of the world.

To play a game, both parties consent to play by the rules. Yes, sometimes one party cheats, but there is a big difference between the kind of cheating that preserves the game’s essential terms and the kind of systematic contempt for the game that ultimately destroys it–or the spoiler who throws the board across the room when they’re going to lose. If one party sits down at the table to play, and obeys the rules, and the other person won’t even acknowledge the game at all, then there are no constraints on either player. There is no game.

A democratic civil society cannot incorporate someone who will not even acknowledge its existence. Some acts of refusal can simply be ignored: they do not challenge or contest civility, merely stand apart from it. A game is not threatened by someone who will not play but does not contend. Some acts of refusal cannot be ignored. You cannot have peace with those who will not make peace. You cannot make peace if the price of peace is to give up the purpose of peace. You cannot make peace if it means an end to justice or freedom.

What would we offer to al-Qaeda? Would we treat with them as if they had already established their sovereignty over various Islamic nations or societies, as if they had a right to decide what should or should not happen in those places? How different would that be from supporting the autocrats that we already support? Al-Qaeda or various groups like it have no more right to demand a particular relation between all people living in particular places and the rest of global society than any other group or interest. Would we offer them a constraint on our own cultural and social exports, a sovereignty over what is seen and known and consumed in some particular place or location in the world? What business do we have doing that? Anything that we can offer in a peace involves the same exercises of imperial will and domination that the critics of the West so vehemently object to, the same intermingled sovereignties.

Unless peace in this context only means forswearing violence as a response to violence, rather than the achievement of settlement or agreement between antagonists. I don't see why one should categorically do so. My aspiration would be that someday, al-Qaeda and all the men and women who support it or movements like it should simply cease to exist, that there would be no such thing, that as a movement or worldview or set of practices it would be historical and strange to the world of the present. I don't have the same aspiration for myself or my society or the institutions I inhabit. So there's an asymmetry here that can't be waved away by saying there is no "us" and "them". There is.

I recognize that military violence is not the main method that will accomplish that purpose. The main method, as far as I can tell, is time coupled with an unswerving dedication to our core values and the enrichment (in many senses) of the world as a whole. However, it would advance the march towards that future day if we catch, try, and imprison terrorists who we can prove, by our rules and on our terms, were conspiring to attack. It would help to successfully defuse their bombs or prevent their assaults with sensible security measures. It would advance the march towards that future if we happened to find out where Osama bin Laden and some of his chief leaders are gathered and kill them. It did advance the march towards that future to destroy the bases of al-Qaeda in Afghanistan and remove a state leadership that openly encouraged non-state organizations to train on its territory for attacks on the civilian populations of other nations. The point is not to forswear violence, but to recognize its necessarily limited role in a conflict that is much more about ideas, about individual and social aspiration, about the achievement and desirability of freedom. Am I here saying that there is no right of secession from global modernity, that it is legitimate under some circumstances to violently compel secessionists to remain? Yes. That was Lincoln's choice, and I think it was right then as it is now, as long as one understands its limits and its dangers.

The defense of freedom and the aspiration for justice requires the possibility of violence, at least as long as you understand both freedom and justice to be things which are only meaningful on this earth, in this life, to us as living human beings. Putting peace in the same exalted place requires giving up much of what we now understand as a necessity for justice in the here and now. That move is not a targeted critique of the specific policies of the Bush Administration. It’s uncontainable: it quickly swallows all uses of the military, all operations of the police, all acts of incarceration, all civil settlements in which the state enforces a punitive or nonconsensual judgement against one party.

Caleb observes that seeing the London or Madrid bombers or the 9/11 pilots as attacking not just innocent individuals but freedom itself forecloses any understanding of the terrorists' real motivations, their actual consciousness. I don't think so. Both in purely empirical terms and in moral argument, we can recognize a difference between intent and result, consciousness of agency and expression of agency–and recognize that the two things affect each other. It may be that the London bombers had a deeply rooted and situated social and cultural understanding of their actions which has nothing to do with the relative superficiality of my claim that they attacked "freedom" itself. We should be interested in how they understand themselves, in seeing the world as they see it, both to understand the causality of terrorism and to understand what might motivate its practitioners to play the game of modernity by the rules. (It's obvious that mere wealth or incorporation within cosmopolitan culture is not sufficient, given that the 9/11 hijackers were anything but marginalized or abject in their social background.) But the consequence of their actions is a non-consensual, non-democratic constraint on the freedom of individuals to do what they like, a deprivation of their rights. Moreover, because I don't think most of the terrorists in this case are idiots, and because they know very well what the perceived consequences of their actions are, I assume there's something of a feedback loop here. Whatever motivations they begin with, their motivations in the end are necessarily transformed by a consciousness of their effects on the practice and possibility of freedom–and by an increasingly depraved and perverse indifference to the humanity of their victims, an intentional sociopathy. Whoever did this, they've been living in London for some time, breathing its air, seeing its people. Whatever the first steps they took on the path to setting those bombs, the path ends at war with freedom itself, in full consciousness of the innocence of their victims.

This answers Caleb when he asks why we don't treat terrorist attacks in the same way we treat natural disasters. This is where the count of casualties is somewhat misleading, only the surface of things. It's true we don't think very clearly at times about the comparative scale of suffering in relation to the efforts we expend to prevent suffering. Fifty people dead to terrorists in one sense doesn't compare well to hundreds of thousands dead to a tsunami, and you might legitimately wonder why the governments of the world are so cheap as to drag their heels about spending money on tsunami warning systems but will expend considerable effort to respond to a single terrorist assault. But the difference in the end is not numbers: it is agency. We understand, and should believe, that death and suffering that comes from the direct, intentional, deliberate agency of human individuals is categorically different from natural disaster, from the abstract consequences of collective human action, or from the accidental and unintended consequences of individual action. It's why we are often legitimately transfixed by sensational criminal trials: we believe, and I think should continue to believe in defiance of bad evolutionary psychology or lazy genetic determinism, that when someone is murdered, some individual deliberately committed murder. You can have a subtle and complicated view of causality, but at the end of the day, an act of murder is a final, fixed and elemental consequence of the intentional agency of a human being.

The one thing that gets me most riled up in some kinds of conversations is the implication that somehow understanding the cultural, social or historical habitus of a terrorist mediates the finality of their agency, that this understanding is an alibi. This is where the critique of the “root causes” crowd has some meaningful teeth to it, largely because (in my judgement), that demand is conventionally applied so unevenly. If we have to understand the habitus of an Islamic terrorist and that understanding leavens or softens our judgement of their contingent responsibility for what they do, if we understand them as a product of and expression of an underlying condition and less an individual agent who chooses freely to murder, then why don’t we have to apply the same understanding to George Bush, Bush’s government officials, compradorial elites in the Arab world, or any other group or individual we might wish to criticize? I’m not saying that this is what Caleb claims, but I would say that this is the danger that lurks further down the road of his argument. If we explain the bombing of London in terms of habitus, as the product or expression of a social and historical condition that precedes and lies outside of the individual agency of the bombers, then we have to understand the American attack on Iraq in the same terms, to apply our ethnographic gaze evenly. There are very few people I can think of that do that. If you don’t, you’re just as horribly ethnocentric as the most overt bigot: your own people are individual agents responsible for their individual actions, while “they” are explained by externalities and root causes.

Nor can you, as Caleb does, say that we should reject "us" and "them" and then try to view the violence of the terrorists as symmetrical to the violence used against terrorism. This too is a flattening effect. If the individuals who commit terrorist actions are exactly like us in their motivations, their understanding of violence, their deployment of instrumental reason–if they are playing the game as we play it–then why do they bomb or kill? You're either going to end up explaining their actions in terms of prior causes external to them (and probably not do the same for the "us" that you want to forswear) or you're going to end up arguing that violence is irrational for all sides, that no one is doing what is in their best interest, that everyone commits barbarities while no one is a barbarian. You're going to end up with an account of violence which has no agency to it whatsoever, as I think Caleb does. Nobody does violence; violence just is. I'm not responsible for violence; neither are the terrorists. Which is a perspective that at the least requires an explanation: if violence is purely non-productive, never in the interests of those who use it, why does it exist at all?

It also requires viewing violence in historical time from some Olympian remove that allows one to ignore its uneven effects on causality and change. As a technology of modern power, violence has done all sorts of things. Whatever else it is, it is not mere or simple futility or destruction. Name me a thing you like about the contemporary world and I’ll wager that violence–state violence, collective violence, individual violence–played a generative role in producing it.

Certainly freedom and justice. Even, and perhaps especially, peace.


Calibrating the Classroom

I’ve done uncompensated overload teaching in the past and I suspect I’ll do it again in the future. Now I admit I’ve cried “uncle” a few times when my service load and course load have conspired to make my work week unmanageable, but really, I have no complaints about teaching the load that I do. I want to. I genuinely find teaching satisfying and I genuinely resent it when some aspect of my service work more or less requires that I have a course release.

It’s not just that I like the activity of teaching, but also that I use my courses as a thinking device, a chance to see how various materials strike others, to see if there are new ways to communicate or produce knowledge, to do a test run on my own writing or arguments.

The mid-summer is the time that I often start seriously thinking about the coming year’s teaching, and particularly when I’m coming back from a sabbatical I like to think about whether I’m going to make any serious structural changes to the way I teach. I’ve tried a few things in past years that I haven’t liked so much in practice. For example, in a previous 3-year cycle of courses, I tried some classes where student research teams would choose some of the reading material for everyone else in the last quarter or so of the semester. The results were a bit too uneven for my tastes, though I still like the concept behind it.

This time I’m planning to try using PowerPoint as a supplement to my lectures in at least one course. I’m just thinking that I miss the opportunity to display genuinely useful visual information far too often, and that’s what I want to use it for, mostly. But also I’m thinking it will discipline some of my lecture prep a bit, force me to think farther ahead about bringing extra material, providing definitions of terms, and so on. I don’t want to just display the outline of my lecture as I talk, that’s no good.

The other two issues I'm considering are grade inflation and discussion management. I have begun to feel a bit as if we're not asking enough of our students in the humanities and social sciences here. It's not necessarily that I want to just mechanically knock my grades down a peg; it's more that I want to be a bit more sharply pointed and less generically encouraging in the kinds of feedback that I give, with some slightly more drastic consequences when somebody really blows off an assignment. Right now I tend to unload a B minus or C plus on weak work, a B on mediocre work: that's what I'd like to adjust, to put some teeth into it. A B should mean something: right now in my own grading, it can describe lazy work by a strong student or decent work by a weak one. I need to differentiate between those, and the first type needs to get a swifter kick in the pants.

Still, when I look at all the inquiry into grade inflation, I do wonder if sometimes the simpler explanations are overlooked. For one, the lack of explicit discussion of pedagogy in graduate training is not a recent shift in academic life: it runs very deep, back into the old days when average grades were much lower. As a result, then and now, you tend to start your teaching career in higher education with almost no sense of what other people are doing in terms of grading or how they do it. You get a vague, possibly erroneous sense of what the local norm is and you try to hit close to it. That’s a system which is almost intrinsically vulnerable to positive feedback effects. If the perceived norm drifts even slightly in one direction, that drift is going to feed on itself, and push the entire system towards an attractor. It doesn’t require any deeper underlying explanation or intent, as long as there’s nothing that “pushes back” or corrects on the system. I think to some extent providing extensive information about grade inflation and the distribution of grades within an institution is just such a corrective, and now that many institutions are doing that regularly, I suspect that there will be some push-back.
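Just to make that feedback intuition concrete, here's a toy simulation of norm drift (the model and all of its numbers are my own invention, purely illustrative, not drawn from any data): each instructor grades near a noisy, slightly generous guess at the local norm, and next year's perceived norm is simply the average of what everyone did this year. Without a corrective the norm ratchets steadily upward; with even a modest institutional push-back it stabilizes.

    # Toy model of grading-norm drift; every parameter here is invented for illustration.
    import random

    def simulate(years=40, instructors=50, bias=0.02, noise=0.1, pushback=0.0):
        norm = 2.8  # the perceived local norm, expressed as a mean grade
        for _ in range(years):
            # each instructor grades near a noisy, slightly generous estimate of the norm
            grades = [norm + bias + random.gauss(0, noise) for _ in range(instructors)]
            # next year's perceived norm is this year's average, minus any
            # institutional correction back towards the original norm
            norm = sum(grades) / instructors - pushback * (norm - 2.8)
        return norm

    print(round(simulate(pushback=0.0), 2))  # drifts well above 2.8 after forty years
    print(round(simulate(pushback=0.5), 2))  # hovers near 2.8 once something pushes back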

More pragmatically, there's also a simple reason why assistant professors grade more generously than associates, on average. It's not necessarily a calculating attempt to get high evaluations from students (though surely that is part of it). It's also that you have an incredibly small data set on which to base any kind of comparative idea of your grades. If you're evaluating students subjectively (as surely we must in the humanities), you have no idea how the first twenty or fifty or two hundred students you teach in a single year compare to the larger universe of your potential students over a longer interval of time. I keep altering my sense of who the best 1% of my past students are. In my first year of teaching, there were students I would have classed as such whom I would now regard as great students, but not especially distinguishable from the top 20% of students I've taught. A few of the students that I described as among the very best I had taught then remain among the very best now, but the level of confidence I bring to that judgement is much stronger. If I want to make the consequences for weak or mediocre work a bit sharper now, it's because I have a far more precise sense of where the dividing line is between mediocrity and strong competency at this institution.

As far as discussion management goes, the big thing for me is that on one hand, I want to learn to endure silences a bit more often. I still have a terrible tendency, after many years, to anxiously fill a silence with a leading prompt that gets the discussion to where I feel it must go. On the other hand, I think I want to be ever-so-slightly less generous to students who like to talk a lot without having engaged the material. There are one or two in every class, and I tend to try to make neutrally encouraging remarks whenever anybody speaks up. I'm not about to cut people off at the knees; it's not my style. But I do think I want to challenge students a bit more when they make straightforwardly incorrect statements about the material because they're bluffing their way through it. Socratic pedagogy has its limits, much as I find it a comfortable way to go much of the time.
