The Potential Condescension of “Informed Consent”

Many years ago, I was involved in judging an interdisciplinary grant competition. At one point, there was an intense discussion about a proposal in which part of the project involved ethnographic research concerning illegal activity in a developing country. We were all convinced of the researcher’s skill and sensitivity, and the topic itself was unquestionably important. We were also convinced that the project was feasible and that the researcher could handle the immediate issues of safety, both for the researcher and for the people being studied. The disagreement was about whether the subjects could ever give “informed consent” to being studied in a project that might ultimately reveal enough about how they conducted their activities to put them at risk no matter how carefully the researcher disguised the identities of the informants.

I had to acknowledge that there was potential risk. When I teach Ellen Hellmann’s classic sociological study of African urban life, Rooiyard, I point out to the students that she learned (and disclosed) enough about how women carried out illegal brewing to potentially help the authorities disrupt those activities, which is one of the reasons (though surely not the only one) for the suspicion with which, by her own account, Hellmann was regarded.

But I thought this shouldn’t be an issue for the group because I believed (and still believe) that the men being studied could make up their own minds about whether to participate and about the risks of disclosure. Several of the anthropologists on the panel disagreed strongly: they felt that there was no circumstance under which these non-Western men in this impoverished society could accurately assess the dangers of speaking further with this researcher (who already knew the men and had done work with them on other aspects of their social and cultural lives). The disparities in power and knowledge, they felt, made something like “informed consent” impossible. Quite explicitly, my colleagues were saying that even if the men in the study felt like it was ok to be studied, they were wrong.

Now this was long enough ago that on many campuses, Institutional Review Boards were only just getting around to asserting their authority over qualitative and humanistic research, so in many ways our committee was providing that kind of oversight where it did not yet exist on individual campuses. Over multiple years of participating, I only saw this kind of question come up three or four times, and this was the most “IRB-like” of all these conversations.

I was alarmed then and have remained alarmed at the potential for unintended consequences from this perspective. Much as we might like to blame those consequences on bureaucratic overreach or administrative managerialism, which today often functions as an all-purpose get-out-of-jail-free card for faculty, the story at least starts with wholly good intentions and a generative critique of social power.

From a great many directions, academics began to understand about forty years ago that asymmetries of power and wealth didn’t simply disappear once someone said, “Hey, I’m just doing some research”. There were a great many critical differences between an ethnographic conversation involving an American professor and an African villager on one hand and a police interrogation room on the other, but those differences didn’t mean that the former situation was a frictionless meeting between totally equal people who had just decided to have a nice conversation about a topic of mutual interest.

The problem with proceeding from a more self-aware, self-reflexive sense of how power pervades all social relations and interactions to a sense that everyone with less power must be protected from everyone with more power is that this very rapidly becomes a form of racism or discrimination vastly more objectionable than the harm it purports to prevent. What it leads to is a categorical assertion that entire groups of people are systematically less able to understand and assess their self-interest, less able to understand the consequences of their actions, less able to be trusted with their own agency as human beings. The difference between this view and the imperial, racist view of colonial subjects is small to nonexistent. Yes, there may be contexts like prisons or the aforementioned interrogation room where it takes specific attention to protect and recognize moments of real consent and communication, but it is important that we see those contexts as highly specific and bounded. There are moments where it is strategically, ethically, and even empirically important to defend universals, and this is one of them. Subjectivity varies, but the rights and prerogatives of modern personhood should be assumed to apply to everyone.

A good researcher, in my experience, knows when something’s been said in a conversation that it’s best not to translate into scholarship. Much as a good colleague knows when to keep a confidence that they weren’t directly asked to keep. We’re all sitting on things that were said to us in trust, sometimes by people who were trying to impress us or worried about what we might think, that we never use and often consciously try to forget that we heard. The problem occurs when this kind of sensitive, quintessentially situational judgment call gets translated into a rule, a committee, a structure, a dictum because we’re afraid of, and occasionally encounter, a bad researcher (or a good one who makes a bad judgment call).

I accepted my colleagues’ call in that long-ago conversation, though I thought and still think they were wrong, because it was one project being evaluated in one discussion for one organization. I don’t accept it when I think the call is being made categorically, in whatever context. If you want an example of what can happen when that sort of view of human subjects settles in to stay and becomes a dictum, consider the recent distinction between an American doctor being judged capable of giving informed consent to taking an experimental drug for Ebola and a Sierra Leonean doctor being judged incapable of giving it.


Feeling For You

Just about every day, my social media feeds surge at some point with anger at judgmental comments, sometimes specific comments by a public figure, sometimes collections or assemblies of common forms of implied or ‘polite’ judgmental remarks directed at entire groups of people, aka microaggressions.

If you have a wide enough range of social groups and people represented in your feeds, you will sooner or later hear one group of people saying some of the things in an untroubled or unselfconscious manner that fuel anger over in another group. Very rarely will the two groups actually be talking to each other, however, unless you choose to identify yourself as the Venn overlap and expose them to one another. Most of us know that little good can come of that: more typically, if you’re in basic agreement with the angry people, the simultaneity of conversations may spark you to unfriend or unsubscribe.

You have to have a really wide-ranging network of social media contacts or a really expansive taste for political and social variety to encounter certain overlaps. Almost everyone in my Facebook network is careful about any comments on race, or speaks from within one major discourse that is strongly critical of racial supremacy and racial injustice, which I’m sure says something about my own professional and personal identity. Generally, I only see some of that overlap when one of my few friends who has a sizeable following to the right says something that seems too liberal in racial terms for the rest of his or her followers.

One place where I do see circles like this in my feeds is between two rivalrous groups that are deliberately working to avoid any kind of intimate or insider understanding of (and thus possible sympathy for) one another. The obvious case is Palestinians (and their sympathizers) and Israelis (and their sympathizers), but in a more quotidian vein I see it between faculty and administrators (though the latter group tend to be much more circumspect about expressing anything in social media, which I think is a pity).

Where I’m more likely to see this kind of overlap is in comments about body size/body image, mental health, parenting and family, age and youth, and in certain discourses about gender but not others. If I had to sum it up: in conversations about other people that concern attributes and experiences historically associated with the private, domestic and personal.

One example. What I see in this instance is one cluster of people for whom the existence of judgmental comments about body size and shape is powerfully explained by their views about social justice and discrimination. And then another group of people who unselfconsciously talk about weight and body size and exercise in terms of public health and private happiness. The second group is barely aware that the first group exists, and if they were aware, would regard the first group as risible or extreme. The second group is also often politically progressive, and its members regard their views on body size and health as an outgrowth of other commitments they have to avoiding mass-produced food, to self-care and autonomy, to environmental justice and much else.

As an overweight person, I’m sympathetic to the first group. I’m often a bit stunned at how colleagues and acquaintances I know who would absolutely flip out if anyone “microaggressed” in their presence about race, gender or sexuality have zero problem asking me about my diet, commenting on my weight, wondering whether I’m healthy, or in one case, poking me in the belly several times during a conversation and saying, “How about that, eh?”

On the other hand, for all sorts of reasons, I don’t really feel like signing on with the conventional set of moves made within identity politics on this issue. Much of that is because I’m not really the kind of person who suffers the serious consequences of body-image or body-shape discrimination: as always, white men get away with stuff that other people can’t. But it’s also that I just am not prepared to identify with or claim anything that’s based on the fact that other people feel it’s ok to be stunningly rude and actually touch my body, even though I find that always annoying and sometimes emotionally distressing.

I am more interested in figuring out what’s going on in this and many other cases, and the assumption that this is all part of a coherent structure for the maintenance of discriminatory power seems premature to me, and seems to interfere with that investigation.

It’s relevant today with the undercurrent here and there of a few people expressing anger, rather than sadness, at Robin Williams for committing suicide. We hear that kind of expression about public figures, more or less depending on how what they did relates to conventional wisdom. Why didn’t this person do that? Why is that person doing that thing? What’s wrong with them? Sometimes the discourse is fairly unanimous that it’s ok to pass judgment (say, on Justin Bieber); sometimes the discourse is fairly unanimous that only an asshole would say something like that (say, on Robin Williams). The most interesting cases for me are when opinion is not only evenly divided but the two groups are not really talking to one another.

There are days where I feel a sort of generic libertarianism is the right answer to all of this discourse, to all those circles in my feeds where someone is concerning themselves with another person’s body, behavior, looks. Just tend to your own knitting, judge not lest ye be judged, beam in your eye, all that.

But not only is that an impossible prescription to live up to, it’s too incurious. Why do circles form where it’s not only permitted but almost mandatory to pass judgment on some group or behavior? The conventional answer in most identity politics is that the judgments are produced by an infrastructure of stereotypes that is a functional part of structures of discrimination. E.g., that dominant groups use such judgments (and communicate them through microaggressions) in order to buttress their own power and status.

I think that’s part of the story, but when you wander away from the histories and structures whose connections to power and injustice blaze in neon, when you wander into that more personal, domestic, private space, I think some other dimensions crop up as well.

When the streams do cross and someone in a group or a discussion suddenly says, “Actually, I feel pretty hurt or offended by the way you folks are talking about this issue, because I’m actually the thing you’re talking about”, what happens? Sometimes people make non-apology apologies (“sorry that you’re offended”), sometimes people double down and say, “You’re crazy, there’s nothing offensive about talking about X or Y”. A turn or two in the conversation, though, and what you’ll often hear is this: “Look, I just care about you and people like you. So I want to help.” (Or its close sibling: “Look, not to insult you personally, but people like you/behavior like that costs our society a lot of money and/or inflicts a lot of pain on other people. Don’t you think it would be better if…”)

I’d actually like to concede the sincerity of that response: that we get drawn into these discussions and the judgments they create out of concern for other people, out of concern for moral and social progress. That we feel passionately about people who let their children go to the park by themselves, about people who train their children to go hunting, about people who are overweight, about people who drive big SUVs, about people who play their radios too loudly in their cars, about people who buy overly expensive salsa, about people who play video games, about people who raise backyard chickens, about people who demand accommodations for complex learning disabilities, about people who follow the fashion industry, about people who post to Instagram, about people who feed their kids fast food twice a week to save time, and so on.

I’d like to concede the sincerity, but the problem is that most of these little waves of moral condemnation or judgmental concern don’t seem to be particularly compassionate or particularly committed. The folks who say, “I just want to help, because I care about you” show no signs of that compassion otherwise. They usually aren’t close friends of the person they’re commenting on, and they usually have little empathy or curiosity overall. The folks who say, “Because I care about progress, about solving the bigger problem” don’t show much interest in that alleged bigger problem. The person who hates big SUVs because they’re damaging the environment is often environmentally profligate in other ways. And if the SUV-judger is consistently environmentally sensitive, some other aspect of their concern for the world, their vision of a better society, may be woefully out of synch or weakly developed.

The people I know who really care about others generally aren’t the people going on Facebook to say, “Man, I’m sick of people hiding behind claims of depression” or “If I meet another mother who thinks it’s ok to bring cupcakes to my child’s class, I’m going to go berserk”. The people I know who really think about incremental moves to improve the world don’t get hung up on passing judgment on someone they’ve witnessed fleetingly in public.

I’m in strong agreement with the idea that there is no such thing as “reverse racism”, if by that we mean the capacity for a white person to suffer systematic consequences for being white. Even if a white person works in a specific context where the professional consequences of felt animus towards whites might have an impact on them, that’s still a very limited and constrained kind of consequence.

But any single individual can deliver emotional suffering to any other individual, sometimes consciously and directly, other times without any awareness of doing so. My feeds are lighting up right now with very well-meaning people reminding everyone that middle-aged men, including white and wealthy men, are both prone to depression and prone to keeping their feelings private. The categorical part of that point is sociological. It’s the same way that we rightfully identify the problems that our society suffers from and ought to confront. But the compassionate part of that point might be to think about specific individuals with whom we’re in specific social connection. To be aware that we can always hurt someone else, that we have hurt someone else. Sometimes that’s not our fault, and sometimes what we said was needful or important or defined our own freedom to express and imagine and explore. In a world full of familiar strangers and strange friends, there’s no way to anticipate all the minds and hearts that might be touched by what we say and do. There are ways, though, to be mindful of the possibilities.

This is something that many of us found out in the first wave of going online. The classic sequence was that first we all self-disclosed and felt a sense of intimacy, but not because we knew the other people in the conversation as people. We did it because of a sense of anonymity: talking with a million other strangers was like shouting across a cliff in a wilderness. Who was there to remember? And then we discovered that the familiar strangers could actually reply and engage in dialogue. Some of them said things that we hated or disagreed with, so we unloaded on them with greater and greater intensity. Many of us still do that in various online hang-outs. Sooner or later, most of us discovered the hard way that the person on the other end was real. Sometimes we found, painfully, that their reality was radically other than their online persona. That the person who engaged everyone with their tales of being victimized by a family member or otherwise was the victimizer, the person dying of cancer wasn’t, the person who spoke with authority knew nothing. Sometimes we found, equally painfully, that the person we’d attacked or disparaged or belittled was writhing in emotional pain about it. Or that someone we’d never thought we were attacking had felt that way.

Social problems, oppression, injustice: we shouldn’t apologize ever for trying to engage them and change our world. When we justify what we say by claiming a sense of compassion: I just want people to be well, I just want children to be raised in a way that makes them happy and strong, I just want people to be more considerate of others, I just want people to know that actions have consequences? Then I think we have to be sure that it’s compassion we’re speaking from, rather than an insecure attempt to assure ourselves of our own superiority to others. Compassion, it seems to me, grows not from judgment but from curiosity. Not from certainty, but from humility. I’d love to see social media feeds where the Venn overlap on “curiosity” and “humility” in my various circles was one hundred percent.


The Listicle as Course Design

I’ve been convinced for a while that one of the best defenses of small classes and face-to-face pedagogy within a liberal arts education would be to make the process of that kind of teaching and coursework more visible to anyone who would like to witness it.

Lots of faculty have experimented with publishing or circulating the work produced by class members, and many have also shared syllabi, notes and other material prepared by the professor. Offering the same kind of detailed look at the day-to-day teaching of a course isn’t very common and that’s because it’s very hard to do. You can’t just videotape each class session: being filmed would have a negative impact on most students in a small 8-15 person course, and video doesn’t offer a good feel for being there anyway. It’s not a compressed experience and so it doesn’t translate well to a compressed medium.

I have been trying to think about ways to leverage participation by crowds to enliven or enrich the classroom experience of a small group of students meeting face-to-face and thus also give observers a stake in the week-by-week work of the course that goes beyond the passive consumption of final products or syllabi.

In that spirit, here’s an idea I’m messing around with for a future course. Basically, it’s the unholy combination of a Buzzfeed listicle and the hard, sustained work of a semester-long course. The goal here would be to smoothly intertwine an outside “audience” and an inside group of students and have each inform the other. Outsiders still wouldn’t be watching the actual discussions voyeuristically, but I imagine that they might well take a week-to-week interest in what the class members decided and in the rationale laid out in their notes.

——————–

History 90: The Best Works of History

Students in this course will be working together over the course of the semester to critically appraise and select the best written and filmed works that analyze, represent or recount the past. This will take place within a bracket tournament structure of the kind best known for its use in the NCAA’s “March Madness”.

The initial seeding and selection of the works to be read by class members will be open to public observers as well as to enrolled members of the class. The professor will use polls and other means for allowing outside participants to help shape the brackets. One side of the bracket will be works by scholars employed by academic institutions; the other side will be works by independent scholars, writers, and film-makers who do not work in academia.
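To make the bracket mechanics concrete for readers who don’t follow March Madness, here is a minimal sketch, purely illustrative and not part of the course materials, of how first-round matchups on one side of a seeded bracket might be generated (the seeded “works” below are placeholders, not an actual reading list):

```python
# Illustrative only: standard single-elimination seeding, where the top
# seed meets the bottom seed, the second seed meets the second-to-last,
# and so on down the list.

def first_round_pairings(seeded_works):
    """Pair seed 1 with seed N, seed 2 with seed N-1, etc."""
    n = len(seeded_works)
    return [(seeded_works[i], seeded_works[n - 1 - i]) for i in range(n // 2)]

# Placeholder titles standing in for the eight academic-side works.
academic_side = [f"Academic work, seed #{k}" for k in range(1, 9)]

for a, b in first_round_pairings(academic_side):
    print(f"{a}  vs.  {b}")
```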

The first four weeks of the class will be spent reading and discussing the nature of excellence in historical research and representation: not just what “the historian’s craft” entails, but even whether it is possible or wise to build hierarchies that rely on concepts of quality or distinctiveness. Class members will decide through discussion what they think are some of the attributes of excellent analytic or representational work focused on the past. Are histories best when they mobilize struggles in the present, when they reveal the construction of structures that still shape injustice or inequality? When they document forms of progress or achievement? When they teach lessons about common or universal challenges to human life? When they amuse, enlighten or surprise? When they are creatively and rhetorically distinctive? When they are thoroughly and exhaustively researched?

At the end of this introductory period, students will craft a statement that explains the class’s shared criteria, and this statement will be published to a course weblog, where observers can comment on it. Students will then be divided into two groups, one for each side of the bracket. Each group will read or view several works each week on its side of the overall bracket. During class time, the two groups will meet to discuss their views about which work in each small bracket should go forward in the competition and why, taking notes which will eventually be published in some form to the course weblog. Students will also have to write a number of position papers that critically appraise one of the books or films assigned for the coming week and that examine some of the historiography or critical literature surrounding that work.

The final class meeting will bring the two groups together as they attempt to decide which work should win the overall title. In preparation, all students will write an essay discussing the relationship between scholarly history written within the academy and the production of historical knowledge and representation outside of it.


On The Invisible Bridge

I’ve been following some of the discussion about Rick Perlstein’s new book on the 1970s.

I agree with many scholars that the basic problem with online endnotes is the persistent danger of the main text and the sourcing becoming disconnected over time unless there’s a heavily institutionalized plan for any necessary migration of the citations. At this point, there’s really nothing of the sort, so I think both publishers and authors would be well-advised to just stick with putting the notes in the printed text.

I’m guessing that Rick Perlstein might be wishing he’d done just that at this point. It’s not clear that it would have protected him from the basically spurious claim that his new book plagiarizes an earlier book by Craig Shirley, but from the current state of the back-and-forth between the two authors and their various defenders and lawyers, it may be that Shirley jumped to the conclusion that Perlstein had paraphrased him without proper attribution because the numerous attributions were in the online endnotes. It’s more likely, though, that Shirley objected because Perlstein looked at the same things that Shirley did and came to very different conclusions. Following the discussion online and looking at the evidence, I really don’t see anything that I would call plagiarism and not much that I would even call careless.

I’m just starting the book for its actual content, but I’m sympathetic to David Weigel’s suggestion that Perlstein is being targeted because Reagan is a more sacred figure for contemporary cultural conservatives than Goldwater or Nixon. Most of them abjure Nixon as a RINO, if they remember him at all, and Goldwater is at this point as relevant to many of them as Calvin Coolidge. Many current conservatives, however, have a strongly vested interest in not remembering Reagan in his actual context, where he presents some real puzzles in terms of our contemporary moment.

For me, though, the persistent argument I like most in Perlstein’s previous two books applies with more force to progressives than to conservatives. I suspect his new book will continue the general thrust of his analysis in this respect. I think Perlstein shows (and means to show) that postwar American conservatism has surprisingly extensive and complex social roots and that at least some of its social roots have a kind of genuine “from below” legitimacy. This might account for why his previous two books initially received appreciative readings from conservatives, in fact.

In his book on Goldwater, Perlstein documents, among other things, that one of Goldwater’s enduring sources of support was from small business owners, especially away from the major coastal cities. I read Perlstein as being genuinely surprised not only that there was a sort of coherent social dimension to this vein of support but that the antipathy of this group towards the federal government had some legitimacy to it, primarily because as federal authority expanded after the war, small businesses got hit with a wave of regulatory expectations that had a serious economic impact on them.

In general in his books, Perlstein does a great job of careful investigatory attention to the social origins of conservative sentiment and ideology, and then couples that investigation to a critical appraisal of how political elites and party leaders reworked or mobilized those sentiments. The layered account he gives of the rise of postwar conservatism explains a great deal about how we got to the point we’re at today. While he’s not at all sympathetic to either the content or the consequences of conservatism as he describes it (then and now), what I think his account comprehensively rebukes is the kind of progressive response to right-wing political power that falls back on tropes like “astroturfing” or that otherwise assumes that conservatism is the automated, inorganic response of a dying demographic to the loss of social power, that there is nothing real to it or that its reality is simple and self-interested.

I remarked briefly on Twitter that I think most of Perlstein’s progressive fans miss the implications of his work in this respect (and he replied that this needed more than 140 characters for him to make sense of my point). In a way, I’d see Perlstein’s work as a modern companion to the richer kinds of histories of “whiteness” that Nell Irvin Painter, David Roediger and Noel Ignatiev have written, none of which encourage us to see whiteness as a subjectivity or social formation that was defined solely by instrumental self-interest or that was constructed entirely “from above” with conscious design.

The first implication of an analysis like Perlstein’s for actual participation in contemporary politics would be to peel apart the sources of historical and social energy within your opposition and look carefully at where there are real and imagined grievances that you can actually appreciate, address or be in conversation with. Communitarians have one axis of sympathy they can try to traverse; liberals and libertarians another.

The second is to never assume that the charge of astroturfing does much of anything to advance a meaningful politics or to explain why things actually happen in elections, in governance, in popular consciousness: that is the move of a largely intra-elite war of position that gains inches at best, not yards. Focusing on astroturfing, even when it is undoubtedly happening and has significance for controlling dominant “framing” narratives that influence politics, is mostly an alibi for not doing the much harder work of understanding what’s happening in the larger lived experience of communities and regions. The astroturfing charge is ultimately a sort of degeneration of an older left belief in ideology, a belief that coherent formations of thought and belief crafted by self-conscious elites then structured consciousness and directed political action outside the elite. Thus you get folks like Thomas Frank thinking that losing Kansas is largely a matter of dastardly hegemons cunningly and deliberately blinding people to their authentic self-interest, rather than a slower organic history in which people connected some existing religious, cultural, and social convictions to an increasing disenchantment with the role of the state in their everyday lives, a connection that they have held to with some degree of deliberate agency.

Third, stop assuming that postwar conservatism’s content is wholly protean or arbitrary. “Big government” in this sense may be in all sorts of ways a really messed-up construction that obscures the degree to which mostly-conservative voting districts are actually the enthusiastic recipients of all sorts of public money, but it’s not a random or senseless trope at its origin point, either, at least not as I read the history that Perlstein so ably distills. Which doesn’t mean that the social reality of its derivation is positive, either, since at least one of the aspects of “government” that had become an issue by the early 1970s in Perlstein’s view, as per Nixonland, is its interventions into the political economy of race and racial discrimination.

Fourth, restore some contingency to the story. Perlstein is very good on this in particular when he’s talking about political elites, politicians and party leaders, showing that the fusion of popular and party agendas was full of false starts, unpredictable gambits, and improvisations.

All of which implies to me that progressives today habitually underestimate the historicity, rootedness and local authenticity of what they regard as conservatism, and therefore mostly end up stuck with intra-elite theaters of struggle and debate within familiar institutions and communities, all the while misperceiving those theaters as more than they really are. I’ll be curious to see whether this part of what I see in Perlstein’s history changes as we move into his “invisible bridge”.


Playing the Odds

The idea that higher education makes you a better person in some respect has long been its soft underbelly.

The proposition makes most current faculty and administrators uncomfortable, especially at the smaller teaching-centered colleges that are prone to invoke tropes about community and ethics. The discomfort comes both from how “improvement” necessarily invokes an older conception of college as a finishing school for a small, genteel elite and from how genuinely indispensable it seems for most definitions of “liberal arts”.

Almost every attempt to create breathing room between the narrow teaching of career-ready skills and a defense of liberal arts education that rejects that approach is going to involve some claim that a liberal arts education enlightens and enhances the people who undergo it in ways that aren’t reducible to work or specific skills, that an education should, in Martha Nussbaum’s words, “cultivate humanity”.

This is part of the ground being worked by William Deresiewicz’s New Republic critique of the elitism of American higher education. One of the best rejoinders to Deresiewicz is Chad Wellmon’s essay “Twilight of an Idol”, which conjoins Deresiewicz with a host of similar critics like Andrew Delbanco and Mark Edmundson.

I see much the same issue that Wellmon does, that most of these critiques are focused on what the non-vocational, non-instrumental character of a college education was, is and should be. Wellmon and another critic, Osita Nwanevu, point out that there doesn’t need to be anything particularly special about the four years that students spend pursuing an undergraduate degree. As Wellmon comments, “There is, thankfully, no going back to the nineteenth-century Protestant college of Christian gentlemen. And that leaves contemporary colleges, as we might conclude from Deresiewicz’s jeremiad, still rummaging about for sources of meaning and ethical self-transformation. Some invoke democratic citizenship, critical thinking, literature, and, most recently, habits of mind. But only half-heartedly—and mostly in fundraising emails.”

Half-heartedly is right, precisely because most faculty know full well that all the substitutes for the older religious or gentlemanly ideals of “cultivation” still rest upon and invoke those predicates. But we can’t dispense with this language entirely, because we have nothing else that spans academia that meaningfully casts shade on the instrumental, vocational, career-driven vision of education.

The sciences can in a pinch fall back on other ideas about utility and truth: their ontological assumptions (and the assumptions that at least some of the public make about the sciences) are here a saving grace. This problem lands much harder on the humanities, and not just as a challenge to their reproduction within the contemporary academy.

I wrote last year about why I liked something Teju Cole had said about writing and politics. Cole expressed his disappointment that Barack Obama’s apparent literacy, his love of good books, had not in Cole’s view made Obama a more consistently humane person in his use of military power.

I think Cole’s observation points to a much more pressing problem for humanistic scholars in general. Intellectuals outside the academy have been and still are under no systematic pressure to justify what they do in terms of outcomes. As a novelist or essayist or critic you can be a brutal misanthropist, you can drift off into hallucinogenic dream-states, you can be loving or despairing or detached. You can claim your work has no particular instrumental politics or intent, or that your work is defined by it. You don’t have to be right about whether what you say you’re doing is in fact what you actually do, but you still have a fairly wide-open space for self-definition.

Humanists inside the academy might think they have the same freedom to operate, but that clashes very hard with disciplinarity. Most of us claim that we have the authority that we do because we’ve been trained in the methods and traditions of a particular disciplinary approach. We express that authority within our scholarly work (both in crafting our own and in peer reviewing and assessing the work of others) and in our curricular designs and governance. And most of us express, to varying degrees, a whiggish or progressive view of disciplinarity, that we are in our disciplines understanding and knowing more over time, understanding better, that we are building upon precedent, that we are standing on the shoulders of someone–if not giants, at least people the same size as us. If current disciplinary work is just replacing past disciplinary work, and the two states are essentially arbitrary, then most of our citational practices and most of our curricular practices are fundamentally wasted effort.

So if you’re a moral philosopher, for example, you really need to think in your own scholarly work and in your teaching of undergraduates that the disciplined study of moral philosophy provides systematic insights into morality and ethics. If it does, it shouldn’t seem like a big leap to suggest that such insight should allow those who have it to practice morality better than those who have not. This doesn’t mean necessarily that a moral philosopher has to be more moral in the conventional terms of a dominant moral code. Maybe the disciplinary study of morality and ethics leads scholars more often to the conclusion that most dominant moral codes are contradictory or useless. Or that morality is largely an arbitrary expression of power and domination. Doesn’t really matter what the conclusions are, just that it’s reasonable to think that the rigorous disciplinary study of morality through philosophy should “cultivate the humanity” of a moral philosopher accordingly.

But if you’ve known moral philosophers, you’ve known that there is not altogether much of a notable difference between them and other academics, between them and other people with their basic degree of educational attainment, between them and other people with the same social backgrounds or identities, between them and other people from the same society, and so on, in terms of morality and ethics. It seems to me that what they know has strikingly little effect on who they are, how they act, what they feel.

Many humanist scholars would say that reading fiction gives us insights into what it means to be human, but it’s pressingly difficult to talk about what those insights have done to us, for us, to describe what transformations, if any, we’ve undergone. Many historians would argue that the disciplined study of history teaches us lessons about the human condition, about how human societies navigate both common social and political challenges and about what makes the present day distinctively different from the past.

I’m often prepared to go farther than that. Many of my colleagues disliked a recent assessment exercise here at the college where we were asked about a very broad list of possible “institutional learning goals”. I disliked it too, mostly because of how assessment typically becomes quantitative and incremental. I didn’t necessarily dislike the breadth, though. Among the things we were asked to consider is whether our disciplines teach values and skills like “empathy”. And I would say that yes, I think the study of history can teach empathy. E.g., that a student might, through studying history, become more able to feel empathy in a wider and more generative range.

The key for me is that word, “might”. If moral philosophers are not significantly more moral, if economists are not significantly more likely to make superior judgments about managing businesses or finances, if historians are not significantly better at applying what they know about past circumstances to their own situations, if literary critics don’t seem altogether that much better at understanding the interiority of other people or the meaning of what we say to one another, then that really does call into question that vague “other” that we commonly say separates a liberal arts approach to education from a vocational strategy.

No academic (I hope) would say that education is required to achieve wisdom. In fact, it is sometimes the opposite: knowing more about the world can be, in the short-term, an impediment to understanding it. I think all of us have known people who are terrifically wise, who understand other people or the universe or the social world beautifully without ever having studied anything in a formal setting. Some of the wise get that way through experiencing the world, others through deliberate self-guided inquiry.

What I would be prepared to claim is something close to something Wellmon says, that perhaps college “might alert students to an awareness of what is missing, not only in their own colleges but in themselves and the larger society as well”.

But my “might” is a bit different. My might is literally a question of probabilities. A well-designed liberal arts education doesn’t guarantee wisdom (though I think it can guarantee greater concrete knowledge about subject matter and greater skills for expression and inquiry). But it could perhaps be designed so that it consistently improves the odds of a well-considered and well-lived life. Not in the years that the education is on-going, not in the year after graduation, but over the years that follow. Four years of a liberal arts undergraduate experience could be far more likely to produce not just a better quality of life in the economic sense but a better quality of being alive than four years spent doing anything else.

I think I can argue that the disciplinary study of history can potentially contribute to the development of a capacity for empathy, or emotional intelligence, an understanding of why things happen the way that they do and how they might happen differently, and many other crafts and arts that I would associate as much with wisdom as I do with knowledge, with what I think informs a well-lived life. But potential is all I’m going to give out. I can’t guarantee that I’ll make someone more empathetic, not least because I’m not sure how to quantify such a thing, but also because that’s not something everybody can be or should be counted upon to get from the study of history. It’s just, well, more likely that you might get that than if you didn’t study history.

This sense of “might” even justifies rather nicely the programmatic hostility to instrumentally-driven approaches to education among many humanists. Yes, we’re cultivating humanity, it’s just that we’re not very sure what will grow from any given combination of nutrients and seeds. In our students or ourselves.

This style of feeling through the labyrinth gives me absolutely no title to complacency, however. First, it’s still a problem that increased disciplinary knowledge and skills do not give us proportionately increased probability of incorporating that knowledge into our own lives and institutions. At some point, more rigorous philosophical analyses about when to pull the lever on a trolley or more focused historical research into the genesis of social movements doesn’t consistently improve the odds of making better moral decisions or participating usefully in the formation of social movements.

Second, I don’t think most curricular designs in contemporary academic institutions actually recognize the non-instrumental portion of a liberal-arts education as probabilistic. If we did see it that way, I think we’d organize curricula that had much less regularity, predictability and structure–in effect, much less disciplinarity.

This is really the problem we’re up against: to contest the idea that education is just about return-on-investment, just about getting jobs, we need to offer an education whose structural character and feeling is substantially other than what it is. Right now, many faculty want to have their cake and eat it too, to have rigorous programs of disciplinary study that are essentially instrumental in that they primarily encourage students to do the discipline as if it were a career, justified in a tautological loop where the value of the discipline is discovered by testing students on how they demonstrate that the discipline is, in its own preferred terms, valuable.

If we want people to take seriously that non-instrumental “dark side of the moon” that many faculty claim defines what college has been, is and should remain, we have to take it far more seriously ourselves, both in how we try to live what it is that we study and in how we design institutions that increase the probabilities that our students will not just know specific things and have specific skills but achieve wisdoms that they otherwise could not have found.


It Is Better to Have Wanted and Lost Than Never to Have Wanted At All

Kickstarter is, not at all on purpose, saying some interesting things about this moment in the history of capitalism and about this moment in terms of the availability of disposable income.

About capitalism, I think this: people will give to Kickstarter even more than what they’d pay for the delivery of the product they’re backing if the product were available on a store shelf. Kickstarter is being used to signal desire. What’s striking is that it shows that consumer capitalism is in some sense just as hamstrung as the modernist state in its ability to deliver what people want and will pay for. All our institutions and organizations, of all kinds, are now tangled up in their own complexity, all of them are increasingly built to collect tolls rather than build bridges.

All that money spent on market research, on product development, on vice-presidents of this and that, and what you have, especially in the culture industry, is a giant apparatus that is less accurate than random chance in creating the entertainment or products that consumers can quite clearly describe their desire for. So clearly that the consumers are giving money to people they like who have no intention of making, or no ability to make, what the donors say they want. Because, rather like the lottery, at least you can imagine the chance of the thing you want coming into being. Waiting around for Sony or EA or Microsoft or Ubisoft to make it feels like an even bigger longshot.

Which also says something about money and its circulation. The crisis of accumulation isn’t just visible in the irrepressible return of subprime loans, or in the constant quest of financiers to find more ways to make money by speculating on the making of money by people who are making money. It’s even visible in more middle-class precincts. Who wants to invest a bit of spare cash in the long-term deal or the soberly considered opportunity now? It’s like waiting in line to deposit a small check while the bank gets robbed repeatedly.


Subtraction Stew

If the greatest trick the Devil ever played was convincing people he didn’t exist, the greatest trick of a certain kind of neoliberalism has been to convince people that in all circumstances and times we live in the shadow of austerity. Because once we accept that at all times budgets are parlous, resources are shrinking, and crisis lurks in every dark corner, austerity doesn’t have to be done to us any longer. We do it to ourselves: we create scarcity.

There are many institutions which really experience austerity and experience it as something external, something that’s done to them, something which does overt damage to programs and hidden damage to people. Sometimes the damage of austerity is inevitable and necessary. Faculty and students can protest, but if a university suddenly loses a large portion of its revenues, as some have in recent years, cuts will have to follow. If a university slowly loses students or public funds or has poorly-performing investments, that means cuts too, though at a slow enough pace that’s more like adaptation and less like the sudden shock of austerity.

The problem at many institutions, however, is that there isn’t enough information available to most employees, to most students, to most of the public, to know whether the cry of austerity is at all justified, and even less to know whether what’s being cut is what ought to be cut. Any leadership that claims austerity should also accept an extraordinary burden to demonstrate that the call is legitimate.

This is also where the frequent complaint against corporatization in academia has some of its most potently legitimate bite to it. Austerity in higher education is often justified in language remarkably similar to the way that corporations speak about layoffs and economizing.

It’s hard enough even for corporations to clearly understand where they’re actually making money and where they’re losing it, whether that’s a whole office or down to the level of individual workers. There are a host of ways for people who know how to manipulate information and to play office politics to pin the blame for losses on the wrong people or the wrong project. Business history is also full of executives who didn’t understand that a project which seemed to be losing money was really the key to long-term profits. But at least companies mostly have a single metric for deciding where the axe falls: are you making money or are you losing it?

That metric is self-evidently wrong for any non-profit organization, whether it’s a university, a hospital or a community group. All of those groups are provisioning goods that can’t be valued as profit and loss. They still must care about the sustainability of what they’re doing: they cannot spend more money than they have coming in the door. But in an austerity situation, where there is a sudden shortfall in revenues, a university can’t simply ask who isn’t making enough money and get rid of them.

It’s not just that this strategy contradicts the stated objectives of almost all educational institutions. It’s also that it is remarkably difficult to clearly and consistently demonstrate that some departments in a university are less valued or needed by students. It is completely fair to ask whether at some point employing expensive, highly-trained faculty to teach a minuscule handful of students is sustainable. But that can’t be the only measure of sustainability in an institution whose mission is not limited to the production of profit. More importantly, I’m not sure I can think of an institution that has imposed austerity where the accounting of sustainability has been remotely transparent and consistent. The judgment about what students or communities no longer need or want never seems to come from open scrutiny of the entire range of a university’s activities, both teaching and administrative. Nor does it ever seem to employ any real depth of thought or imagination.

Austerity always comes down the way a lion comes down on a herd of antelope: it catches whatever it can catch and then finds a post-facto logic for its kills.

————-

But the hidden injuries of austerity, as Matt Reed put it, afflict even institutions that haven’t had to go through any of that. Neoliberalism has made everyone think that they don’t have enough to go around, that we’re all struggling with scarcity. So even universities and colleges that are the equivalent of the 1% in their budgets and revenues spend right up to the limits of what they can afford, and then, rather than feeling happiness at their good fortune, complain about what lies just beyond the boundaries of their budgets.

I heard Eldar Shafir on Radio Times a while back. He’s a psychologist who has co-authored a recent study of the impacts of scarcity on human behavior. I was a bit skeptical about some of his arguments about how the experience of scarcity reinforces or causes poverty, but when Shafir changed gears to talk about how otherwise well-off people talk themselves into thinking they’re experiencing scarcity and how a belief in scarcity changes behaviors, I thought he was really on to something.

That kind of scarcity-thinking is a close cousin to desire, defining aspiration always by lack and absence. It happens at Swarthmore, it happens at other wealthy private colleges and universities. Faculty don’t take stock of the facilities they use every day, the small size of their classes, the excellence of their students, the generous support for their research, or the richness of the intellectual environment around them. Once they’re thinking scarcity, they only see the specialists they don’t have in their departments, the requests that they haven’t had answered, the research they didn’t get to do. More importantly, scarcity-thought paves the road to the war of all against all, to the zero-sum suspicion that whatever another department or unit or person gets must be a pre-emptive insult against one’s own aspirations and desires. Imagining scarcity atomizes and isolates; it embitters and diminishes. It becomes impossible under the sign of scarcity to take pleasure in growth or enrichment of resources elsewhere within the same institution. Everyone imagines in loneliness that they are on the downward spiral to deprivation.

Where scarcity-thought takes hold, it also leads people to lose any sense of proportional relation between what they have and what others in other situations have. Scarcity convinces itself that it is experiencing austerity, and mimics even its historical imagination. (Because austerity, whether genuinely necessary or a lie, is stuck with telling the story of the Golden Age, is condemned to look backward full of regret and envy.) It’s not that different from what happens when very wealthy people set their sights on someone wealthier still and see the gap between themselves and those others as illuminating the limitations of their own situation.

And I really think scarcity is something that’s been done to us all, not something we have arrived at independently. Neoliberalism tells the story of universal scarcity in part to explain why we can expect so much less of government, so much less of public institutions, so much less even of corporations. Because apparently no matter how much wealthier we get, we always have less and less. Subtraction stew indeed: every bowl you eat makes you hungrier than you were before.


King of Pain

As Jackson Lears and many other scholars and observers have noted, many Americans throughout the cultural history of the United States have accepted that the circumstances of life are inevitably determined by luck, that economic life is a matter of good or ill fortune. Which some have suggested explains the current popular aversion to increased taxation on the rich: even the poor think they have a chance of being rich someday, and want to keep all the imaginary money they might get.

I think there’s a less-told but equally important trope in the American imaginary: the loophole. The finding of the trick, the turning of the fine print back on the lawyer who wrote the contract. The victimless crime of cheating the government or the big company out of something it mindlessly and wastefully demanded of the little man. The free money, the thing that your friend fixed up for you. Topsy-turvy, the quick score that makes the smart and the sly rich without distress to anything. The beads-for-Manhattan.

It’s that last I’m thinking about when I think about King Jeremiah Heaton, who became Internet-famous for a few days when he travelled to southern Egypt to plant a homemade flag on a small area of land that he believed was unclaimed by any existing sovereign state and therefore his for the taking. All for the sake of his 7-year-old daughter, who wanted to be a princess.

There’s a lot to say about the story, most of it properly accompanied by much rolling of the eyes. But I do think Heaton is a canary in the coal mine of sorts, a window into a psychic cauldron seething inside the consciousness of a fading empire. Heaton himself invoked history in the coverage: what he did, others had done, he acknowledged, but they did it out of greed or hatred. He did it for love, he says, love of his daughter. But if ever “first time tragedy, second time farce” applied, this is it.

The basis of Heaton’s claim is the rarely-invoked principle of terra nullius, which, as several analyses point out, was one (though not the only) justification invoked by Western colonizers in their land claims after the 17th century. The hard thing about Heaton is that I can’t tell if he thinks this is a joke or not. He’s aware, in part because the press has queried him, that a flag and terra nullius mean precisely nothing if the claim is unrecognized by other states. I’m not sure he’s aware that Bir Tawil is terra nullius because Egypt and Sudan are still fencing with each other about their postcolonial border, and that to claim Bir Tawil is to cede a claim to another, far more valuable, unresolved territory to the east.

But even as a joke, it’s a very telling one, pursued with a level of earnestness in cost and effort that makes it seem a rather elaborate joke for an age where a silly YouTube video generally is as far as one need go. There are so many other things available in the treasure chest of American popular culture for a princess and her patriarch: the home-as-castle (another legal doctrine, even!), the imaginary kingdom in the backyard or the woods, the elaborate heritage fantasy complete with family crest and lost inheritances in the auld country. Americans make utopian communities and religious movements all the time. They go out into the wilderness that their own internal empire secured and made for them and make retreats and hermitages, towns and communes, pilgrimages and wanderings. What’s wrong with all that?

To say instead, “I shall go to Africa, plant a flag, claim a country, and as long as I’m at it, it will be a very nice kingdom that has some good agricultural development policies”? Well, that is not exactly a random idea, though I don’t get the sense that Heaton knows exactly who and what the other members of the club he’s trying to join actually are. But once upon a time this was the kind of fantasy that got people killed and maimed, and not just by aspiring Kings and their Princesses. For every Leopold of Belgium, there was a Leon Rom whose principality was small and short-lived. Some of the nineteenth- and twentieth-century men (and a smaller handful of women) who flocked to Africa looking for land they could imagine to be empty then demanded that new colonial states make it just that: emptied of human beings, who would return only as obedient laborers. Most of the new settlers were delusional in some way or another, but they wandered through a world where their dreams could spur nightmares.

That’s not going to be Heaton, but not because of any great understanding on his part. It’s just that in dreaming his little dream of a kingdom for his princess, he’s managed, in a little, inexpensive way, to show what it has otherwise taken the United States billions of dollars and tens of thousands of lives to demonstrate: that we are slipping into the fever-dream stage of superpowerdom, into a Norma Desmond haze so deep and foggy that we don’t even know any more what we don’t know. All we think is that somehow, out there, there must be a trick that gets it all back. A law, a loophole, some fine print. Some Manhattan that we can have for a few beads and a couple of pamphlets on using irrigation in agriculture.

Posted in Africa, Miscellany, Politics | 9 Comments

Fighting Words

Days pass, and issues go by, and increasingly by the time I’ve thought something through for myself, the online conversation, if that’s the right word for it, has moved on.

One exchange that keeps sticking with me concerns the recent report of the MLA Task Force on Doctoral Study in Modern Language and Literature and a number of strong critical responses to it.

One of the major themes of the criticisms involves the labor market in academia generally and in the MLA’s disciplines specifically. Among other things, this particular point seems to have inspired some of the critics to run for the MLA executive with the aim of shaking up the organization and galvanizing its opposition to the casualization of academic labor. We need all the opposition we can get on that score, though I suspect that should the dissidents win, they are going to discover that the most militant MLA imaginable is nevertheless not in a position to make a strong impact in that overall struggle.

I’m more concerned with the response of a group of humanities scholars published at Inside Higher Ed. To the extent that this response addresses casualization and the academic labor market, I think it unhelpfully mingles that issue with a quite different argument about disciplinarity and the place of research within the humanities. Perhaps this mingling reflects some of the contradictions of adjunct activism itself, which I think has recently moved from demanding that academic institutions convert many existing adjunct positions into traditional tenure-track jobs within existing disciplines toward a more comprehensive skepticism of, or even outright rejection of, academic institutions as a whole: scholarly hierarchies, the often-stifling mores and manners that attend on graduate school professionalization, the conventional boundaries and structures of disciplinarity, and so on. I worry about baby-and-bathwater as far as that goes, but then again, this was where my own critique of graduate school and academic culture settled a long time ago, back when I first started blogging.

But on this point, the activist adjuncts who are focused centrally on abysmal conditions of labor and poor compensation in many academic institutions are right to simply ignore much of that heavily freighted terrain, since what really matters is the creation of well-compensated, fairly structured jobs for highly trained, highly capable young academics. Beyond ensuring that those jobs match the needs of students and institutions with the actually existing training that those candidates have received, it doesn’t really matter whether those jobs exist in “traditional” disciplines or in some other administrative and intellectual infrastructure entirely. For that reason, I think a lot of the activists who are focused substantially on labor conditions should be at the least indifferent and more likely welcoming to the Task Force’s interest in shaking the tree a little to see what other kinds of possibilities for good jobs that are a long-term part of academia’s future might look like. Maybe the good jobs of the academic future will involve different kinds of knowledge production than in the past. Or involve more teaching, less scholarship. If those yet-to-exist positions are good jobs in terms of compensation and labor conditions, then it would be a bad move to insist that the only thing adjuncts can really want is the positions that once were, just as they used to be.

Nor should they welcome the degree to which the IHE critics conflate the critique of casualization with the defense of what they describe as the core or essential character of disciplinary scholarship.

The critics of the Task Force report say that the report misses an opportunity to “defend the value of the scholarly practices, individual and collective, of its members”. The critics say they are not opposed in principle to “innovation, expansion, diversification and transformation”, but that these words are “buzzwords” that “devalue academic labor” and marginalize humanities expertise.

Flexibility, adaptability, and evolution are later said to be words necessarily “borrowed” from business administration (here linking to Jill Lepore’s excellent critique of Clayton Christensen).

For scholars concerned with the protection of humanistic expertise, this does not seem to me a particularly adroit reading of a careful 40-page document and its particular uses of words like innovation, flexibility, or evolution. What gets discounted in this response is the possibility that there are any scholars inside of the humanities, inside of the MLA’s membership, who might use such words with authentic intent, for whom such words might be expressive of their own aspirations for expert practice and scholarly work. That there might be intellectual arguments (and perhaps even an intellectual politics) for new modes of collaboration, new forms of expression and dissemination, new methods for working with texts and textuality, new structures for curricula.

If these critics are not “opposed in principle” to innovation or flexibility, it is hard to find where in their view there is space for legitimate arguments about changes in either the content or organization of scholarly work in the humanities. They baldly assert, as common sense, propositions that are anything but: for example, that interdisciplinary scholarship requires mastering multiple disciplines (and hence that interdisciplinary scholarship should remain off-limits to graduate students, who do not have the time for such a thing).

If we’re going to talk about words and their associations, perhaps it’s worth some attention to the word “capitulation”. Flexibility and adaptability, well, they’re really rather adaptable. They mean different things in context. Capitulation, on the other hand, is a pretty rigid sort of word. It means surrendering in a conflict or a war. If you see yourself as party to a conflict and you do not believe that your allies or compatriots should surrender, then if they try to, labelling their actions as capitulation is a short hop away from labelling the people capitulating as traitors.

If I were going to defend traditional disciplinarity, one of the things I’d say on its behalf is that it is a bit like home in the sense of “the place where, when you have to go there, they have to take you in”. And I’d say that in that kind of place, using words that dance around the edge of accusing people of treason, of selling-out, is a lousy way to call for properly valuing the disciplinary cultures of the humanities as they are, have been and might yet be.

The critics of the MLA Task Force say that the Task Force and all faculty need to engage in public advocacy on behalf of the humanities. But as is often the case with humanists, it’s all tell and no show. It’s not at all clear to me what you do as an advocate for the humanities if and when you’re up against the various forms of public hostility or skepticism that the Task Force’s report describes very well, if you are prohibited from acknowledging the content of that skepticism or from attempting to persuasively engage it on the grounds that this kind of engagement is “capitulation”. The critics suggest instead “speaking about these issues in classes” (which links to a good essay on how to be allies to adjunct faculty). In fact, step by step, strong advocacy on labor practices and casualization is all that the critics have to offer. That’s a good idea, but it doesn’t cover at all the kinds of particular pressures being faced by the humanities, some of which aren’t confined to or expressed purely through adjunctification, even though those pressures are leading to the net elimination of jobs (of any kind) in many institutions. Indeed, even in the narrower domain of labor activism, it’s not at all clear to me that rallying against “innovation” or “adaptability” is a particularly adroit strategic move for clawing back tenure lines in humanities departments, nor is it clear to me that adjunct activists should be grateful for this line of critical attack on the MLA Task Force’s analysis.

Public advocacy means more than just the kind of institutional in-fighting that the tenurati find comfortable and familiar. Undercutting a dean or scolding a colleague who has had the audacity to fiddle around with some new-fangled innovative adaptability thing is a long way away from winning battles with state legislators, anxious families, pragmatically career-minded students, federal bureaucrats, mainstream pundits, Silicon Valley executives or any other constituency of note in this struggle. If the critics of the MLA Task Force think that you can just choose the publics–or the battlegrounds–involved in determining the future of the humanities, then that would be a sign that they could maybe stand to take another look at words like flexible and adaptable. It’s not hard to win a battle if you always pick the fights you know you can win, whether or not they consequentially affect the outcomes of the larger struggles around you.

Posted in Academia, Digital Humanities, Generalist's Work | 4 Comments

Of Shoes and Ships and Sealing Wax–and Commencement Speakers

I found myself really annoyed this past week as I came across case after case of faculty applauding the fate of commencement speakers like Robert Birgeneau and Christine Lagarde, and scolding William Bowen for scolding students for their scolding of Birgeneau.

The approving remarks from faculty have included some of the following points:

1) commencement is an empty ritual anyway, full of platitudes from various elites, and therefore who cares, it’s not an important venue for free speech or ideas in the first place

2) a commencement speech isn’t a debate or exchange of ideas, so Bowen’s full of shit when he says that students were trying to shut down a debate or exchange of ideas

3) students (and faculty) were just exercising their own academic freedom by criticizing or rejecting commencement speakers, you can’t tell them to be silent and say you’re championing free speech

4) all of the people criticizing these speakers were perfectly right to criticize them, they aren’t people that we should be honoring anyway

——–

What I want to do mostly is talk about what I think no one is talking about in this discussion, but as a prologue, here is a response to some of these critiques. The first thing to point out is that “we shouldn’t honor these people” and “commencement is empty and platitudinous” don’t add up very well: you can’t take the event seriously enough to worry about who is and is not honor-worthy and then dismiss it as pointless ritual.

#2 is really a bad point, because it applies to everything that isn’t immediately structured as a dialogue or an exchange. Faculty give lectures in their classes; invited guests give talks that often have minimal or compressed time for “dialogue” at the end, and even then are dialogic only in the sense of passive audiences getting to ask a question that is or is not answered. If commencement speeches don’t count, then most speech in academic environments by faculty doesn’t count as “dialogue” or “debate” either.

#3, on the other hand, is a familiar dog-chasing-its-own-tail point that works against (or for) any position in any argument about the limits of free speech. If the students have the right to protest the speakers, the speakers have the right to protest the students, and so on ad infinitum. This is the kind of position that a free-speech purist can love in a way, but it requires ignoring the actual content of speech and ignoring that speech acts have power beyond their content. The commenters I’ve seen invoking the right to complain about commencement speakers typically ignore that such complaints in the last two years have asked that the speaker be disinvited or have threatened to disrupt commencement if he or she is not disinvited. But praise for someone like Bowen attacking the students also overlooks that it’s not exactly the soundest pedagogy to harshly call out 18-to-22-year-olds at a ritual of this kind.

What I really want to do is talk about #4: the view that it is self-evident that all of these speakers are bad people who shouldn’t be honored, and that honoring should be saved for those who are worthy of it.

——

Often, that view goes along with an assertion that this stance is not aimed at suppressing academic freedom or viewpoint diversity, that any of these speakers would be welcome at any other event, just not as honorees at commencement.

There are two things that are really wrong with most (though not all) of the current commencement-speaker critiques, and this is the first of them. Let’s suppose that we can make this distinction between “ordinary” events and “honoring” events. The evidence of the last two years in higher education leaves me in considerable doubt that this distinction is meant as anything more than an ad hoc justification: when you press into what might count as an “ordinary” event, you tend to discover that “ordinary” is only acceptable if Bad People agree to come and meekly submit to a wave of critical indictments.

But let’s suppose the intention to distinguish is genuine. It rests on a difference that people inside of colleges and universities would recognize between types of events, and also between types of invitations and inviters. But this is a distinction that many publics outside of higher education simply don’t see or understand.

Many of my colleagues across higher education seem frustratingly oblivious to the degree of popular as well as political ill-will towards the academy and its prerogatives, or are accustomed to thinking of that ill-will as entirely an ideological product of conservative politics. I don’t think the Obama Administration’s ghastly, destructive current policy initiatives aimed at higher education would be a part of his lame-duck agenda if his team didn’t perceive higher education as a politically advantageous target.

Academic freedom is one of the prerogatives that we tend to treat as a self-evident good, usually against the backdrop of some stirring rhetoric about McCarthyism and censorship and the need for innovative and creative thinking. It’s a harder sell than we guess, for two reasons. First, because most scholarship and teaching are actually rather timid and risk-averse, thanks to the tenure process and the general domesticating force of academic professionalism. Second, because all the other professions have been brought to heel in one way or another. There is literally no other workplace left in America where there is any expectation whatsoever of a right, or even a utility, to speaking one’s mind about both the subject of work and the conditions of employment. Once upon a time, at least some of the professions had similar ideas about the value of professional autonomy and about a privileged relationship between the profession and the public sphere. We’re the last left as far as that goes.

Which means that if we’re going to defend academic freedom, we have to defend it in bluntly absolutist terms. The moment we start putzing around with “Well, not so much at commencement” or “Well, not if the speaker is homophobic” or “Well, not if it’s the IMF, come on, they’re beyond the pale”, we’ve pretty much lost the fight for academic freedom and might as well come out with our hands up. It is not that there aren’t distinctions to be made, but that making fine-grained distinctions and saying, “It’s an academic thing, you wouldn’t understand” is a sure way to appear as hypocrites who don’t understand the value of the right we’re defending, at least if we’re talking to already hostile publics who can be fired at whim for seeming to criticize the choice of pastries at the morning office meeting.

———

Let’s talk about another problem with saying, “Ok, academic freedom, but not for Really Bad People” or “Ok, academic freedom, but not for honoring people”.

This year’s wave of disinvitations started with Rutgers faculty and students objecting to Condoleezza Rice’s appearance as a commencement speaker.

There is legitimately much to object to about the selection of Rice, and it starts not with Rice herself but with the model that many large research universities use for selecting speakers. Rutgers is especially blighted in this respect, as it is and has been controlled by a number of arrogant, distant administrators whose contempt for the faculty is fairly explicit. Commencement speakers shouldn’t be chosen by a small, secretive clique of board members and top administrators, and they shouldn’t cost a dime in speaking fees. When you have to shell out tens of thousands of dollars for a commencement speech, you’re voiding the right to regard it as a special ritual occasion. The transaction in all cases should be: we give you an honorary degree, you show up and say what you will. We’ll fete you and take care of your expenses, but that’s it. And you should be choosing such people in a more consultative fashion. You can’t have the entire community in on it from the beginning: anyone who has been involved in selecting an honorary speaker of any kind knows that there are invitations declined, invitations deferred, invitations lost and found, second choices rediscovered and so on. But you can represent everyone at the outset, if nothing else by genuinely seeking and valuing community suggestions and input.

The problem with the whole debate in the last two years is that commencement critics have treated Rutgers as the typical or normal example of how the honorary sausage gets made. And at least in this year’s count of kerfuffles, it’s not. At Smith and Haverford, I’m pretty sure that the process is closer to Swarthmore’s, which involves an administrator who has been lovingly involved in thinking about speakers for many years and a rotating cast of faculty appointed to a committee, plus solicitations of community suggestions.

I know: I was on the committee one year, and in another year I sent in a suggestion and lo! it was heeded. So when local critics complained about one selection last year, I took it personally even though the candidate wasn’t my selection and I wasn’t on the committee that year. Because you can’t love the process when it gets you crusaders for social justice and poets and philosophers and inventors and then hate it the one time it gets you someone you don’t like.

I haven’t liked everyone we’ve had in the twenty years I’ve been here. Most years, we select alumni, including last year’s controversial selection, Robert Zoellick. Bill Cosby bored and annoyed me, deep in the dotage of his contempt for higher education. But did I give somebody shit about it afterwards? No, because my colleagues were a part of the choice. And the committee eventually asked all the faculty about it, and we shrugged and said “eh, whatever”. And Zoellick was suggested similarly, and one faculty member said, “I don’t like it”, but the rest of us said either “yes” or “eh, whatever”.

So here’s the thing. The other practice that we cherish as faculty that’s under assault nationwide is faculty governance. If your idea, as a faculty member, of faculty governance is that the one person who says, “I don’t like X” should override a committee and a process and an entire faculty, then guess what, we deserve to lose the fight for governance. If your idea of faculty governance is that you demand the outcomes you wanted in the first place after the meeting is done, and think it’s ok to rock the casbah to get there, then we deserve to lose the fight for governance.

When some Smith students and some faculty rise to say, “We don’t want Christine Lagarde to speak because the IMF is imperialist”, they’re effectively saying, “We don’t care who decided that or how”, and thus they’re also embedding an attack on governance along the way. Because surely to disdain the IMF (or the World Bank) so wholly that you will do what you can, what you must, to stop them from being honored guests is also to disdain anyone who might, in any context, ever have thought otherwise. As, for example, in most Departments of Economics, and perhaps here and there in pockets of usage, support, and consultancy in other departments as well.

At which point we deserve to lose the fight for academic freedom as well as governance: some of us are seeking to win not a fight against the subjects of our critique “out there” but a fight against intimate enemies, to win what can’t be easily won in committee meetings and faculty senates and classrooms and curricula and enrollments and persuasive addresses to wider publics.

Posted in Academia, Politics | 16 Comments