It Is Better to Have Wanted and Lost Than Never to Have Wanted At All

Kickstarter is, not at all on purpose, saying some interesting things about this moment in the history of capitalism and about this moment in terms of the availability of disposable income.

About capitalism, I think this: people will give to Kickstarter even more than what they’d pay for the delivery of the product they’re backing if the product were available on a store shelf. Kickstarter is being used to signal desire. What’s striking is that it shows that consumer capitalism is in some sense just as hamstrung as the modernist state in its ability to deliver what people want and will pay for. All our institutions and organizations, of all kinds, are now tangled up in their own complexity, all of them are increasingly built to collect tolls rather than build bridges.

All that money spent on market research, on product development, on vice-presidents of this and that, and what you have, especially in the culture industry, is a giant apparatus that is less accurate than random chance in creating the entertainment or products that consumers can quite clearly describe their desire for. So clearly that the consumers are giving money to people they like who have no intention of or ability to make what the donors say they want. Because, rather like the lottery, at least you can imagine the chance of the thing you want coming into being. Waiting around for Sony or EA or Microsoft or Ubisoft to make it feels like an even bigger longshot.

Which also says something about money and its circulation. The crisis of accumulation isn’t just visible in the irrepressible return of subprime loans, or in the constant quest of financiers to find more ways to make money by speculating on the making of money by people who are making money. It’s even visible in more middle-class precincts. Who wants to invest a bit of spare cash in the long-term deal or the soberly considered opportunity now? It’s like waiting in line to deposit a small check while the bank gets robbed repeatedly.

Posted in Consumerism, Advertising, Commodities, Politics | 5 Comments

Subtraction Stew

If the greatest trick the Devil ever played was convincing people he didn’t exist, the greatest trick of a certain kind of neoliberalism has been to convince people that in all circumstances and times we live in the shadow of austerity. Because once we accept that at all times budgets are parlous, resources are shrinking, and crisis lurks in every dark corner, austerity doesn’t have to be done to us any longer. We do it to ourselves: we create scarcity.

There are many institutions which really experience austerity and experience it as something external, something that's done to them, something which does overt damage to programs and hidden damage to people. Sometimes the damage of austerity is inevitable and necessary. Faculty and students can protest, but if a university suddenly loses a large portion of its revenues, as some have in recent years, cuts will have to follow. If a university slowly loses students or public funds or has poorly-performing investments, that means cuts too, though at a slow enough pace that it's more like adaptation and less like the sudden shock of austerity.

The problem at many institutions, however, is that there isn’t enough information available to most employees, to most students, to most of the public, to know whether the cry of austerity is at all justified, and even less to know whether what’s being cut is what ought to be cut. Any leadership that claims austerity should also accept an extraordinary burden to demonstrate that the call is legitimate.

This is also where the frequent complaint against corporatization in academia has some of its most potently legitimate bite to it. Austerity in higher education is often justified in language remarkably similar to the way that corporations speak about layoffs and economizing.

It’s hard enough even for corporations to clearly understand where they’re actually making money and where they’re losing it, whether that’s a whole office or down to the level of individual workers. There are a host of ways for people who know how to manipulate information and to play office politics to pin the blame for losses on the wrong people or the wrong project. Business history is also full of executives who didn’t understand that a project which seemed to be losing money was really the key to long-term profits. But at least companies mostly have a single metric for deciding where the axe falls: are you making money or are you losing it?

That metric is self-evidently wrong for any non-profit organization, whether it’s a university, a hospital or a community group. All of those groups are provisioning goods that can’t be valued as profit and loss. They still must care about the sustainability of what they’re doing: they cannot spend more money than they have coming in the door. But in an austerity situation, where there is a sudden shortfall in revenues, a university can’t simply ask who isn’t making enough money and get rid of them.

It’s not just that this strategy contradicts the stated objectives of almost all educational institutions. It’s also that it is remarkably difficult to clearly and consistently demonstrate that some departments in a university are less valued or needed by students. It is completely fair to ask whether at some point employing expensive, highly-trained faculty to teach a minuscule handful of students is sustainable. But that can’t be the only measure of sustainability in an institution whose mission is not limited to the production of profit. More importantly, I’m not sure I can think of an institution that has imposed austerity where the accounting of sustainability has been remotely transparent and consistent. The judgment about what students or communities no longer need or want never seems to come from open scrutiny of the entire range of a university’s activities, both teaching and administrative. Nor does it ever seem to employ any real depth of thought or imagination.

Austerity always comes down the way a lion comes down on a herd of antelope: it catches whatever it can catch and then finds a post-facto logic for its kills.


But the hidden injuries of austerity, as Matt Reed put it, afflict even institutions that haven’t had to go through any of that. Neoliberalism has made everyone think that they don’t have enough to go around, that we’re all struggling with scarcity. So even universities and colleges that are the equivalent of the 1% in their budgets and revenues spend right up to the limits of what they can afford, and then rather than feel happiness at their good fortune, complain of what lies just beyond the boundaries of their budgets.

I heard Eldar Shafir on Radio Times a while back. He’s a psychologist who has co-authored a recent study of the impacts of scarcity on human behavior. I was a bit skeptical about some of his arguments about how the experience of scarcity reinforces or causes poverty, but when Shafir changed gears to talk about how otherwise well-off people talk themselves into thinking they’re experiencing scarcity and how a belief in scarcity changes behaviors, I thought he was really on to something.

That kind of scarcity-thinking is a close cousin to desire, defining aspiration always by lack and absence. It happens at Swarthmore, it happens at other wealthy private colleges and universities. Faculty don’t take stock of the facilities they use every day, the small size of their classes, the excellence of their students, the generous support for their research, or the richness of the intellectual environment around them. Once they’re thinking scarcity, they only see the specialists they don’t have in their departments, the requests that they haven’t had answered, the research they didn’t get to do. More importantly, scarcity-thought paves the road to the war of all against all, to zero-sum thinking, that whatever another department or unit or person gets must be a pre-emptive insult against one’s own aspirations and desires. Imagining scarcity atomizes and isolates, it embitters and diminishes. It becomes impossible under the sign of scarcity to take pleasure in growth or enrichment of resources elsewhere within the same institution. Everyone imagines in loneliness that they are on the downward spiral to deprivation.

Where scarcity-thought takes hold, it also leads people to lose any sense of proportional relation between what they have and what others in other situations have. Scarcity convinces itself that it is experiencing austerity, and mimics even its historical imagination. (Because austerity, whether genuinely necessary or a lie, is stuck with telling the story of the Golden Age, is condemned to look backward full of regret and envy.) It’s not that different from what happens when very wealthy people set their sights on someone wealthier still and see the gap between themselves and those others as illuminating the limitations of their own situation.

And I really think scarcity is something that’s been done to us all, not something we have arrived at independently. Neoliberalism tells the story of universal scarcity in part to explain why we can expect so much less of government, so much less of public institutions, so much less even of corporations. Because apparently no matter how much wealthier we get, we always have less and less. Subtraction stew indeed: every bowl you eat makes you hungrier than you were before.

Posted in Academia, Politics, Swarthmore | 1 Comment

King of Pain

As Jackson Lears and many other scholars and observers have noted, many Americans throughout the cultural history of the United States have accepted that the circumstances of life are inevitably determined by luck, that economic life is a matter of good or ill fortune. Which some have suggested explains the current popular aversion to increased taxation on the rich: even the poor think they have a chance of being rich someday, and want to keep all the imaginary money they might get.

I think there’s a less-told but equally important trope in the American imaginary: the loophole. The finding of the trick, the turning of the fine print back on the lawyer who wrote the contract. The victimless crime of cheating the government or the big company out of something it mindlessly and wastefully demanded of the little man. The free money, the thing that your friend fixed up for you. Topsy-turvy, the quick score that makes the smart and the sly rich without distress to anything. The beads-for-Manhattan.

It’s that last I’m thinking about when I think about King Jeremiah Heaton, who became Internet-famous for a few days when he travelled to southern Egypt to plant a homemade flag on a small area of land that he believed was unclaimed by any existing sovereign state and therefore his for the taking. All for the sake of his 7-year-old daughter, who wanted to be a princess.

There’s a lot to say about the story, most of it properly accompanied by much rolling of the eyes. But I do think Heaton is a canary in the coal mine of sorts, a window into a psychic cauldron seething inside the consciousness of a fading empire. Heaton himself invoked history in the coverage: what he did, others had done, he acknowledged, but they did it out of greed or hatred. He did it for love, he says, love of his daughter. But if ever first time tragedy, second time farce applied, this is it.

The basis of Heaton’s claim is the rarely-invoked principle of terra nullius, which as several analyses point out, was one (though not the only) justification invoked by Western colonizers in their land claims after the 17th Century. The hard thing about Heaton is that I can’t tell if he thinks this is a joke or not. He’s aware, in part because the press has queried him, that a flag and terra nullius mean precisely nothing if the claim is unrecognized by other states. I’m not sure he’s aware that Bir Tawil is terra nullius because Egypt and Sudan are still fencing with each other about their postcolonial border, that to claim Bir Tawil cedes a claim to another far more valuable unresolved territory to the east.

But even as a joke, it’s a very telling one, pursued with an earnestness of cost and effort that seems rather elaborate for an age where a silly YouTube video is generally as far as one need go. There are so many other things available in the treasure chest of American popular culture for a princess and her patriarch: the home-as-castle (another legal doctrine, even!), the imaginary kingdom in the backyard or the woods, constructing an elaborate heritage fantasy complete with family crest and lost inheritances in the auld country. Americans make utopian communities and religious movements all the time. They go out into the wilderness that their own internal empire secured and made for them and make retreats and hermitages, towns and communes, pilgrimages and wanderings. What’s wrong with all that?

To say instead, “I shall go to Africa, plant a flag, claim a country, and as long as I’m at it, it will be a very nice kingdom that has some good agricultural development policies”? Well, that is not exactly a random idea, though I don’t get the sense that Heaton knows exactly who and what the other members of the club he’s trying to join actually are. But once upon a time this was the kind of fantasy that got people killed and maimed, and not just by aspiring Kings and their Princesses. For every Leopold of Belgium, there was a Leon Rom whose principality was small and short-lived. Some of the nineteenth-century and twentieth-century men (and a smaller handful of women) who flocked to Africa looking for land they could imagine to be empty then demanded that new colonial states do just that: empty the land of human beings and return them as obedient laborers. Most of the new settlers were delusional in some way or another, but they wandered through a world where their dreams could spur nightmares.

That’s not going to be Heaton, but that’s not by any great understanding on his part. It’s just that in dreaming his little dream of a kingdom for his princess, he’s managed in a little, inexpensive way to show what it has otherwise taken the United States billions of dollars and tens of thousands of lives to demonstrate: that we are slipping into the fever-dream stage of superpowerdom, in a Norma Desmond haze so deep and foggy that we don’t even know any more what we don’t know. All we think is that somehow out there, there must be a trick that gets it all back. A law, a loophole, some fine print. Some Manhattan that we can have for a few beads and a couple of pamphlets on using irrigation in agriculture.

Posted in Africa, Miscellany, Politics | 9 Comments

Fighting Words

Days pass, and issues go by, and increasingly by the time I’ve thought something through for myself, the online conversation, if that’s the right word for it, has moved on.

One exchange that keeps sticking with me is about the MLA Task Force on Doctoral Study in Modern Language and Literature’s recent report and a number of strong critical responses made to the report.

One of the major themes of the criticisms involves the labor market in academia generally and in the MLA’s disciplines specifically. Among other things, this particular point seems to have inspired some of the critics to run for the MLA executive with the aim of shaking up the organization and galvanizing its opposition to the casualization of academic labor. We need all the opposition we can get on that score, though I suspect that should the dissidents win, they are going to discover that the most militant MLA imaginable is nevertheless not in a position to make a strong impact in that overall struggle.

I’m more concerned with the response of a group of humanities scholars published at Inside Higher Education. To the extent that this response addresses casualization and the academic labor market, I think it unhelpfully mingles that issue with a quite different argument about disciplinarity and the place of research within the humanities. Perhaps this mingling reflects some of the contradictions of adjunct activism itself, which I think has recently moved from demanding that academic institutions convert many existing adjunct positions into traditional tenure-track jobs within existing disciplines to a more comprehensive skepticism or even outright rejection of academic institutions as a whole, including scholarly hierarchies, the often-stifling mores and manners that attend on graduate school professionalization, the conventional boundaries and structures of disciplinarity, and so on. I worry about baby-and-bathwater as far as that goes, but then again, this was where my own critique of graduate school and academic culture settled a long time ago, back when I first started blogging.

But on this point, the activist adjuncts who are focused centrally on abysmal conditions of labor and poor compensation in many academic institutions are right to simply ignore much of that heavily freighted terrain, since what really matters is the creation of well-compensated, fairly structured jobs for highly trained, highly capable young academics. Beyond ensuring that those jobs match the needs of students and institutions with the actually existing training that those candidates have received, it doesn’t really matter whether those jobs exist in “traditional” disciplines or in some other administrative and intellectual infrastructure entirely. For that reason, I think a lot of the activists who are focused substantially on labor conditions should be at the least indifferent and more likely welcoming to the Task Force’s interest in shaking the tree a little to see what other kinds of possibilities for good jobs that are a long-term part of academia’s future might look like. Maybe the good jobs of the academic future will involve different kinds of knowledge production than in the past. Or involve more teaching, less scholarship. If those yet-to-exist positions are good jobs in terms of compensation and labor conditions, then it would be a bad move to insist instead that all adjuncts can really want is the positions that once were, just as they used to be.

They should also not welcome the degree to which the IHE critics conflate the critique of casualization with the defense of what they describe as the core or essential character of disciplinary scholarship.

The critics of the Task Force report say that the report misses an opportunity to “defend the value of the scholarly practices, individual and collective, of its members”. The critics are not, they say, opposed in principle to “innovation, expansion, diversification and transformation”, but that these words are “buzzwords” that “devalue academic labor” and marginalize humanities expertise.

Flexibility, adaptability, evolution are later said to be words necessarily “borrowed” from business administration (here linking to Jill Lepore’s excellent critique of Clayton Christensen).

For scholars concerned with the protection of humanistic expertise, this does not seem to me to be a particularly adroit reading of a careful 40-page document and its particular uses of words like innovation, flexibility, or evolution. What gets discounted in this response is the possibility that there are any scholars inside of the humanities, inside of the MLA’s membership, who might use such words with authentic intent, for whom such words might be expressive of their own aspirations for expert practice and scholarly work. That there might be intellectual arguments (and perhaps even an intellectual politics) for new modes of collaboration, new forms of expression and dissemination, new methods for working with texts and textuality, new structures for curricula.

If these critics are not “opposed in principle” to innovation or flexibility, it would be hard to find where there is space in their view for legitimate arguments about changes in either the content or organization of scholarly work in the humanities. They assert baldly as common sense propositions that are anything but: for example, that interdisciplinary scholarship requires mastering multiple disciplines (and hence, that interdisciplinary scholarship should remain off-limits to graduate students, who do not have the time for such a thing).

If we’re going to talk about words and their associations, perhaps it’s worth some attention to the word “capitulation”. Flexibility and adaptability, well, they’re really rather adaptable. They mean different things in context. Capitulation, on the other hand, is a pretty rigid sort of word. It means surrendering in a conflict or a war. If you see yourself as party to a conflict and you do not believe that your allies or compatriots should surrender, then if they try to, labelling their actions as capitulation is a short hop away from labelling the people capitulating as traitors.

If I were going to defend traditional disciplinarity, one of the things I’d say on its behalf is that it is a bit like home in the sense of “the place where, when you have to go there, they have to take you in”. And I’d say that in that kind of place, using words that dance around the edge of accusing people of treason, of selling-out, is a lousy way to call for properly valuing the disciplinary cultures of the humanities as they are, have been and might yet be.

The critics of the MLA Task Force say that the Task Force and all faculty need to engage in public advocacy on behalf of the humanities. But as is often the case with humanists, it’s all tell and no show. It’s not at all clear to me what you do as an advocate for the humanities if and when you’re up against the various forms of public hostility or skepticism that the Task Force’s report describes very well, if you are prohibited from acknowledging the content of that skepticism or prohibited from attempting to persuasively engage it on the grounds that this kind of engagement is “capitulation”. The critics suggest instead “speaking about these issues in classes” (which links to a good essay on how to be allies to adjunct faculty). In fact, step by step, that’s all the critics have to offer: strong advocacy on labor practices and casualization. Which is all a good idea, but doesn’t cover at all the kinds of particular pressures being faced by the humanities, some of which aren’t confined to or expressed purely around adjunctification, even though those pressures are leading to the net elimination of jobs (of any kind) in many institutions. Indeed, even in the narrower domain of labor activism, it’s not at all clear to me that rallying against “innovation” or “adaptability” is a particularly adroit strategic move for clawing back tenure lines in humanities departments, nor is it clear to me that adjunct activists should be grateful for this line of critical attack on the MLA Task Force’s analysis.

Public advocacy means more than just the kind of institutional in-fighting that the tenurati find comfortable and familiar. Undercutting a dean or scolding a colleague who has had the audacity to fiddle around with some new-fangled innovative adaptability thing is a long way away from winning battles with state legislators, anxious families, pragmatically career-minded students, federal bureaucrats, mainstream pundits, Silicon Valley executives or any other constituency of note in this struggle. If the critics of the MLA Task Force think that you can just choose the publics–or the battlegrounds–involved in determining the future of the humanities, then that would be a sign that they could maybe stand to take another look at words like flexible and adaptable. It’s not hard to win a battle if you always pick the fights you know you can win, whether or not they consequentially affect the outcomes of the larger struggles around you.

Posted in Academia, Digital Humanities, Generalist's Work | 4 Comments

Of Shoes and Ships and Sealing Wax–and Commencement Speakers

I found myself really annoyed in the last week when I came across the many cases of faculty approvingly endorsing the fate of commencement speakers like Robert Birgeneau and Christine Lagarde, and scolding William Bowen for scolding students for their scolding of Birgeneau.

The approving remarks of faculty have made some of the following points:

1) commencement is an empty ritual anyway full of platitudes from various elites and therefore who cares, it’s not an important venue for free speech or ideas in the first place

2) a commencement speech isn’t a debate or exchange of ideas, so Bowen’s full of shit when he says that students were trying to shut down a debate or exchange of ideas

3) students (and faculty) were just exercising their own academic freedom by criticizing or rejecting commencement speakers, you can’t tell them to be silent and say you’re championing free speech

4) all of the people criticizing these speakers were perfectly right to criticize them, they aren’t people that we should be honoring anyway


What I want to do mostly is talk about what I think no one is talking about in this discussion, but as a prologue, a response to some of these critiques. The first thing to point out is that “we shouldn’t honor these people” and “commencement is empty and platitudinous” don’t add up very well: you can’t take the event seriously enough to worry about who is and is not honor-worthy and then dismiss it as pointless ritual.

#2 is really a bad point, because it applies to everything that isn’t immediately structured as a dialogue or an exchange. Faculty give lectures in their classes; invited guests give talks that often have minimal or compressed times for “dialogue” at the end, and even then, are dialogic only in the sense of passive audiences getting to ask a question that is or is not answered. If commencement speeches don’t count, then most speech in academic environments by faculty doesn’t count as “dialogue” or “debate” either.

Three, on the other hand, is a familiar dog-chasing its own tail point that works against (or for) any position in any argument about the limits of free speech. If the students have the right to protest the speakers, the speakers have the right to protest the students, and so on ad infinitum. This is the kind of position that a free-speech purist can love in a way, but it requires ignoring the actual content of speech and ignoring that speech acts have power beyond their content. The commenters that I’ve seen invoking the right to complain against commencement speakers typically ignore that such complaints in the last two years have asked that the speaker be disinvited or have threatened to disrupt commencement if he or she is not disinvited. But praise for someone like Bowen attacking the students also overlooks that it’s not exactly the soundest pedagogy to harshly call out 18-22 year olds at a ritual of this kind.

What I really want to do is talk about #4: the view that it is self-evident that all of these speakers are bad people who shouldn’t be honored, and that honoring should be saved for those who are worthy of it.


Often, that view goes along with an assertion that this stance is not aimed at suppressing academic freedom or viewpoint diversity, that any of these speakers would be welcome at any other event, just not as honorees at commencement.

There are two things that are really wrong with most (though not all) of the current commencement-speaker critiques, and this is the first of them. Let’s suppose that we can make this distinction, between “ordinary” events and “honoring” events. The evidence of the last two years in higher education leaves me in considerable doubt that this distinction is meant as anything more than an ad hoc justification, that when you press into what might count as an “ordinary” event, you tend to discover that “ordinary” is only acceptable if Bad People agree to come and meekly submit to a wave of critical indictments.

But let’s suppose the intention to distinguish is genuine. It rests on a difference that people inside of colleges and universities would recognize between types of events, and also between types of invitations and inviters. But this is a distinction that many publics outside of higher education simply don’t see or understand.

Many of my colleagues across higher education seem frustratingly oblivious to the degree of popular as well as political ill-will towards the academy and its prerogatives, or are accustomed to thinking of that ill-will as entirely an ideological product of conservative politics. I don’t think the Obama Administration’s ghastly, destructive current policy initiatives aimed at higher education would be a part of his lame-duck agenda if his team didn’t perceive higher education as a politically advantageous target.

Academic freedom is one of the prerogatives that we tend to treat as a self-evident good, usually against the backdrop of some stirring rhetoric about McCarthyism and censorship and the need for innovative and creative thinking. It’s a harder sell than we guess for two reasons. First, because most scholarship and teaching is actually rather timid and risk-averse due to tenure and the general domesticating force of academic professionalism. Second, because all the other professions have been brought to heel in one way or another. There is literally no other workplace left in America where there is any expectation whatsoever of a right or even a utility to speaking one’s mind about both the subject of work and about the conditions of employment. Once upon a time, at least some of the professions had some similar ideas about the value of professional autonomy and about a privileged relationship between the profession and the public sphere. We’re the last left as far as that goes.

Which means that if we’re going to defend academic freedom, we have to defend it in bluntly absolutist terms. The moment we start putzing around with “Well, not so much at commencement” or “Well, not if the speaker is homophobic” or “Well, not if it’s the IMF, come on, they’re beyond the pale”, we’ve pretty much lost the fight for academic freedom and might as well come out with our hands up. It is not that there aren’t distinctions to be made, but that making fine-grained distinctions and saying, “It’s an academic thing, you wouldn’t understand” is a sure way to appear as hypocrites who don’t understand the value of the right they’re defending if we’re talking to already hostile publics who can be fired at whim for seeming to criticize the choice of pastries at the morning office meeting.


Let’s talk about another problem with saying, “Ok, academic freedom, but not for Really Bad People” or “Ok, academic freedom, but not for honoring people”.

This year’s wave of disinvitations started with the Rutgers faculty and students objecting to Condoleezza Rice’s appearance as a commencement speaker.

There is legitimately much to object to about the selection of Rice, and it starts not with Rice herself but with the model that many large research universities use for selecting speakers. Rutgers is especially blighted in this respect, as it is and has been controlled by a number of arrogant, distant administrators whose contempt for the faculty is fairly explicit. Commencement speakers shouldn’t be chosen by a small, secretive clique of board members and top administrators and they shouldn’t cost a dime to bring. When you have to shell out tens of thousands of dollars for a commencement speech, you’re voiding the right to regard it as a special ritual occasion. The transaction in all cases should be: we give you an honorary degree, you show up and say what you will. We’ll fete you and take care of your expenses, but that’s it. And you should be choosing such people in a more consultative fashion. You can’t have the entire community in on it from the beginning–anyone who has been involved in selecting an honorary speaker of any kind knows that there are invitations declined, invitations deferred, invitations lost and found, second choices rediscovered and so on. But you can represent everyone at the outset, if nothing else by genuinely seeking and valuing community suggestions and input.

The problem with the whole debate in the last two years is that commencement critics have typically seen Rutgers as the typical or normal example of how the honorary sausage gets made. And at least in this year’s count of kerfuffles, it’s not. At Smith and Haverford, I’m pretty sure that the process is closer to Swarthmore’s process, which involves an administrator who has been lovingly involved in thinking about speakers for many years and a rotating cast of faculty appointed to a committee, plus solicitations of community suggestions.

I know: I was on the committee one year and in another year, I sent in a suggestion and lo! my suggestion was heeded. So when local critics complained about one selection last year, I took it personally even if the candidate wasn’t my selection and I wasn’t on the committee that year. Because you can’t love the process when it gets you crusaders for social justice and poets and philosophers and inventors and then hate it the one time it gets you someone you don’t like.

I haven’t liked everyone we’ve had in the twenty years I’ve been here. Most years, we select alumni, including last year’s controversial selection, Robert Zoellick. Bill Cosby, deep in the dotage of his contempt for higher education, bored and annoyed me. But did I give somebody shit about it afterwards? No, because my colleagues were a part of the choice. And the committee eventually asked all the faculty about it and we shrugged and said ‘eh, whatever’. And Zoellick was suggested similarly and one faculty member said, “I don’t like it”, but the rest of us said either “Yes” or “Eh, whatever”.

So here’s the thing. The other practice that we cherish as faculty that’s under assault nationwide is faculty governance. If your idea, as a faculty member, of faculty governance is that the one person who says, “I don’t like X” should override a committee and a process and an entire faculty, then guess what, we deserve to lose the fight for governance. If your idea of faculty governance is that you demand the outcomes you wanted in the first place after the meeting is done, and think it’s ok to rock the casbah to get there, then we deserve to lose the fight for governance.

When some Smith students and some faculty rise to say, “We don’t want Christine Lagarde to speak because the IMF is imperialist”, they’re effectively saying, “We don’t care who decided that or how”, and thus they’re also embedding an attack on governance along the way. Because surely to disdain the IMF (or the World Bank) so wholly that you will do what you can, what you must, to stop them from being honored guests is to also disdain anyone who might, in any context, ever have thought otherwise. As, for example, in most Departments of Economics, perhaps here and there in pockets of usage and support and consultancies in other departments as well.

At which point we deserve to lose the fight for academic freedom as well as governance: some of us are seeking to win not a fight against the subjects of their critique “out there” but against intimate enemies, to win what can’t be easily won in committee meetings and faculty senates and classrooms and curricula and enrollments and persuasive addresses to wider publics.

Posted in Academia, Politics | 16 Comments

Sovereignty Is Bunk

During the run-up to the Iraq War, I remember a few hot conversations among opponents of the war about whether there was such a thing as a “sovereignty left”, i.e., a tendency towards seeing the achievement and maintenance of inviolable national sovereignty as the first commitment that activists should defend internationally.

I thought and still think that this does describe the actual politics of at least some European and American progressives, though I’d agree that it’s hard to find a reference where this view is laid out as a fully-fleshed philosophical or theoretical argument. It’s more a question of what kinds of reflexive responses to international crises you tend to see within spaces that are broadly or loosely encoded as “left” of some kind or another. These responses are not about the defense of sovereignty in general, but are instead a sort of echo or continuation of anti-colonialism. They defend the sovereignty of postcolonies against imperial power, defend the non-West against the hegemony of the West. Intrusions on the sovereign borders and territory of the EU and US (such as undocumented immigration) tend to code in the other direction: the left often understands those problems as issues of economic and social justice within nations, but the right often sees them as being about the maintenance of legal authority over national sovereignty.

The problem I have with this kind of response, however much it is or is not a real political faction (please, let’s not do the ‘no true Scotsman’ schtick), is two-fold. The first is that even if we understand this as a tendentious re-labelling of anti-colonialism, it mostly serves to recall the poor match between the content of anti-colonial protest (all the way back to the 1940s) and the demands for national sovereignty. The second is that Westphalian sovereignty is one of the biggest collective lies the human race has ever told itself, and continues to be the primary rhetorical justification for dangerous and mendacious arguments from pundits and bureaucrats of all ideological stripes. (And so is not just a problem of some kind of “left”.)

So, to the first point. The actual content of anti-colonial politics within Western states from the 1940s onward has been almost uniformly liberal, even when the people engaged in that politics self-identify as radicals of some kind or another, whether socialist or otherwise. Anti-colonial activists usually proclaim that the citizens of liberal democracies bear responsibility for what their countries do in the world, and therefore if their own countries act as repressive imperial powers, citizens must stop those actions and ameliorate the damage they’ve caused.

They also argue that what makes imperialism peculiarly or especially repressive is its illiberalism. The complaint is that imperial rulers are especially prone to violating and ignoring the supposedly universal human rights of colonial subjects. Colonial subjects are often killed, wounded, imprisoned, and tortured even when an imperial ruler or occupier professes to be serving as an emancipator or protector. Settler colonialism tends towards exterminationist or genocidal logics in particular. Modern colonial subjects from the early 20th Century all the way up to the present have been excluded from work within their own societies, denied the right to speak or move freely, have had any and all political rights stripped arbitrarily from them at the whim of the imperial ruler, have had their property and land stripped from them arbitrarily and been treated as profoundly unequal in social and economic life.

The problem is that the first instrumental goal of anti-colonial politics has usually been the expulsion or retreat of the imperial hegemon and the creation (or restoration) of national sovereignty. That’s where the mismatch comes in, because an illiberal national sovereign is not really that different from an illiberal imperial occupier. Once upon a time, in the 1960s and 1970s, it was perhaps reasonable to hope otherwise, but that time is long past. We now know that there is nothing in Westphalian sovereignty per se that guarantees or even makes more probable that a postcolonial national state will be systematically more respectful of basic human rights and dignity.

The contemporary situation where this disjuncture is most acutely visible is in activism directed at the Israel-Palestine conflict. It’s fair to describe Israel as an imperial occupier of Palestinian territory. It’s fair to characterize numerous injustices and violations of human rights suffered by Palestinians (and even Israelis of Arab descent) as the consequences of colonial control of Palestinian territory by Israel. But when we run down the bill of indictments, it’s not at all clear that any particular Palestinian nation with real and meaningful sovereign independence from Israel will be particularly superior in correcting those injustices and violations. The best we might say is that sovereignty would remove a condition that is certain to prevent justice, that there can never be fairness, justice or equality while one society rules another. The probability that sovereignty in and of itself will change much else besides removing one layer or cause of oppression is, judging from post-1945 world history, relatively low.

It might be better for progressives to be more indifferent to whether a situation is imperial. What makes an imperial situation an inviting one from the perspective of an activist who would like to work for justice and peace is not that there is something especially repugnant about colonialism besides its illiberal outcomes, it is that an imperial situation is one where an activist in the West may have a greater degree of influence over that situation. I could participate in a campaign against the government of Equatorial Guinea and hope perhaps that some limited kinds of international pressure could have a useful effect: the embarrassment, perhaps, that might stem from working to have UNESCO refuse to give a prize in the name of its ruling family. But that is very minor, and easily ignored. If it’s my government torturing and killing and discriminating directly? There are all sorts of avenues for meaningful pressure, so many in fact that progressives can have vicious intramural fights amongst themselves about what the most effective tactics might be.

But if we were in some sense less hung up on whether imperialism per se is the reason for mobilization, and clearer about a commitment against any sort of similar injustice, then first, that might lead to some sort of pressure brought within colonial situations, less writing of blank checks to any potential future sovereigns. It is not that we should ever say, as in Palestine, that approval of national independence has to wait until a properly virtuous sort of nationalist steps forward. That’s setting an impossible standard. It’s also doubling down on imperialism by imagining that a bunch of people who don’t live there have the right to decide what is an acceptable regime and what isn’t. They don’t and they shouldn’t. Who rules and how they rule is not anybody’s choice to make but that of the people of a given nation, region or community.

But the other point would be: people are involved in this struggle or any other because of what’s unjust in the situation. If that’s why imperialism qua imperialism is bad–its propensity to produce injustice–then those commitments should carry over after a change in sovereignty, and for the most part, they don’t. Activists who were ardently mobilized by the misrule of the Rhodesian Front were at best passively unhappy with the misrule of ZANU-PF in independent Zimbabwe.

If we were clear that we look first for situations where our mobilization can lead to meaningful pressure against injustice, that might lump together imperial situations with other situations that are not imperial but where some other kind of special tactical leverage exists: particular kinds of international interdependency, particular sorts of entangled institutional relationships. And maybe we could along the way get less posturing about whether certain kinds of tactical commitments are radical and others are wishy-washy liberal. Because in the end most of what “radicals” and “liberals” offer as evidence of actionable injustice in such situations is the same.


The other reason to care less about sovereignty is simply that it’s a nonsense fiction that describes almost nothing about the world we have lived in for the last century. This isn’t a problem with progressives, it’s a problem at almost every end of the contemporary ideological spectrum.

It’s very hard to take anybody seriously who gets excited about which nation has control over the Crimean peninsula. There is no ethical or political wisdom to be found in earnestly reciting the history of which state has owned the Crimea at which times. That answers absolutely nothing about what would be fair or right for the people who live there and answers absolutely nothing about what is true or untrue about Crimea. The two states contending over it right now are both recent creations within their present boundaries, both descended from an empire that rearranged (and extinguished) boundaries and peoples frequently during its existence, an empire itself descended from another that did the same.

The “international law” now being solemnly delineated and defended by American politicians and their more earnest or jingoistic pundit-class hangers-on has been thoroughly rubbished by those same American politicians over the last twelve years. It’s hard to take anyone seriously who gets terribly, terribly concerned for the territorial inviolability of Crimea but who supported the invasion of Iraq, who legitimated black site prisons and extraordinary rendition, or who feels that using the ridiculous loophole status of Guantanamo to hold prisoners is basically acceptable.

Borders in much of the world are sites of extraction and petty authoritarianism for the ordinary travellers, laborers and merchants who must traverse them. For the rich and powerful, borders are protective shrouds for shadow economies that should be under national control and regulation but are not. Many physical and natural phenomena that shape all our lives have little to do with borders and yet may be very strongly affected by human institutions, human agency.

The answer that some progressives give to all this is to empower the nation-state further and to empower interstate institutions to enforce and expand the reach of the power of sovereign states. That if only all states were genuinely Westphalian, without porous borders, without the subversions of capital or international organizations, life would be better. I doubt it, partly because of where I started this essay: there is nothing that guarantees that a sovereign nation will be a particularly just or free society.

But I also doubt it because that will always require being overly solemn and certain about essentially arbitrary and mutable questions: where this or that chunk of land belongs, whether this or that people share enough of a history or language or a sense of everyday culture to belong within the same sovereignty, whether these communities or those communities qualify as being properly indigenous, and so on.

It’s possible to make a Churchillian statement that Westphalian sovereignty is better than all the other ways we could organize power over space and territory. But I don’t think that’s true at all even in the world we live in and the world we have lived in for several centuries. Certain kinds of porousness, certain kinds of flexibility and contingency in the architecture of sovereignty, certain kinds of alternative ways of owning, controlling, or inhabiting territory are not only part of the real world as it stands but in some circumstances preferable to a classic “strong” sovereignty. As things stand, I find that the passion many invest in questions like “Who properly owns Crimea?” or even “How can a state of Palestine come into being?” is a distraction from what we should really care about, which is “Are people living in Crimea living in the light of justice, fairness, and freedom?” or “What is unfree, unfair or unjust about the life of Palestinians in the Occupied Territories and why is it so?” In some cases those questions are not just distractions but active contributors to more misery and injustice–say, when disputes about what’s right for Crimea enable the contemplation of war, or allow those who habitually violate sovereignty when they find it convenient to whitewash their own practices.

If those questions could be asked without the presumption that they must be resolved within the usual calculus of Westphalian sovereignty, we might have a better chance at finding generative and creative answers.

Posted in Politics | 12 Comments

Canary in the iTunes

A thought about the media industry’s antipiracy efforts, seen in retrospect back to the beginnings of the digital age. In the NYT today, the question comes up as to whether consumers would pay to watch more movies in digital players if more movies were priced reasonably and the restrictions on viewing were permissive. This is the usual spectrum of debate: between media industry watchdogs who think this is about the culture of theft and those who think it’s about pseudo-monopolies defending lazy, entitled revenue models in which they sold a copy of their product four or five times to the same consumers in different formats and circumstances.

Ruth Vitale, the anti-piracy executive covered in the article, suggested that the falling production of movies is a sign of the damage that piracy (in the “culture of theft” model) is inflicting on the industry.

What if the entire debate is a misfire? What if the 1990s were a final apex decade of a leisure-oriented, consumer-driven society? The last time a middle class existed and was working to earn more time at home, more time to itself, more time to consume culture? The last time there was enough money (fueled by debt) to support the mass consumption of leisure? What if piracy is the canary in the coal mine for the growth of income inequality and the collapse of white-collar labor? What if no one has the time to really consume more than a small fraction of even the diminished current output of the media industries, because they’re working longer hours just to keep from getting fired or even just to make ends barely meet? What if no one has the money, because of flat salaries and debt loads?

At that point, debating piracy per se is sort of like getting caught up in managing the ecological future of polar bears without noticing that you’re dealing with a very small part of a very big story. More importantly, it’s not just victims of income inequality that need to defend themselves against the new gilded age: if mass-consumer corporations want to have a future, they had better throw in with a broad “middling class” while there’s still time.

Posted in Popular Culture | 14 Comments

Read the Comments

I keep coming back, obsessively and neurotically, to the question of what a liberal arts education is good for.

I do think it helps with the skills that pay the bills. I do think it can make you a better citizen. I do think it can help you lay the foundation for the examined life. It doesn’t always do that, and there are many other ways to get skills, learn to be a better participant in your social and political worlds, be a critical thinker.

A modest example of the possibilities occurred to me today. The concept of social epistemology is becoming more important in philosophy as it is applied both analytically and technically to various kinds of digitally-mediated crowdsourcing. One strain of thought about social epistemology might suggest that philosophy could be as much an ethnographic discipline as an interpretative one, that it could look for how social groups generate epistemological or philosophical frameworks out of experience. There are plenty of other ways to take an interest in how people think in their social practices and everyday lives about ethics, knowledge, and so on, in any event. The question in part is, “What could a liberal arts education–or formal scholarship–add to such everyday, lived thinking that it doesn’t already have?”

I’m going to do something a bit unusual. Rather than the usual “don’t read the comments!” I’m going to suggest that at least sometimes comments on Internet sites offer some insights into how people in general think.

Take a look at this Gawker thread about a tailgater and the “karmic justice” meted out to him for following the driver ahead of him too closely and aggressively. (He eventually passes to the right at high speed, gives the driver the finger multiple times, merges back left on a lightly wet road and loses control of his truck, crashing into the median.)

The main story accepts the “karmic justice” narrative. But in the comments, three different interpretations eventually emerge.

The first validates the main story: the tailgater was unambiguously in the wrong and it is right to feel some vindication at his misfortune.

The second holds that the tailgater was acting poorly but that the driver making the videotape was acting poorly as well, for several reasons: first, that the driver being tailgated was videotaping (and was therefore indulging in dangerous behavior too), and second, that the driver being tailgated (the tailgatee?) should just have pulled to the right and let the faster driver go ahead.

The third is unabashedly on the side of the tailgater. These commenters hold that tailgating is a practical, even necessary, response to drivers who insist on blocking the left lane of any roadway at a speed slower than the speed that the tailgater wishes to go. They support both the tailgating and the obscene gesture and regret that the tailgater had an accident.

There’s a minor fourth faction that is primarily irritated at yet another person videotaping with a smartphone held in portrait mode. Protip: they at least are completely right.

What’s interesting in the comments is that each group has strategies for replying to the other two. The anti-tailgaters point out that the roadway in question is not a major highway, that the driver being tailgated was going the maximum speed limit, that the driver says she did not look at the camera while holding it, that she says she was going to be turning left very soon and that traffic to the right was fairly heavy. The blame-on-all-sides faction finds that the videotaping driver has a history of being aggrieved about a lot of things, that there seemed to be plenty of space to the right, and that it’s unwise (especially in Florida) to tangle with a person demonstrating road rage. The pro-tailgaters…well, they don’t seem to have much other than a view that tailgating is necessary and justified.

It’s easy to just say, “A pox on all their houses” or to simply join in the debate on one side or another. I guess what I’m struck by is that when you pull back a little, each of these approaches is informed, whether the people are consciously aware of it or not, by some potentially consistent or coherent views of what’s right and wrong, wise and unwise, fair and unfair.

What I wonder sometimes is whether we could construct a coherent underlying credo or statement about our views, if we were all asked to step back from the views we can express so hotly in comments threads in social media or other contexts. So much of our discourse, online and offline, is reactive or dialectical. That’s actually good in the sense that real cases or experiences are a better place to start, perhaps, than arid thought-experiment scenarios about pulling trolley levers to save or not save lives. But maybe this is where some sort of liberal-arts experience could help. It could help us to go from a reactive reading to a more contemplative description of why each of us thinks what we think.

Suppose I’m against the tailgater: why? Because I object morally to tailgating period–its aggression, its danger? Is it ok to be aggressive in return? (The driver in the video apparently has specified that she did not brake-check the tailgater.) How confident am I that tailgating is the result of road rage? How much do I actually know about another driver, and why should I be confident about my strong moral readings of someone whom I only know in a single dimension of their behavior? If I was going really slowly, would tailgating me be justified?

Suppose I’m against both of them: why? Can I trust that someone can in fact be a good driver while holding up a smartphone and not looking at it? Why do I trust or not trust in that proposition? Why not, as this approach suggests, just yield to someone determined to be antisocial and get out of their way? Is being righteous in opposing a tailgater just a kind of self-indulgent or egotistical response? Or an aggression of another kind? What does that imply about other cases?

Suppose I’m certain that if I want to go a particular speed, it’s right to allow me to do so until or unless I am charged with the crime of speeding or unless I have an accident as a result. What else does that imply? Do I mean it in all cases or is driving a special case? Am I right that I’m a better driver than most others? What does that entitle me to if so?

I suspect that in a lot of cases, driving (like other everyday practices) is held to be a “special case”–that to try to work back to some bigger or more comprehensive view of the world isn’t going to work for many people in the Gawker thread. But that too is interesting: if much of how we read the “manners” of everyday life is ad hoc, that’s not necessarily bad, just significant.

Posted in Academia, Defining "Liberal Arts", Miscellany | Comments Off


High Anxiety

In modernity, dread only takes a holiday once in a while. Right now Mr. Dread is hard at work all around the world, and he’s not just sticking to the big geopolitical dramas or some single-issue fear. He’s kicking back and making himself comfortable everywhere where uncertainty holds sway, which is to say everywhere: homes, workplaces, boardrooms, the shop, the street, the wilderness.

So asking “Why so anxious?” of anyone is an almost pointless question. Who isn’t anxious? All the tigers in our souls are prowling the bars of whatever cage we’re in. But I’ll go ahead and ask.

What I’ll ask about is this: what stirs many tenured faculty in humanities departments at wealthy private colleges and universities to so often pick and fret and prod at almost any perturbation of their worlds of practice–their departments, their disciplines, their publications, their colleges and universities? Why do so many humanistic scholars rise to almost any bait, whether it is a big awful dangling worm on a barbed hook or some bit of accidental fluff blown by the wind into their pond?

The crisis in the humanities, we’re often assured, doesn’t exist. Enrollments are steady, the business model’s sound, the intellectual wares are good.

The assurance is, in many ways, completely correct. The trends are not so dire and many of the criticisms are old and ritualized. Parents have been making fun of the choice to major in philosophy for five decades. Or longer, if you’ve read your Aristophanes.

And yet humanists are in fact anxious. Judging from a number of experiences I’ve had in the last year at Swarthmore and elsewhere, there are more and more tense feelings coming from more directions and more individuals in reaction to a wider and wider range of stimuli.

Just as one example, I just got back from a workshop with other faculty from small private colleges who have been working with various kinds of interdisciplinary centers and institutes and almost all of them reported that they’re constantly peppered by indirect or insinuated complaints from colleagues. We even heard a bit of it within the workshop: at one point, an audience member at the keynote said to the speaker, “Whatever it is you’ve just shown us, it’s not critique, and if it’s not critique, it’s not humanities”. When faculty are willing to openly gatekeep in a public or semi-public conversation, that’s a sign that shit is getting real.

I’d call it defensiveness, but that word is enough to make people legitimately defensive: it frames reaction as overreaction. Worried faculty are not overreacting. Maybe the humanities aren’t in crisis, but the academy as professors have known it in their working lives is. It is in its forms of labor, in its structures of governance, in its political capital, in its finances. That’s what makes the tension within the ranks of the few remaining tenured faculty who work at financially secure private institutions so interesting (because otherwise they are so atypical of what now constitutes academic work). Why should anxiety about the future afflict even those who have far less reason for anxiety?

The alarm, I think, is about the possibility (not yet the accomplishment) of transformations across a broad spectrum of everyday academic habitus: in the purposes and character of scholarship, in the modes of its circulation and interpretation, in the methods and affect of inquiry, in the incentives and commands that institutions deploy, in the goals and practice of teaching. These fears are coupled to the unbearable spectacle of many real changes that have already taken place in the political economy of higher education, many of them unambiguously destructive, in the terms and forms of labor and in practices of management. A tenured humanist at a well-resourced private university or college might feel secure in their own working future, but that is the security (and guilt) of a survivor, a security situated in a world where it feels increasingly irresponsible to encourage young people to pursue academic careers as either vocation or job.

Change comes to every generation in academia. Rarely has any generation of academic intellectuals ceded power and authority gently or kindly to the next wave of upstarts. But most transitions are a simple matter of disciplinary succession: old-style political and intellectual history to social history to the “cultural turn” and so on. Whatever is at stake now seems beyond, above and outside those kinds of stately progressions.

When academia might or could change fundamentally (as it did at the end of the 19th Century, as it did in the 1920s, as it did after the Second World War), that tends to harshly expose the many invented traditions that usually gently sediment themselves into the working lives and psyches of professors. What we sometimes defend or describe as policies and practices of long antiquity and ironclad necessity are suddenly exposed as relatively recent and transitory. We stop being able to pretend that sacred artifacts of disciplinary craft like the monograph or peer review are older than a generation or two in their commonality. We draw lines of descent between ourselves and those intellectuals and professors we imagine to be our ancestors, but it only takes a few generations before we’re desperately appropriating and domesticating people who lived and worked in situations radically unlike our own. We try to whistle our way across jagged breaks and disjunctures: do not mind the gaps! Because if past intellectuals carried on writing, thinking and interpreting without tenured and departmentalized disciplinarity to support them, then arguably future intellectuals could (and will!) too.

American professors have figuratively leapt upon melancholic bonfires in gloomy protest all through the 20th Century over such retrospectively small perturbations as the spread of electives, the fall of Western Civilization (courses), the admission of women into formerly all-male institutions, the introduction of studio arts and performance-based inquiry into liberal arts curricula, the rise of pre-professional majors. Even going back to the creation of new private religious colleges and universities or to the secularization of much academic study in the mid-19th Century. As we celebrate Swarthmore’s sesquicentennial this year, it’s hard to remember that once upon a time American small liberal-arts colleges might have seemed as much a faddish vanity as anything else, born out of every congregation and municipality wanting to put itself on the map with its own college.

Not that these changes were not major changes with a range of consequences, but well, here we are. The world did not end, the culture did not fall, knowledge was not lost forever. Often quite the contrary. Life went on.

In the end, when academics vest too much energy in discussions of particular, sometimes even peculiar, forms of process and structure within their institutions, they lose the ability to speak frankly about interestedness, both their own and the larger interests of their students and their societies. Simon During, whose recent essay “Stop Defending the Humanities” very much informs my own thinking in this piece, writes that “The key consequence of seeing the humanities as a world alongside other broadly similar worlds is that the limits of their defensibility becomes apparent, and sermonizing over them becomes harder”. An argument about whether a particular department gets a line or not, whether a particular major has this course or that course, about whether students must learn this or that theory, is always a much more parochial argument than the emotional and rhetorical tone of those discussions in their lived reality would imply. Nothing much depends upon such arguments except our own individual sense of self in relation to our profession. Which of course is often a very big kind of dependency when you’re inside your own head.

Perhaps counter to the general trend, I personally feel as if I have little invested in the fortunes of history as a discipline or African studies as a specialization. I have a great deal invested in the value of thinking about and through the past, and in the methods that historians (in and out of the academy) employ, but I don’t see such thinking as necessarily synonymous with the discipline of history as it exists in its academic form circa 2014. I have a lot invested in my own fortunes, and were I working for an institution where the fortunes of history or African studies in their institutional forms continuously determined the future of my own terms of employment, my sense of vestment in those things would have to change. I’m just lucky (perhaps) to work in a place that gives me the institutional freedom to cultivate my own sensibility.

There’s nothing wrong with self-interest. Keeping self-interest consciously in the picture is what keeps it from becoming selfishness; it’s what allows for some ethical awareness of where self-interest stops and the interests of other selves begin. That awareness can allow people to tolerate or even happily embrace a much wider range of outcomes and changes.

If it turns out, for example, that there are ways to reorganize labor within the academy that will create a much larger number of fairly good jobs, at the expense of exploitative forms of adjuncting but also at the expense of a very small number of extravagantly great jobs, well, that’s a good thing. If it turns out that more energy, attention and resources put into humanities labs or other new institutional structures leads to less energy, attention and resources to some more traditional structure of disciplinary study, well, what the hell, why not? Que sera, sera. If I need to teach one kind of course less often and another kind more often because of changes in student interest, then the main thing that change affects is me, my labor, my satisfaction, my sense of intellectual authenticity. Not the discipline or the major or the university I work for, except inasmuch as my sense of self is entangled in those things. Some entanglement is good: that’s what makes faculty good custodians of the larger mission of education.

A lot of entanglement is bad: that’s what leads to grandiose misidentifications of an individual’s transitory circumstances with the ultimate fate of huge collective projects (like disciplines or institutions or even departments) or society as a whole. That’s what leads to trying to control that fate through the lens of those individual circumstances.

There is a lot of entanglement in the academic humanities at the moment.

Hacking and Yacking

Scholars in STEM disciplines have their own concerns and worries, but they do not tend to feel the same kind of existential dread about the future of their own practices nor worry so much about the kinds of misremembered and misattributed “traditions” of scholarship and teaching that many humanists allow themselves to be weighed down by. This is not to say that they should get off lightly. STEM professors are also frequently prone to think that the structures of their majors or the organization of their disciplines or the resource flows that sustain their scholarship are precisely as they must be and have been at any given moment, and find it just as difficult to accept that not that much depends upon whether this or that course gets taught at this moment or in that fashion.

More to the point, most STEM faculty are copiously invited by the wider society to define their research as having immediate and urgent instrumental impact on the world. That’s what often leads to scientism in disciplines like psychology, sociology, economics and political science, wherein a demand for resources to support research is justified by strong claims that such research will identify, manage and resolve pressing social problems. In many ways, natural scientists and mathematicians are often more careful about (or even actively opposed to) claims that their work solves problems or improves the world than social scientists tend to be.

Hardly anyone in the academy seems able to refuse in principle the claim that their work might make the world a better place. Because of course, this could be true of anyone. Even more modestly self-interested people hope that in some small way they will leave the world better than they found it.

The problem here with humanists is the characteristic tropes and ways that they use to position themselves in relationship to the world (or as During aptly puts it, worlds), at least in the last three decades or so.

I found myself a bit embarrassed last year while attending a great event that my colleagues organized that showcased scholars and creators working with new media forms. After one presentation of a really amazing installation work, one of our students eagerly asked the artist, “What are the politics of your work?” and followed the question by stating that the work had accomplished important reframings of the politics of embodiment, of gender, of sexuality, of identity, of race, of technology, and of neoliberalism. There is almost no artist or scholar who is simply going to say, “No, none of that” in reply to something so earnest and friendly, and so it was in this case: the speaker politely demurred and asserted that the politics of the work were in some sense yet to be known even (perhaps especially) to the artist herself. I was embarrassed by the moment because the first part of the question was a performance of studied incuriosity, a sort of hunting for the answers at the back of the book. Cut to the chase! What’s the politics, so I know where to place this experience in my catalog of affirmations and confirmations. It was in its own way as instrumentalized a response as an engineering major listening to a presentation by a cosmologist about string theory and then saying, “Ok, but what can I make with this?” The catalog of attributions that formed the second part of the question both preceded and superseded any experience of witnessing the work itself.

Ok, I know: student! We all had such moments as students, and the thinking of our students is not necessarily an accurate diagnosis of our teaching and scholarship. But there seemed to me in that moment something of an embryonic and innocent reflection of something bigger and more pervasive.

Harvard faculty who recently surveyed the state of the humanities at their university identified many issues and problems, many of which they attribute to forces and actors outside of their own disciplines. However, one of the problems that the Humanities Project accepted ownership of was this: “Among the ways we sometimes alienate students from the Humanities is the impression they get that some ideas are unspeakable in our classrooms.” Or similarly, that some ideas are required. Recall my mention early on of the scholar who protested, “If you aren’t doing critique, you aren’t doing humanities”—and what the Harvard authors imply is that for some humanists, critique is not just a method or act, it is a fully populated rubric that dictates in advance a great many specific commitments and postures, many of which are never fully referenced back to some coherent underlying philosophy.

Scholars who identify with “digital humanities” know that they can quickly get a rise out of colleagues (both digital and analog) by reciting the phrase, “More hack, less yack”. Rightly so! First because working with digital technology and media requires lots of thoughtful yacking if you don’t just want to make the latest Zynga-style ripoff of a social media game or whatever. Second because theory and interpretation are hacks in their own right, things which act upon and change the world. The phrase is sometimes read as a way of opting out of critique, and thus retreating into the privileged invisibility of white masculinity while continuing to claim a place in the humanities. Sometimes that’s a fair reading of what the phrase enables or intends.

The problem with critique, however, is not that it’s not a hack, but that many times the practice of critique by humanistic scholars is not terribly good at hacking what it wants to hack. This is not a new problem, nor is it a problem of which the practitioners of critique are unaware. This very thought was the occasion for fierce debates between left intellectuals (both in and outside of the academy) in the 1980s, and one of the sharpest interventions into that dialogue was crafted by the recently deceased Stuart Hall.

In the 1980s, Hall was working out of an established lineage of questions about the relationship between intellectuals and the possibility of radical transformation of capitalist modernity, most characteristically associated with the works of Western Marxists like Gramsci, Adorno, and Lukacs but also other lineages of critical work associated with Bourdieu, Foucault, and others. Since this was one of the formative moments in my own development as a scholar, the most electric thing for me about Hall’s reading of the 1980s in Britain was his insistence that Thatcherism had gained its political ascendancy in part because of its adroit reworkings of public discourse, that it managed to connect in new ways with the subjectivity and intimate cultural worlds of the constituencies that it brought into a new conservative coalition. That is, Thatcherism was not merely a question of command over a repressive apparatus, not merely an expression of innate structural power, but the contingent outcome of a canny set of tactical moves within culture, moves of rhetorical framing and sympathetic performance. The position was easily applied to Reaganism as well, in particular to explaining the rise of the so-called Reagan Democrats.

This was of course exciting to left intellectuals (like me) who saw themselves as having expert training in the interpretation of culture, because it seemed to imply that left intellectuals could make a countermove on the same chessboard and potentially hope to have a big impact. But here came some problems, which Hall himself always seemed to have a better grasp on than many of those who claimed him as an influence. Namely, that knowing how identities are constructed, how frames operate, how common sense is produced, is not the same as knowing how to construct, how to frame, how to produce common sense.

Critique commonly embeds within itself Marx’s commandment to not just interpret the world but also to change it. That’s the commitment to “hack”, to act upon the world. What Hall and similar critics like Gayatri Spivak or Judith Butler had to ask during the debates of the 1980s and 1990s was this: what kinds of frames and rhetorical moves create transformative possibilities or openings? Hall played around with a number of propositions, such as “strategic essentialism”: that is, leverage the ways that the language of essentialism is powerfully mobilizing within communities formed around identity while not forgetting that this is a strategic move, a conscious “imagining” in Benedict Anderson’s sense. Forgetting that it’s a strategy risks appropriation by reactionary movements and groups associated with nationalism or sectarianism. Which is in some cases more or less what has happened.

But the risk or the problem was more profound than that. In the very best case this scenario involves anointing yourself as part of a vanguard party or social class with all the structural and moral problems that vanguardism entails. That is, the reason you believe you can play the chess game of framings and positionality is that you know more and know better than the plebeians you’re trying to move and mobilize. And you believe that’s equally true of the guys on the other side: that the Reaganites and their successors win because they know which buttons to push without themselves being captive to those same buttons, that they know what they’re doing, not that they authentically feel and believe what they say. It is a conception of critique that puts the critic (or enemy of the critic) up and outside of the battlefield of culture, as capable of framing because they are not produced by frames. And in the case of humanistic critique from the left, the critic holds that their own engagement is not even produced by the defense or advancement of self-interest. The position has to hold that the interests of critique are simultaneous with the interests of everyone who is not grossly self-interested: that is, with a true, yet-to-be pluralistic kind of universal good that negates the self-interest of capitalist modernity. That it is working for the Multitude rather than the Empire. This is one of the oldest problems for any radical left: how to account for the circumstances of its own possibility. There are many venerable ways out of that intellectual and political puzzle, but it is always an issue and one that becomes more acute in a politics that names culture as a battleground and intellectuals as one important force in that struggle.

What humanists who aspire to critique understand best about rhetoric, language, culture (both expressive and everyday) through both theoretical and empirical inquiry is often at odds with effective action within culture, with the crafting of powerful interventions into public rhetoric, with the shaping of consciousness through framing gestures. Humanists are rightly suspicious of foundationalist, positivistic claims about the causes and sources of culture and consciousness, whether they come from evolutionary psychology or economics. That often means that only highly particularistic, highly local understandings of why people think, talk, and imagine in certain ways will do as a basis for expert knowledge of people thinking, representing, talking and imagining. But much of the time when we wrap up our scholarly work that has that kind of attention to particularism, we don’t end up more confident in our understandings of how and where we might mobilize or act. The particularism of much humanistic study is frequently even more fiercely inhibiting to the possibility of a deliberate instrumental reframing of the themes or mindsets that have been studied. Why? Because such study often convinces us that consciousness and discourse are the massively complex outcomes of the interaction of many histories, many actions, many institutions. It convinces us that frames and discourse often shape public culture and private interaction in ways that only partially involve deliberate intent and that also often escape or refract back upon that intent. And, if we’re at all self-aware, it often reveals to us that we’re the wrong people in the wrong place at the wrong time to be trying to reframe the identities, discourses and institutions that we have identified as being powerful or constitutive.

One way out of that disappointing moment is to assert that when the other guys win, it’s because they cheat: they have structural power, they have economic resources, they astroturf. Which just takes us back to some of Hall’s critics on the left who always thought messing around with cultural struggles was a waste of time. At least some of them more or less got stuck instead with hanging around waiting for the structural contradictions of capitalism to finally reach their preordained conclusion. Or alternatively anointed themselves not as the captains of counter-hegemonic consciousness but as the direct organizers of direct struggles, a posture which has usually led up and out of direct employment within the academy.

Accepting the alibi that the right wins in battles for public consciousness because they have overwhelming structural advantages prevents the development of a meaningful curiosity about why some discursive interventions into public culture (conservative and otherwise) are in fact instrumentally powerful. Many humanistic critics seem doomed to take power and domination as always-known, always-transparent subjects. There have been significant attempts to undo that doom–the history of whiteness offered by scholars like David Roediger and Nell Irvin Painter is one great example, and there are others. But always there is the problem: to treat the interiority of power and domination as being as interesting, as unknown, as contingent as anything else we might study is to open a space of vulnerability, to make critique itself contingent not just in its means but its ends. If it turns out, for example, that both powerful and subaltern conservatives in contemporary American society are as produced by and within culture as anyone else, then that potentially activates a whole range of embedded intellectual and ethical obligations that we tend to be guided by when we’re looking at something we imagine to be a bounded “culture” defining a place, a community, a people.

If it turns out that the other guys win sometimes not because they’re cheating but because they’re more present and embedded in the game than the academic intellectual, then what? Hall was always aware of this dimension of Thatcherism: that it worked in part because Thatcher herself and a few of her supporters were acutely aware of the ressentiments of some lower middle-class Britons, because of her fluency in some of their social discourses and dispositions. It stopped working because most of the rest of her party only spoke upper-class twittery or nouveau-riche vulgarism but also because ressentiment as a formation tends to press onward to vengeance and cruelty, to overstep. But this goes for many causes and ideals that progressives treasure as well. The growing acceptance of gay marriage in the United States, unless you share Michele Bachmann’s view that it’s the work of a sinister conspiracy, has at least as much to do with a long, patient appeal to middling-class American views of decency and fairness as it does with sharp confrontational attacks on the fortresses of heteronormativity. It’s an achievement, as some queer theorists have noted, that has the potential cost of the bourgeois domestication of sexuality and identity as a whole, but it’s still an example of a deliberate working of the culture towards an end, and it’s a working that scholars and activists can rightly say they contributed to.

But this is the thing: every move that’s justified as a move within and about the culture then needs to be thought through in terms of what its endgame might be. You can justify tone-policing and calling people out on social media as a way to mobilize the marginalized, as a strategy of making people visible. You can justify it as catharsis. But I’m not sure, as some seem to be, that there’s much in the way of evidence that it works as a strategy for controlling, suppressing or transforming dominant speech.

The critical humanist wants to lift up the hood of the culture and rebuild the engine, but it turns out the toolkit they’ve actually got is for the maintenance of some other machine entirely. Which means in some sense that all the framings, all the hackings, all the interventions into rhetoric have tended to come squarely back to that other machine: to the academy itself. Which explains why the anxieties of critique are visited back so intensely upon academic life and upon academic colleagues who seem in some fashion or another to have wavering loyalties. Humanistic critique might not have hacked the culture, but it definitely remade the academy. We are our own success story, but critique dare not let itself believe that success is in any way firmly accomplished, and it must also believe that any such accomplishment is always in deathly peril. It is, in any event, not enough: the remaking of the academy alone is never what critique had an aspiration to achieve.

I don’t think that bigger aspiration was wrong, but I do think that taking it seriously should always have implied a fundamentally different kind of approach to professionalism and institutionalization for critical humanists than it ultimately did. It’s not surprising in that sense that Stuart Hall always insisted that he wasn’t really an academic or a scholar, just an intellectual who happened to work in an academic environment. But of course even that “happened to” raises questions that were almost impossible for Hall and others to explore or explain. What if the deeply humanistic and progressive intellectuals who really make powerful or influential moves on the chessboard are not, cannot be, in the academy, whether by design or a “happening”? What if they’re app designers or filmmakers or preachers or entrepreneurs or community activists or advertisers? And what if the powerful moves to be made in the public culture are not a function of profound erudition and methodological disciplinarity but emotional intelligence? Or the product of barely articulated intuitions about the histories and structures circulating in the body politic rather than the formal scholarly study of the same? (More uncomfortably on the “happened to” front, what logic would entice disciplines to hire intellectuals rather than scholars? I’ve met more than a few academic humanists who insist that they, like Hall, are only intellectuals passing through the university, only to see them turn around and become wholly committed to the most stringent enforcement of intensified and narrow disciplinary authority over who gets hired, tenured and promoted.)

The scholar devoted to critique could seek consolation by imagining they supply tools and weapons to other actors in the public sphere. That they give the intuitive critic and the culture worker information, ideas, frameworks. Hey, the Wachowskis read their philosophers when they made the Matrix films, right? And that would be a fair enough consolation in many cases: many people have been indirectly influenced by Foucault’s anatomization of power who could not cite him; Judith Butler changed the inner life of gender for people who have never heard of her. With a touch of humility, it’s not at all hard to claim our place as one more strand on the loom of cultural struggle.

Maybe that humility should be more than a touch. In recent discussions at Swarthmore over controversial events and a series of protests, I’ve heard it said more than once that academic institutions should never legitimate oppression by voluntarily inviting it inside their walls. Some of my colleagues have rolled their eyes in derision at the riposte of one student in the student newspaper who pointed out that we frequently and often respectfully read the works of people who were deeply involved in oppression: isn’t that legitimation, too? Well, why is that a silly response? It’s silly to some humanists because they believe that their own critical praxis allows both for awareness of how past (and present) works are implicated in power and for a plasticity and creativity in how we appropriate or create productive readings out of texts, lives, practices that we otherwise reject or abjure.

But this is where the hubris of an attachment to “framing” comes in. Like the Mythbusters, we come on at the beginning of the show and say: do not try this at home. We are trained, and so we can frame and reframe what we offer to produce an openness in how our students interpret and do it without producing too much openness. That novel can mean this thing or that thing or oh! how delightful, a new thing that it’s never meant before. But no, it doesn’t mean that thing, and no you shouldn’t think that of it, and oh dear, please you know that part is just awful. And so, if (for example) a terrible reactionary comes to campus and doesn’t perform his terribleness on cue and the wrong thing gets thought by many in the audience as a result, that’s a failure of framing. You know the frame has failed when the anticipated and required readings of the text are not performed. That’s not a failure of the audience and it’s not a success of the text. It’s alleged to be a failure of pedagogy, of scholarship, of intellectual praxis. The ringmaster forgot to flick his whip to get the clowns to caper when they were supposed to. All roads always lead back to us, ourselves, because that’s where we’ve vested our professionalism as both scholars and teachers: we are those who produce consciousness, at least within our own dominion.

The thing is, why do academic institutions legitimate? Because they do, they really do. There’s a reason why public figures and politicians who’ve just done something wrong or who have had the morality of their actions called into question often gratefully accept the opportunity to speak at a university, to accept an honorary degree, to teach a course. There’s a reason why the current government of Israel worries about the prospect of an academic boycott.

We legitimate not because we are adroit (re)framers, not because we put the Good Humanist Seal of Approval on some performances and the Stamp of Critique on others. We legitimate because, after all the populist anti-intellectualism, after all the asshole politicians trash-talking the eggheads who waste money on gender studies and art history, after all the billionaire libertarians who trash universities as a part of their own preening self-flattery, most people still trust and value academia, both their ideal vision of academia and even much of its reality.

Look it up: on the list of most-trusted and least-trusted professions (in an age of profound alienation and mistrust) teachers, professors and scientists all still fare very well. We legitimate because people expect us to do our homework, to be deeply knowledgeable, to be honest, to be curious, to be temperate and judicious, and to be fair. And they even trust us despite the fact that we are the gatekeepers of the economic fates of many of our fellow citizens, and often even trust us more in proportion to the degree to which we anoint the future elites of a society that is growing more unequal and unjust by the second.

This is not a liability: it’s a strength, but you have to use it as it comes. If there’s one thing that the theoretical indebtedness to Foucault among many humanists today should lead to, it is an awareness that virtue does not arise as an automatic consequence of your distance from power. If you want to practice critique, you work first with what you got and with who you are, you work the power you possess rather than pining for power elsewhere. The master’s tools can dismantle the master’s house: they built it, after all. Or they can change what’s inside of it. If that’s not acceptable, then you make something else, somewhere else, as someone else. Humanistic-critique-as-mastery-over-framing wants the legitimacy and influence of academic institutions without accepting the histories and readings that produce that legitimacy. It wants to be intellectuals elsewhere just happening to be here. It wants to hack without really understanding the code base it’s working with.

Academic Freedom as a Positive Liberty

Ok, but I too am anxious. I too do not want to work with what I’ve got and accept what I am. Can you tell that, 5,000-odd words later? And no, it’s not the anxiety of loss, not that old white liberal spiel about “oh, back in my day, the students were all very such-and-such, now we have that awful critique and multiculturalism and postcolonialism”. Inasmuch as I can and do perform that kind of ghastly professorial nostalgia, I’m probably indistinguishable from most of my humanist colleagues: oh, dear, I remember that great directed reading on Marxist critical theory with that student; oh dear, I used to have students who knew who Fanon was; and so on. Inasmuch as I am mournful in my expressions on social media, it’s often about my profound sense that many things I thought were irreversible signs of social progress have turned out to be profoundly reversible. Inasmuch as I rage about political trends, I sound very much like your average left-leaning humanistic professor.

It is not the anxiety of loss I feel most in my work these days. It’s the anxiety of a mostly-never-was and maybe never-will-be understanding of what I think the main or dominant professional ethos of an academic intellectual ought to be in scholarship and teaching and public persona.

It’s the opposite of what I think is embedded inside the idea of critique-as-reframing, critique as chess move in a war of position. When someone says to me, “Why didn’t you frame that event differently? Why do you let those words stand out there implying that the event means this? Those words out there that permit people to think that?”, my gut wants to reply as Justice Harry Blackmun did to the death penalty: “I shall tinker no more with the machinery of framing”.

By this I do not mean to say that I do not hope, as a writer, to mean what I say and say what I mean, and to influence people accordingly. But the worst problem with believing that any politics, intellectual or otherwise, is a matter of framing is ultimately the way that it encodes the framer as an agent and the framed as a thing. That both tempts the person who hopes to control the frame into a hubris that intensifies the ways in which they come off as inauthentic and manipulative (and therefore defeat their own goal) and paradoxically keeps the aspirant framer from a richer understanding of how and why other people come to think and feel and act as they do. That understanding is actually crucial if you hope to persuade (rather than frame) others.

With all of their defects, including potential blindness to power and an air of liberal blandness, the terms persuasion and dialogue are, if you’ll excuse the irony for a moment, a better frame for what a critical humanist intellectual, or maybe just a critically aware human being, might want to be and do in relation with others. Because they start at least with the notional humanity of everyone in the room, in the conversation, in the culture, in the society. That’s not a gesture of extravagant respect to other people, it’s not generosity. It’s a gesture of self-love and self-empowerment, because you are going to get precisely jackshit nowhere in moving people to where you think they ought to be if you permit yourself the indulgence of thinking some people are things who can be dogwhistled wherever you want them to be. Even the most crass and awful kinds of dogwhistles don’t work that way, really. Maybe that gets you some votes in the primary election but it doesn’t change hearts and minds, doesn’t change how people live and act. As Raymond Williams once said of advertisers, there are a lot of people working the culture who are magicians that don’t know how their own magic tricks work.

So part of how I want an institution devoted to thoughtful, scholarly inquiry and conversation to work is to stop overthinking everything. And I don’t think I’ll get that.

But it is also this. One reason I absolutely did not want to defend the presence of Robert George at Swarthmore in conventionalized terms of free speech, in conventional languages of academic freedom, is first that this is just the most tedious kind of counterpunch in the stupid pantomime show that American national politics have become. The outsiders who tut-tutted at Swarthmore students and faculty on Twitter and so on have not a fuck to give about academic freedom when it extends to something they don’t like or respect. If there is anything a decade of blogging often about academic freedom has convinced me of, it is that there is almost no one who can be counted upon to be an honest broker on the subject, but most especially not many of the right’s most dedicated concern trolls.

This raises the question of what exactly I am looking for as I wander around with my lamp in the daytime.

The idea that academic freedom means that the academy should be a perfect mirror of the wider society is stupid. That would not be the outcome of an honest and balanced approach to academic freedom. That would just be evidence that the academy had become completely pointless. As indeed I would say of any specific social or political institution: nothing with a mission or a purpose should be judged a success or failure by whether it is a precise microcosm of society as a whole. You make institutions to be a part, a piece, that the whole cannot be or isn’t already.

I’ve suggested in the past that academic freedom also doesn’t particularly accomplish what its defenders allege it does. It doesn’t liberate scholars and teachers to speak honestly and openly, it doesn’t incentivize the production of new ideas and innovation. Even less so now of course with the corrosion of tenure and the rise of adjunctification, but tenure never really protected most of what is claimed for academic freedom. It has long tended to domesticate, to conventionalize, to restrict scholarly speech and thought.

Academics still insist on defining academic freedom, like freedom of speech more broadly, as a negative freedom. A freedom from power, from restriction, from constraint, from retaliation. What if, instead, we defined it as a positive liberty? Meaning, something we were supposed to create more of for more people in more ways. What if we saw it as an entitlement, a comfort, a richness and saw ourselves not as the people protected from harm but as those who are obliged to set the table as extravagantly as we could?

What would that mean? It starts here: nothing human is alien to me. So then this: our curricula, our writing, our events, our conversations, should be cornucopias bursting to the brim with everything, with anyone. Our learning styles, our teaching styles, our everyday world of learning and thought, should run the spectrum and we should love each thing and everyone in that range. Love (but challenge!) the slacker, the romantic, the specialist, the literalist, the dissenter, the generalist, the cynic, the critic. The only thing you don’t love is the one who is trying to keep everyone else from their thing, who is consciously out to destroy and hurt.

Don’t build departments and legacies and traditions. Don’t hire people to cover fields, hire people because they’re different in their thinking and methods and styles and lived experiences and identities than the last person you hired. Build ecosystems full of niches and habitats. Let them change. Be surprised at what’s living over there in that place you haven’t looked at lately. Be intrigued when there’s some new behavior or relationship appearing.

Stop framing, stop managing. Because here’s the other thing: academic freedom retold as a positive liberty would be about accepting the ethical and professional responsibility to populate the academy with as many different kinds of shit as it can hold. It would be about giving up the responsibility to guarantee in advance what the outcomes will be. It’s about not quickly putting up the guard rails every time it looks like someone is going off-message or having an unapproved interpretation. Not freedom to speak, not guarantees against suppression. The active responsibility to cultivate more speech! More speech and thoughts of any kind! All kinds in all the people! All the things!

I build most of my classes as environments and see my students as agents. I’m not empowering them in the conventional Promethean sense, taking them paternalistically from marginality into authority. Sure, I have boundaries to what I’m doing, and I have responsibilities to enforce some standards, both those I agree with myself and those that I am the custodian for. I’m not everyone and everything: I have things I know well, things I know less well, things I don’t know at all, and I steer clear of the last. I have my hangups and my obsessions: if you’re in my class, you’ll hear about them. But outside of that? Anything’s a good outcome. Anything has to be, if you’re really committed to teaching into the agency of students rather than teaching as the control over that agency. I learned that from my best graduate advisor, who helped Afrocentrists and Marxists and liberals and postmodernists and pretty much every foundling or lost puppy who ended up on his doorstep to be better and smarter at what they were, rather than remolding them into kinfolk in his lineage house. Almost all outcomes are good. Almost all lives that pass through education are good, and all of them should feel as if they grew and were enriched by that passage.

Which I think is frustratingly sometimes not the case, and I think it’s often because we the faculty in all our disciplines and all our institutions want to control too much, want to be not the gardeners of an ecosystem but the bosses of a workplace. Or the aspirant framers of a culture-to-come whose imagined transformations can only be thus and not that.

This is in the end the other place where the critical theories that inform so much of contemporary academic humanism are frustratingly mismatched with the substance of much practice. We should know better than to place “power” and “virtue” as opposites—but we should also know better than to embrace predictability and control. Both because systems, societies, futures are not predictable or easily controllable, and because many of the most beloved theorists among progressive humanists don’t want them to be. Don’t just describe some ideal possible future way of being as rhizomic, be the rhizome.

There are many powerful forces that would rise to stop such a vision, have already risen to do so. We can’t teach and speak and think this way in higher education as long as most of the teaching and thinking is happening at sub-poverty wages among adjuncts who have zero security and institutional power. We can’t teach and speak and think this way if our administrations are gigantic corporate-style bureaucracies or if our public funding is completely removed.

But the way I’m thinking the academy, and especially the humanities, could be might actually be the solution to many of those interminable debates about process and structure and even about public acceptance. If we could live with, even embrace, the profound indeterminacy of culture and transformation and knowledge, if we could build ecosystems and be rhizomes, I think we’d be more consistent with the indeterminacy and unpredictability of the world that we hope to serve.

But yes, I’m anxious and a bit sad. I don’t expect this to ever be the way we are, and I fear the reason won’t just be that something alien or sinister will move in to stop us. It’ll be because we won’t. Maybe we can’t. I think there are lots of humanists I know who are doing some or all of what I think we should do, lots of humanists who are wise enough, most of the time, to avoid thinking they can control the horizontal and the vertical. But it’s a reflex that jerks very hard at precisely the moments where it shouldn’t, and each time it does a niche in the ecosystem goes dead. Cliched as the Serenity Prayer might be, what we need is the wisdom to know the difference between what we can (and should) change and what we can’t (or shouldn’t). If not for our institutions and our students and our disciplines, for ourselves. Because I think that’s where there’s some relief from anxiety. Let it go.

Posted in Academia, Generalist's Work, Oh Not Again He's Going to Tell Us It's a Complex System, Politics, Swarthmore | 8 Comments

Nervous Conditions

Nick Kristof’s call to cloistered, monastic faculty to come out and speak to wider publics has already been lambasted, dissected and critiqued by a wide range of academics.

My knee jerked pretty hard as well when I read it, for many of the same reasons that other writers have already articulated:

1) that many faculty are and have been speaking in public in a variety of ways for a long time;
2) that the “cloistered monks” trope is at best a tired one with roots in long-standing habits of American anti-intellectualism and at worst a specific nod to many present interests that would like to strip-mine higher education;
3) that academics who speak to larger publics, who synthesize and generalize knowledge, depend deeply on the work of specialists (who may or may not be equally involved in speaking to various publics);
4) that focusing this appeal on faculty and their temperaments is aiming the persuasive power of a columnist at the wrong target: the real issues are located in structures of promotion and tenure for tenure-track faculty and in the casualization of academic labor for the vast majority of teachers and researchers. Speaking to wider publics as an adjunct is both wholly unrewarded (as is any professional commitment or effort by an adjunct) and extraordinarily risky.

The third point is especially crucial: scholars who engage publics as experts are navigating across rich, deep, complex oceans of knowledge. Take away what Kristof disparages and the public scholar is just one more bullshitter in an endless desert of bullshit.

However, I was also struck a bit by the ferocity and intensity of the reaction by academics to Kristof, and worried about it for two reasons. The first is simply about rhetorical politics and the danger of appearing to protest too much. It’s fairly predictable that Kristof would smugly affirm that this reaction shows that he touched a sensitive spot and therefore must be right.

It doesn’t mean he’s right, but it does mean that he touched a sensitive spot. So the question worth thinking about more deeply is, “What makes this such a sore subject for faculty?” Kristof’s goad is something that academics themselves worry about quite a lot. Even before wistful speculation about the loss of “public intellectuals” became common fodder for conversation among academics in the 1990s, there were long-running discussions about whether professors owed something to wider publics, and if so, what exactly it was that they owed: scholarship? teaching? engagement? The rise of digital media intensified and shifted this long-running conversation, and sharpened its stakes. The strategic challenge of public engagement for print-era scholars, especially on the left, was how to gain access to the carefully guarded fortresses of print capitalism or how to construct powerful alternative media outlets in a world where media technologies of production and circulation were scarce or expensive. Digital media allowed scholars to think instead about less visible processes that shaped access and attention, and about whether “the public sphere” was an obsolete, never-was or poisoned concept in the first place. Should a scholar seeking engagement speak instead with already-engaged audiences with an interest in the scholar’s particular expertise? Was engagement only meaningful in relationship to subcultures? Was an engaged scholar instead someone who listened to publics rather than spoke to them? What if engagement meant less a kind of synthesis or summary of long-form scholarship in an otherwise familiar print format and voice and more some kind of radically different way of speaking?

These are questions that many academics have been exploring and inhabiting for the last two decades, so in some sense, we shouldn’t necessarily begrudge a columnist asking something of the same questions. That is, if he bothered to ask them as questions and bothered to ask them in a way that didn’t use lazy tropes about the incomprehensibility of professorial writing.

The useful conversation that might be possible if our knees stopped jerking and a columnist like Kristof stopped playing to the peanut gallery centers on a point I’ve already raised: what permits an academic to perform a public role in a distinctive way? Kristof sees academics as “smart” people who can contribute their intelligence and insight to public discussions. But the problem is that the American public sphere has become a difficult place for some people to speak and be heard. Beyond the obvious issues with making a contribution to public conversations via digital media that are rife with bullying and often toxic levels of sexism and racism, there is an equally pressing problem with the capture of expertise by lobbyists and closed political institutions. Kristof ought to be familiar with that issue, considering the company he keeps at the Times op-ed page, but the very way he makes his call suggests how unconcerned he is about it. Who exactly wants “smart” academic input? And what kind? Does Kristof want to hear from anthropologists or historians about the issues he wants to confront? Judging from past behavior, no. Do policy-makers really want to hear from any expert whose thinking might disrupt or confound solutions that were always going to be reached anyway? What kinds of public and political action are actually open to the unexpected input of already-existing academic expertise, and might actually be transformed were it made available? The answer, I fear, is “not much, not many, not really”. Maybe the issue is with the “we” that thinks they need professors, not with the professors. Kristof might ask, using himself as a test, what exactly is dependent upon this input that he thinks is lacking.
If what he means by accessibility is “I want professors who agree with what I already think, and I want them to say so clearly”, that’s very different than saying, “There’s something I don’t understand, something I can’t do, something beyond my knowledge”. The former is just hunting for a few more bits of costume jewelry to burnish the finery of the powerful. The latter would be a welcome invitation, but given that it starts with humility, don’t hold your breath.

This might bring us around to a real issue that’s worth taking seriously, past all the dramatics of the academic response to Kristof. If we react strongly, it isn’t just because he’s insulting. It is also because, without really intending to, he is genuinely raising a difficult problem. We already know that in an open-source, open-access, digital media, crowdsourcing world, op-ed columnists in print media are dispensable. The issue that troubles all academics, however they write, wherever they teach, is whether the same is true of expertise in general. We haven’t yet been able to imagine what the new circumstances governing the circulation of expertise might ultimately be. We aren’t going to get a good conversation going about that from Kristof’s prompt, but the time is coming soon when we had better do so or risk sounding just as out of touch with the reality around us.

Posted in Academia, Blogging | 3 Comments