Canary in the iTunes

A thought about the media industry’s antipiracy efforts, seen in retrospect back to the beginnings of the digital age. In the NYT today, the question comes up as to whether consumers would pay to watch more movies on digital players if those movies were priced reasonably and the restrictions on viewing were permissive. This is the usual spectrum of debate: between media-industry watchdogs who think this is about a culture of theft and those who think it’s about pseudo-monopolies defending lazy, entitled revenue models in which they sold a copy of their product four or five times to the same consumers in different formats and circumstances.

Ruth Vitale, the anti-piracy executive covered in the article, suggested that the falling production of movies is a sign of the damage that piracy (in the “culture of theft” model) is inflicting on the industry.

What if the entire debate is a misfire? What if the 1990s were the final apex decade of a leisure-oriented, consumer-driven society? The last time a middle class existed and was working to earn more time at home, more time to themselves, more time to consume culture? The last time there was enough money (fueled by debt) to support the mass consumption of leisure? What if piracy is the canary in the coal mine for the growth of income inequality and the collapse of white-collar labor? What if no one has the time to really consume more than a small fraction of even the diminished current output of the media industries, because they’re working longer hours just to keep from getting fired or just to barely make ends meet? What if no one has the money, because of flat salaries and debt loads?

At that point, debating piracy per se is sort of like getting caught up in managing the ecological future of polar bears without noticing that you’re dealing with a very small part of a very big story. More importantly, it’s not just victims of income inequality that need to defend themselves against the new gilded age: if mass-consumer corporations want to have a future, they had better throw in with a broad “middling class” while there’s still time.


Read the Comments

I keep coming back, obsessively and neurotically, to the question of what a liberal arts education is good for.

I do think it helps with the skills that pay the bills. I do think it can make you a better citizen. I do think it can help you lay the foundation for the examined life. It doesn’t always do those things, and there are many other ways to get skills, to learn to be a better participant in your social and political worlds, and to become a critical thinker.

A modest example of the possibilities occurred to me today. The concept of social epistemology is becoming more important in philosophy as it is applied both analytically and technically to various kinds of digitally-mediated crowdsourcing. One strain of thought about social epistemology might suggest that philosophy could be as much an ethnographic discipline as an interpretative one, that it could look for how social groups generate epistemological or philosophical frameworks out of experience. There are plenty of other ways to take an interest in how people think in their social practices and everyday lives about ethics, knowledge, and so on, in any event. The question in part is, “What could a liberal arts education–or formal scholarship–add to such everyday, lived thinking that it doesn’t already have?”

I’m going to do something a bit unusual. Rather than the usual “don’t read the comments!” I’m going to suggest that at least sometimes comments on Internet sites offer some insights into how people in general think.

Take a look at this Gawker thread about a tailgater and the “karmic justice” meted out to him for following the driver ahead of him too closely and aggressively. (He eventually passes to the right at high speed, gives the driver the finger multiple times, merges back left on a lightly wet road and loses control of his truck, crashing into the median.)

The main story accepts the “karmic justice” narrative. But in the comments, three different interpretations eventually emerge.

The first validates the main story: the tailgater was unambiguously in the wrong and it is right to feel some vindication at his misfortune.

The second holds that the tailgater was acting poorly but that the driver making the videotape was acting poorly as well, for two reasons: first, that the driver being tailgated was videotaping while driving (and was therefore indulging in dangerous behavior too), and second, that the driver being tailgated (the tailgatee?) should simply have pulled to the right and let the faster driver go ahead.

The third is unabashedly on the side of the tailgater. These commenters hold that tailgating is a practical, even necessary, response to drivers who insist on blocking the left lane of any roadway at a speed slower than the speed that the tailgater wishes to go. They support both the tailgating and the obscene gesture and regret that the tailgater had an accident.

There’s a minor fourth faction that is primarily irritated at yet another person videotaping with a smartphone held in portrait mode. Protip: they, at least, are completely right.

What’s interesting in the comments is that each group has strategies for replying to the other two. The anti-tailgaters point out that the roadway in question is not a major highway, that the driver being tailgated was going the maximum speed limit, that she says she did not look at the camera while holding it, that she says she was going to turn left very soon, and that traffic to the right was fairly heavy. The blame-on-all-sides faction finds that the videotaping driver has a history of being aggrieved about a lot of things, that there seemed to be plenty of space to the right, and that it’s unwise (especially in Florida) to tangle with a person demonstrating road rage. The pro-tailgaters…well, they don’t seem to have much other than a view that tailgating is necessary and justified.

It’s easy to just say, “A pox on all their houses” or to simply join in the debate on one side or another. I guess what I’m struck by is that when you pull back a little, each of these approaches is informed, whether the people are consciously aware of it or not, by some potentially consistent or coherent views of what’s right and wrong, wise and unwise, fair and unfair.

What I wonder sometimes is whether we could construct a coherent underlying credo or statement about our views, if we were all asked to step back from the views we can express so hotly in comments threads in social media or other contexts. So much of our discourse, online and offline, is reactive or dialectical. That’s actually good in the sense that real cases or experiences are a better place to start, perhaps, than arid thought-experiment scenarios about pulling trolley levers to save or not save lives. But maybe this is where some sort of liberal-arts experience could help: it could help us go from a reactive reading to a more contemplative description of why each of us thinks what we think.

Suppose I’m against the tailgater: why? Because I object morally to tailgating, period–its aggression, its danger? Is it ok to be aggressive in return? (The driver in the video apparently has specified that she did not brake-check the tailgater.) How confident am I that tailgating is the result of road rage? How much do I actually know about another driver, and why should I be confident about my strong moral readings of someone whom I only know in a single dimension of their behavior? If I was going really slowly, would tailgating me be justified?

Suppose I’m against both of them: why? Can I trust that someone can in fact be a good driver while holding up a smartphone and not looking at it? Why do I trust or not trust in that proposition? Why not, as this approach suggests, just yield to someone determined to be antisocial and get out of their way? Is being righteous in opposing a tailgater just a kind of self-indulgent or egotistical response? Or an aggression of another kind? What does that imply about other cases?

Suppose I’m certain that if I want to go a particular speed, it’s right to allow me to do so until or unless I am charged with the crime of speeding, or unless I have an accident as a result. What else does that imply? Do I mean it in all cases, or is driving a special case? Am I right that I’m a better driver than most others? If so, what does that entitle me to?

I suspect that in a lot of cases, driving (or other everyday practices) are held to be “special cases”–that to try and work back to some bigger or more comprehensive view of the world isn’t going to work for many people in the Gawker thread. But that too is interesting: if much of how we read the “manners” of everyday life is ad hoc, that’s not necessarily bad, just significant.


Frame(d)

High Anxiety

In modernity, dread only takes a holiday once in a while. Right now Mr. Dread is hard at work all around the world, and he’s not just sticking to the big geopolitical dramas or some single-issue fear. He’s kicking back and making himself comfortable everywhere where uncertainty holds sway, which is to say everywhere: homes, workplaces, boardrooms, the shop, the street, the wilderness.

So asking anyone “why so anxious?” is an almost pointless question. Who isn’t anxious? All the tigers in our souls are prowling the bars of whatever cage we’re in. But I’ll go ahead and ask.

What I’ll ask about is this: what stirs many tenured faculty in humanities departments at wealthy private colleges and universities to so often pick and fret and prod at almost any perturbation of their worlds of practice–their departments, their disciplines, their publications, their colleges and universities? Why do so many humanistic scholars rise to almost any bait, whether it is a big awful dangling worm on a barbed hook or some bit of accidental fluff blown by the wind into their pond?

The crisis in the humanities, we’re often assured, doesn’t exist. Enrollments are steady, the business model’s sound, the intellectual wares are good.

The assurance is, in many ways, completely correct. The trends are not so dire and many of the criticisms are old and ritualized. Parents have been making fun of the choice to major in philosophy for five decades. Or longer, if you’ve read your Aristophanes.

And yet humanists are in fact anxious. Judging from a number of experiences I’ve had in the last year at Swarthmore and elsewhere, there are more and more tense feelings coming from more directions and more individuals, in reaction to a wider and wider range of stimuli.

Just as one example, I just got back from a workshop with other faculty from small private colleges who have been working with various kinds of interdisciplinary centers and institutes and almost all of them reported that they’re constantly peppered by indirect or insinuated complaints from colleagues. We even heard a bit of it within the workshop: at one point, an audience member at the keynote said to the speaker, “Whatever it is you’ve just shown us, it’s not critique, and if it’s not critique, it’s not humanities”. When faculty are willing to openly gatekeep in a public or semi-public conversation, that’s a sign that shit is getting real.

I’d call it defensiveness, but that word is enough to make people legitimately defensive: it frames reaction as overreaction. Worried faculty are not overreacting. Maybe the humanities aren’t in crisis, but the academy as professors have known it in their working lives is. It is in its forms of labor, in its structures of governance, in its political capital, in its finances. That’s what makes the tension within the ranks of the few remaining tenured faculty who work at financially secure private institutions so interesting (because otherwise they are so atypical of what now constitutes academic work). Why should anxiety about the future afflict even those who have far less reason for anxiety?

The alarm, I think, is about the possibility (not yet the accomplishment) of transformations across a broad spectrum of everyday academic habitus: in the purposes and character of scholarship, in the modes of its circulation and interpretation, in the methods and affect of inquiry, in the incentives and commands that institutions deploy, in the goals and practice of teaching. These fears are coupled to the unbearable spectacle of many real changes that have already taken place in the political economy of higher education–in the terms and forms of labor and in practices of management–many of them unambiguously destructive. A tenured humanist at a well-resourced private university or college might feel secure in their own working future, but that is the security (and guilt) of a survivor, a security situated in a world where it feels increasingly irresponsible to encourage young people to pursue academic careers as either vocation or job.

Change comes to every generation in academia. Rarely has any generation of academic intellectuals ceded power and authority gently or kindly to the next wave of upstarts. But most transitions are a simple matter of disciplinary succession: old-style political and intellectual history to social history to the “cultural turn” and so on. Whatever is at stake now seems beyond, above and outside those kinds of stately progressions.

When academia might or could change fundamentally (as it did at the end of the 19th Century, as it did in the 1920s, as it did after the Second World War), that tends to harshly expose the many invented traditions that usually gently sediment themselves into the working lives and psyches of professors. What we sometimes defend or describe as policies and practices of long antiquity and ironclad necessity are suddenly exposed as relatively recent and transitory. We stop being able to pretend that sacred artifacts of disciplinary craft like the monograph or peer review are older than a generation or two in their commonality. We draw lines of descent between ourselves and those intellectuals and professors we imagine to be our ancestors, but it only takes a few generations before we’re desperately appropriating and domesticating people who lived and worked in situations radically unlike our own. We try to whistle our way across jagged breaks and disjunctures: do not mind the gaps! Because if past intellectuals carried on writing, thinking and interpreting without tenured and departmentalized disciplinarity to support them, then arguably future intellectuals could (and will!) too.

American professors have figuratively leapt upon melancholic bonfires in gloomy protest all through the 20th Century over such retrospectively small perturbations as the spread of electives, the fall of Western Civilization (courses), the admission of women into formerly all-male institutions, the introduction of studio arts and performance-based inquiry into liberal arts curricula, and the rise of pre-professional majors. Even going back to the creation of new private religious colleges and universities, or to the secularization of much academic study in the mid-19th Century. As we celebrate Swarthmore’s sesquicentennial this year, it’s hard to remember that once upon a time American small liberal-arts colleges might have seemed a kind of faddish vanity, born out of every congregation and municipality wanting to put itself on the map with its own college.

These were major changes with a range of consequences, but well, here we are. The world did not end, the culture did not fall, knowledge was not lost forever. Often quite the contrary. Life went on.

In the end, when academics vest too much energy in discussions of particular, sometimes even peculiar, forms of process and structure within their institutions, they lose the ability to speak frankly about interestedness, both their own and the larger interests of their students and their societies. Simon During, whose recent essay “Stop Defending the Humanities” very much informs my own thinking in this piece, writes that “The key consequence of seeing the humanities as a world alongside other broadly similar worlds is that the limits of their defensibility becomes apparent, and sermonizing over them becomes harder”. An argument about whether a particular department gets a line or not, whether a particular major has this course or that course, about whether students must learn this or that theory, is always a much more parochial argument than the emotional and rhetorical tone of those discussions in their lived reality would imply. Nothing much depends upon such arguments except our own individual sense of self in relation to our profession. Which of course is often a very big kind of dependency when you’re inside your own head.

Perhaps counter to the general trend, I personally feel as if I have little invested in the fortunes of history as a discipline or African studies as a specialization. I have a great deal invested in the value of thinking about and through the past, and in the methods that historians (in and out of the academy) employ, but I don’t see such thinking as necessarily synonymous with the discipline of history as it exists in its academic form circa 2014. I have a lot invested in my own fortunes, and were I working for an institution where the fortunes of history or African studies in their institutional forms continuously determined the future of my own terms of employment, my sense of vestment in those things would have to change. I’m just lucky (perhaps) to work in a place that gives me the institutional freedom to cultivate my own sensibility.

There’s nothing wrong with self-interest. Keeping self-interest consciously in the picture is what keeps it from becoming selfishness, it’s what allows for some ethical awareness of where self-interest stops and the interests of other selves begin. That awareness can allow people to tolerate or even happily embrace a much wider range of outcomes and changes.

If it turns out, for example, that there are ways to reorganize labor within the academy that will create a much larger number of fairly good jobs, at the expense of exploitative forms of adjuncting but also at the expense of a very small number of extravagantly great jobs, well, that’s a good thing. If it turns out that more energy, attention and resources put into humanities labs or other new institutional structures leads to less energy, attention and resources to some more traditional structure of disciplinary study, well, what the hell, why not? Que sera, sera. If I need to teach one kind of course less often and another kind more often because of changes in student interest, then the main thing that change affects is me, my labor, my satisfaction, my sense of intellectual authenticity. Not the discipline or the major or the university I work for, except inasmuch as my sense of self is entangled in those things. Some entanglement is good: that’s what makes faculty good custodians of the larger mission of education.

A lot of entanglement is bad: that’s what leads to grandiose misidentifications of an individual’s transitory circumstances with the ultimate fate of huge collective projects (like disciplines or institutions or even departments) or society as a whole. That’s what leads to trying to control that fate through the lens of those individual circumstances.

There is a lot of entanglement in the academic humanities at the moment.

Hacking and Yacking

Scholars in STEM disciplines have their own concerns and worries, but they do not tend to feel the same kind of existential dread about the future of their own practices nor worry so much about the kinds of misremembered and misattributed “traditions” of scholarship and teaching that many humanists allow themselves to be weighted down with. This is not to say that they should get off lightly. STEM professors are also frequently prone to think that the structures of their majors or the organization of their disciplines or the resource flows that sustain their scholarship are precisely as they must be and have been at any given moment, and find it just as difficult to accept that not that much depends upon whether this or that course gets taught at this moment or in that fashion.

More to the point, most STEM faculty are copiously invited by the wider society to define their research as having immediate and urgent instrumental impact on the world. That’s what often leads to scientism in disciplines like psychology, sociology, economics and political science, wherein a demand for resources to support research is justified by strong claims that such research will identify, manage and resolve pressing social problems. In many ways, natural scientists and mathematicians are often more careful about (or even actively opposed to) claims that their work solves problems or improves the world than social scientists tend to be.

Hardly anyone in the academy seems able to refuse in principle the claim that their work might make the world a better place. Because of course, this could be true of anyone. Even more modestly self-interested people hope that in some small way they will leave the world better than they found it.

The problem with humanists lies in the characteristic tropes and moves they use to position themselves in relation to the world (or, as During aptly puts it, worlds), at least over the last three decades or so.

I found myself a bit embarrassed last year while attending a great event that my colleagues organized that showcased scholars and creators working with new media forms. After one presentation of a really amazing installation work, one of our students eagerly asked the artist, “What are the politics of your work?” and followed the question by stating that the work had accomplished important reframings of the politics of embodiment, of gender, of sexuality, of identity, of race, of technology, and of neoliberalism. There is almost no artist or scholar who is simply going to say, “No, none of that” in reply to something so earnest and friendly, and so it was in this case: the speaker politely demurred and asserted that the politics of the work were in some sense yet to be known even (perhaps especially) to the artist herself. I was embarrassed by the moment because the first part of the question was a performance of studied incuriosity, a sort of hunting for the answers at the back of the book. Cut to the chase! What’s the politics, so I know where to place this experience in my catalog of affirmations and confirmations. It was in its own way as instrumentalized a response as an engineering major listening to a presentation by a cosmologist about string theory and then saying, “Ok, but what can I make with this?” The catalog of attributions that formed the second part of the question both preceded and superseded any experience of witnessing the work itself.

Ok, I know: student! We all had such moments as students, and the thinking of our students is not necessarily an accurate diagnosis of our teaching and scholarship. But there seemed to me in that moment something of an embryonic and innocent reflection of something bigger and more pervasive.

Harvard faculty who recently surveyed the state of the humanities at their university identified many issues and problems, many of which they attribute to forces and actors outside of their own disciplines. However, one of the problems that the Humanities Project accepted ownership of was this: “Among the ways we sometimes alienate students from the Humanities is the impression they get that some ideas are unspeakable in our classrooms.” Or similarly, that some ideas are required. Recall my earlier mention of the scholar who protested that if it’s not critique, it’s not humanities. What the Harvard authors imply is that for some humanists, critique is not just a method or act, it is a fully populated rubric that dictates in advance a great many specific commitments and postures, many of which are never fully referenced back to some coherent underlying philosophy.

Scholars who identify with “digital humanities” know that they can quickly get a rise out of colleagues (both digital and analog) by reciting the phrase, “More hack, less yack”. Rightly so! First because working with digital technology and media requires lots of thoughtful yacking if you don’t just want to make the latest Zynga-style ripoff of a social media game or whatever. Second because theory and interpretation are hacks in their own right, things which act upon and change the world. The phrase is sometimes read as a way of opting out of critique, and thus retreating into the privileged invisibility of white masculinity while continuing to claim a place in the humanities. Sometimes that’s a fair reading of what the phrase enables or intends.

The problem with critique, however, is not that it’s not a hack, but that many times the practice of critique by humanistic scholars is not terribly good at hacking what it wants to hack. This is not a new problem, nor is it a problem of which the practitioners of critique are unaware. This very thought was the occasion for fierce debates between left intellectuals (both in and outside of the academy) in the 1980s, and one of the sharpest interventions into that dialogue was crafted by the recently deceased Stuart Hall.

In the 1980s, Hall was working out of an established lineage of questions about the relationship between intellectuals and the possibility of radical transformation of capitalist modernity, most characteristically associated with the works of Western Marxists like Gramsci, Adorno, and Lukacs, but also with other lineages of critical work associated with Bourdieu, Foucault, and others. Since this was one of the formative moments in my own development as a scholar, the most electric thing for me about Hall’s reading of the 1980s in Britain was his insistence that Thatcherism had gained its political ascendancy in part because of its adroit reworkings of public discourse, that it managed to connect in new ways with the subjectivity and intimate cultural worlds of the constituencies that it brought into a new conservative coalition. That is, Thatcherism was not merely a question of command over a repressive apparatus, not merely an expression of innate structural power, but the contingent outcome of a canny set of tactical moves within culture, moves of rhetorical framing and sympathetic performance. The position was easily applied to Reaganism as well, in particular to explaining the rise of the so-called Reagan Democrats.

This was of course exciting to left intellectuals (like me) who saw themselves as having expert training in the interpretation of culture, because it seemed to imply that left intellectuals could make a countermove on the same chessboard and potentially hope to have a big impact. But here came some problems, which Hall himself always seemed to have a better grasp on than many of those who claimed him as an influence. Namely, that knowing how identities are constructed, how frames operate, how common sense is produced, is not the same as knowing how to construct, how to frame, how to produce common sense.

Critique commonly embeds within itself Marx’s commandment not just to interpret the world but to change it. That’s the commitment to “hack”, to act upon the world. What Hall and similar critics like Gayatri Spivak or Judith Butler had to ask during the debates of the 1980s and 1990s was this: what kinds of frames and rhetorical moves create transformative possibilities or openings? Hall played around with a number of propositions, such as “strategic essentialism”: that is, leverage the ways that the language of essentialism is powerfully mobilizing within communities formed around identity, while not forgetting that this is a strategic move, a conscious “imagining” in Benedict Anderson’s sense. Forgetting that it’s a strategy risks appropriation by reactionary movements and groups associated with nationalism or sectarianism. Which is, in some cases, more or less what has happened.

But the risk or the problem was more profound than that. In the very best case this scenario involves anointing yourself as part of a vanguard party or social class, with all the structural and moral problems that vanguardism entails. That is, the reason you believe you can play the chess game of framings and positionality is that you know more and know better than the plebeians you’re trying to move and mobilize. And you believe that’s equally true of the guys on the other side: that the Reaganites and their successors win because they know which buttons to push without themselves being captive to those same buttons, that they know what they’re doing, not that they authentically feel and believe what they say. It is a conception of critique that puts the critic (or the enemy of the critic) up and outside of the battlefield of culture, as capable of framing because they are not produced by frames. And in the case of humanistic critique from the left, the critic holds that their own engagement is not even produced by the defense or advancement of self-interest. The position has to hold that the interests of critique are simultaneous with the interests of everyone who is not grossly self-interested: that is, with a true, yet-to-be-realized pluralistic kind of universal good that negates the self-interest of capitalist modernity. That it is working for the Multitude rather than the Empire. This is one of the oldest problems for any radical left: how to account for the circumstances of its own possibility. There are many venerable ways out of that intellectual and political puzzle, but it is always an issue, and one that becomes more acute in a politics that names culture as a battleground and intellectuals as one important force in that struggle.

What humanists who aspire to critique understand best about rhetoric, language, culture (both expressive and everyday) through both theoretical and empirical inquiry is often at odds with effective action within culture, with the crafting of powerful interventions into public rhetoric, with the shaping of consciousness through framing gestures. Humanists are rightly suspicious of foundationalist, positivistic claims about the causes and sources of culture and consciousness, whether they come from evolutionary psychology or economics. That often means that only highly particularistic, highly local understandings of why people think, talk, and imagine in certain ways will do as a basis for expert knowledge of people thinking, representing, talking and imagining. But much of the time when we wrap up our scholarly work that has that kind of attention to particularism, we don’t end up more confident in our understandings of how and where we might mobilize or act. The particularism of much humanistic study is frequently even more fiercely inhibiting to the possibility of a deliberate instrumental reframing of the themes or mindsets that have been studied. Why? Because such study often convinces us that consciousness and discourse are the massively complex outcomes of the interaction of many histories, many actions, many institutions. It convinces us that frames and discourse often shape public culture and private interaction in ways that only partially involve deliberate intent and that also often escape or refract back upon that intent. And, if we’re at all self-aware, it often reveals to us that we’re the wrong people in the wrong place at the wrong time to be trying to reframe the identities, discourses and institutions that we have identified as being powerful or constitutive.

One way out of that disappointing moment is to assert that when the other guys win, it’s because they cheat: they have structural power, they have economic resources, they astroturf. Which just takes us back to some of Hall’s critics on the left, who always thought messing around with cultural struggles was a waste of time. At least some of them more or less got stuck instead with hanging around waiting for the structural contradictions of capitalism to finally reach their preordained conclusion. Or they alternatively anointed themselves not as the captains of counter-hegemonic consciousness but as the direct organizers of direct struggles, a posture which has usually led up and out of direct employment within the academy.

Accepting the alibi that the right wins in battles for public consciousness because they have overwhelming structural advantages prevents the development of a meaningful curiosity about why some discursive interventions into public culture (conservative and otherwise) are in fact instrumentally powerful. Many humanistic critics seem doomed to take power and domination as always-known, always-transparent subjects. There have been significant attempts to undo that doom–the history of whiteness offered by scholars like David Roediger and Nell Irvin Painter is one great example, and there are others. But always there is the problem: to treat the interiority of power and domination as being as interesting, as unknown, as contingent as anything else we might study is to open a space of vulnerability, to make critique itself contingent not just in its means but its ends. If it turns out, for example, that both powerful and subaltern conservatives in contemporary American society are as produced by and within culture as anyone else, then that potentially activates a whole range of embedded intellectual and ethical obligations that we tend to be guided by when we’re looking at something we imagine to be a bounded “culture” defining a place, a community, a people.

If it turns out that the other guys win sometimes not because they’re cheating but because they’re more present and embedded in the game than the academic intellectual, then what? Hall was always aware of this dimension of Thatcherism: that it worked in part because Thatcher herself and a few of her supporters were acutely aware of the ressentiments of some lower middle-class Britons, because of her fluency in some of their social discourses and dispositions. It stopped working because most of the rest of her party only spoke upper-class twittery or nouveau-riche vulgarism but also because ressentiment as a formation tends to press onward to vengeance and cruelty, to overstep. But this goes for many causes and ideals that progressives treasure as well. The growing acceptance of gay marriage in the United States, unless you believe Michele Bachmann’s view that it’s the work of a sinister conspiracy, has at least as much to do with a long, patient appeal to middling-class American views of decency and fairness as it does with sharp confrontational attacks on the fortresses of heteronormativity. It’s an achievement, as some queer theorists have noted, that has the potential cost of the bourgeois domestication of sexuality and identity as a whole, but it’s still an example of a deliberate working of the culture towards an end, and it’s a working that scholars and activists can rightly say they contributed to.

But this is the thing: every move that’s justified as a move within and about the culture then needs to be thought through in terms of what its endgame might be. You can justify tone-policing and calling people out on social media as a way to mobilize the marginalized, as a strategy of making people visible. You can justify it as catharsis. But I’m not sure, as some seem to be, that there’s much in the way of evidence that it works as a strategy for controlling, suppressing or transforming dominant speech.

The critical humanist wants to lift up the hood of the culture and rebuild the engine, but it turns out the toolkit they’ve actually got is for the maintenance of some other machine entirely. Which means in some sense that all the framings, all the hackings, all the interventions into rhetoric have tended to come squarely back to that other machine: to the academy itself. Which explains why the anxieties of critique are visited back so intensely upon academic life and upon academic colleagues who seem in some fashion or another to have wavering loyalties. Humanistic critique might not have hacked the culture, but it definitely remade the academy. We are our own success story, but critique dare not let itself believe that success is in any way firmly accomplished, and it must also believe that any such accomplishment is always in deathly peril. It is, in any event, not enough: the remaking of the academy alone is never what critique had an aspiration to achieve.

I don’t think that bigger aspiration was wrong, but I do think that taking it seriously should always have implied a fundamentally different kind of approach to professionalism and institutionalization for critical humanists than it ultimately did. It’s not surprising in that sense that Stuart Hall always insisted that he wasn’t really an academic or a scholar, just an intellectual who happened to work in an academic environment. But of course even that “happened to” raises questions that were almost impossible for Hall and others to explore or explain. What if the deeply humanistic and progressive intellectuals who really make powerful or influential moves on the chessboard are not, cannot be, in the academy, whether by design or a “happening”? What if they’re app designers or filmmakers or preachers or entrepreneurs or community activists or advertisers? And what if the powerful moves to be made in the public culture are not a function of profound erudition and methodological disciplinarity but emotional intelligence? Or the product of barely articulated intuitions about the histories and structures circulating in the body politic rather than the formal scholarly study of the same? (More uncomfortably on the “happened to” front, what logic would entice disciplines to hire intellectuals rather than scholars? I’ve met more than a few academic humanists who insist that they, like Hall, are only intellectuals passing through the university, only to see them turn around and be wholly committed to the most stringent enforcement of intensified and narrow disciplinary authority over who gets hired, tenured and promoted.)

The scholar devoted to critique could seek consolation by imagining they supply tools and weapons to other actors in the public sphere. That they give the intuitive critic and the culture worker information, ideas, frameworks. Hey, the Wachowskis read their philosophers when they made the Matrix films, right? And that would be a fair enough consolation in many cases: many people have been indirectly influenced by Foucault’s anatomization of power who could not cite him; Judith Butler changed the inner life of gender for people who have never heard of her. With a touch of humility, it’s not at all hard to claim our place as one more strand on the loom of cultural struggle.

Maybe that humility should be more than a touch. In recent discussions at Swarthmore over controversial events and a series of protests, I’ve heard it said more than once that academic institutions should never legitimate oppression by voluntarily inviting it inside their walls. Some of my colleagues have rolled their eyes in derision at the riposte of one student in the student newspaper who pointed out that we frequently and often respectfully read the works of people who were deeply involved in oppression: isn’t that legitimation, too? Well, why is that a silly response? It’s silly to some humanists because they believe that their own critical praxis allows both for awareness of how past (and present) works are implicated in power and for a plasticity and creativity in how we appropriate or create productive readings out of texts, lives, practices that we otherwise reject or abjure.

But this is where the hubris of an attachment to “framing” comes in. Like the Mythbusters, we come on at the beginning of the show and say: do not try this at home. We are trained, and so we can frame and reframe what we offer to produce an openness in how our students interpret and do it without producing too much openness. That novel can mean this thing or that thing or oh! how delightful, a new thing that it’s never meant before. But no, it doesn’t mean that thing, and no you shouldn’t think that of it, and oh dear, please you know that part is just awful. And so, if (for example) a terrible reactionary comes to campus and doesn’t perform his terribleness on cue and the wrong thing gets thought by many in the audience as a result, that’s a failure of framing. You know the frame has failed when the anticipated and required readings of the text are not performed. That’s not a failure of the audience and it’s not a success of the text. It’s alleged to be a failure of pedagogy, of scholarship, of intellectual praxis. The ringmaster forgot to flick his whip to get the clowns to caper when they were supposed to. All roads always lead back to us, ourselves, because that’s where we’ve vested our professionalism as both scholars and teachers: we are those who produce consciousness, at least within our own dominion.

The thing is, why do academic institutions legitimate? Because they do, they really do. There’s a reason why public figures and politicians who’ve just done something wrong or who have had the morality of their actions called into question often gratefully accept the opportunity to speak at a university, to accept an honorary degree, to teach a course. There’s a reason why the current government of Israel worries about the prospect of an academic boycott.

We legitimate not because we are adroit (re)framers, not because we put the Good Humanist Seal of Approval on some performances and the Stamp of Critique on others. We legitimate because after all the populist anti-intellectualism, after all the asshole politicians trash-talking the eggheads who waste money on gender studies and art history, after all the billionaire libertarians who trash universities as a part of their own preening self-flattery, because after all that most people still trust and value academia, both their ideal vision of academia and even much of its reality.

Look it up: on the list of most-trusted and least-trusted professions (in an age of profound alienation and mistrust) teachers, professors and scientists all still fare very well. We legitimate because people expect us to do our homework, to be deeply knowledgeable, to be honest, to be curious, to be temperate and judicious, and to be fair. And they even trust us despite the fact that we are the gatekeepers of the economic fates of many of our fellow citizens, and often even trust us more in proportion to the degree to which we anoint the future elites of a society that is growing more unequal and unjust by the second.

This is not a liability: it’s a strength, but you have to use it as it comes. If there’s one thing that the theoretical indebtedness to Foucault among many humanists today should lead to, it is an awareness that virtue does not arise as an automatic consequence of your distance from power. If you want to practice critique, you work first with what you’ve got and with who you are, you work the power you possess rather than pining for power elsewhere. The master’s tools can dismantle the master’s house: they built it, after all. Or they can change what’s inside of it. If that’s not acceptable, then you make something else, somewhere else, as someone else. Humanistic-critique-as-mastery-over-framing wants the legitimacy and influence of academic institutions without accepting the histories and readings that produce that legitimacy. It wants to be intellectuals elsewhere just happening to be here. It wants to hack without really understanding the code base it’s working with.

Academic Freedom as a Positive Liberty

Ok, but I too am anxious. I too do not want to work with what I’ve got and accept what I am. Can you tell that, 5,000 odd words later? And no, it’s not the anxiety of loss, not that old white liberal spiel about “oh, back in my day, the students were all very such-and-such, now we have that awful critique and multiculturalism and postcolonialism”. Inasmuch as I can and do perform that kind of ghastly professorial nostalgia, I’m probably indistinguishable from most of my humanist colleagues: oh, dear, I remember that great directed reading on Marxist critical theory with that student; oh dear, I used to have students who knew who Fanon was; and so on. Inasmuch as I am mournful in my expressions on social media, it’s often about my profound sense that many things I thought were irreversible signs of social progress have turned out to be profoundly reversible. Inasmuch as I rage about political trends, I sound very much like your average left-leaning humanistic professor.

It is not the anxiety of loss I feel most in my work these days. It’s the anxiety of a mostly-never-was and maybe never-will-be understanding of what I think the main or dominant professional ethos of an academic intellectual ought to be in scholarship and teaching and public persona.

It’s the opposite of what I think is embedded inside the idea of critique-as-reframing, critique as chess move in a war of position. When someone says to me, “Why didn’t you frame that event differently? Why do you let those words stand out there implying that the event means this? Those words out there that permit people to think that?”, my gut wants to reply as Justice Harry Blackmun did to the death penalty: “I shall tinker no more with the machinery of framing”.

By this I do not mean to say that I do not hope, as a writer, to mean what I say and say what I mean, and to influence people accordingly. But the worst problem with believing that any politics, intellectual or otherwise, is a matter of framing is ultimately the way that it encodes the framer as an agent and the framed as a thing. That both tempts the person who hopes to control the frame into a hubris that intensifies the ways in which they come off as inauthentic and manipulative (and therefore defeat their own goal) and paradoxically keeps the aspirant framer from a richer understanding of how and why other people come to think and feel and act as they do. That understanding is actually crucial if you hope to persuade (rather than frame) others.

With all of their defects, including potential blindness to power and an air of liberal blandness, the terms persuasion and dialogue are, if you’ll excuse the irony for a moment, a better frame for what a critical humanist intellectual, or maybe just a critically aware human being, might want to be and do in relation with others. Because they start at least with the notional humanity of everyone in the room, in the conversation, in the culture, in the society. That’s not a gesture of extravagant respect to other people, it’s not generosity. It’s a gesture of self-love and self-empowerment, because you are going to get precisely jackshit nowhere in moving people to where you think they ought to be if you permit yourself the indulgence of thinking some people are things who can be dogwhistled wherever you want them to be. Even the most crass and awful kinds of dogwhistles don’t work that way, really. Maybe that gets you some votes in the primary election but it doesn’t change hearts and minds, doesn’t change how people live and act. As Raymond Williams once said of advertisers, there are a lot of people working the culture who are magicians that don’t know how their own magic tricks work.

So part of how I want an institution devoted to thoughtful, scholarly inquiry and conversation to work is to stop overthinking everything. And I don’t think I’ll get that.

But it is also this. One reason I absolutely did not want to defend the presence of Robert George at Swarthmore in conventionalized terms of free speech, in conventional languages of academic freedom, is first that this is just the most tedious kind of counterpunch in the stupid pantomime show that American national politics have become. The outsiders who tut-tutted at Swarthmore students and faculty on Twitter and so on have not a fuck to give about academic freedom when it extends to something they don’t like or respect. If there is anything a decade of blogging often about academic freedom has convinced me of, it is that there is almost no one who can be counted upon to be an honest broker on the subject, but most especially not many of the right’s most dedicated concern trolls.

This begs the question of what exactly I am looking for as I wander around with my lamp in the daytime.

The idea that academic freedom means that the academy should be a perfect mirror of the wider society is stupid. That would not be the outcome of an honest and balanced approach to academic freedom. That would just be evidence that the academy had become completely pointless. As indeed I would say of any specific social or political institution: nothing with a mission or a purpose should be judged a success or a failure on whether it is a precise microcosm of society as a whole. You make institutions to be a part, a piece, that the whole cannot be or isn’t already.

I’ve suggested in the past that academic freedom also doesn’t particularly accomplish what its defenders allege it does. It doesn’t liberate scholars and teachers to speak honestly and openly, it doesn’t incentivize the production of new ideas and innovation. Even less so now of course with the corrosion of tenure and the rise of adjunctification, but tenure never really protected most of what is claimed for academic freedom. It has long tended to domesticate, to conventionalize, to restrict scholarly speech and thought.

Academics still insist on defining academic freedom, like freedom of speech more broadly, as a negative freedom. A freedom from power, from restriction, from constraint, from retaliation. What if, instead, we defined it as a positive liberty? Meaning, something we were supposed to create more of for more people in more ways. What if we saw it as an entitlement, a comfort, a richness and saw ourselves not as the people protected from harm but as those who are obliged to set the table as extravagantly as we could?

What would that mean? It starts here: nothing human is alien to me. So then this: our curricula, our writing, our events, our conversations, should be cornucopia bursting to the brim with everything, with anyone. Our learning styles, our teaching styles, our everyday world of learning and thought, should run the spectrum and we should love each thing and everyone in that range. Love (but challenge!) the slacker, the romantic, the specialist, the literalist, the dissenter, the generalist, the cynic, the critic. The only thing you don’t love is the one who is trying to keep everyone else from their thing, who is consciously out to destroy and hurt.

Don’t build departments and legacies and traditions. Don’t hire people to cover fields, hire people because they’re different in their thinking and methods and styles and lived experiences and identities than the last person you hired. Build ecosystems full of niches and habitats. Let them change. Be surprised at what’s living over there in that place you haven’t looked at lately. Be intrigued when there’s some new behavior or relationship appearing.

Stop framing, stop managing. Because here’s the other thing: academic freedom retold as a positive liberty would be about accepting the ethical and professional responsibility to populate the academy with as much different kinds of shit as it can hold. It would be about giving up the responsibility to guarantee in advance what the outcomes will be. It’s about not quickly putting up the guard rails every time it looks like someone is going off-message or having an unapproved interpretation. Not freedom to speak, not guarantees against suppression. The active responsibility to cultivate more speech! More speech and thoughts of any kind! All kinds in all the people! All the things!

I build most of my classes as environments and see my students as agents. I’m not empowering them in the conventional Promethean sense, taking them paternalistically from marginality into authority. Sure, I have boundaries to what I’m doing, and I have responsibilities to enforce some standards, both those I agree with myself and those that I am the custodian for. I’m not everyone and everything: I have things I know well, things I know less well, things I don’t know at all, and I steer clear of the last. I have my hangups and my obsessions: if you’re in my class, you’ll hear about them. But outside of that? Anything’s a good outcome. Anything has to be, if you’re really committed to teaching into the agency of students rather than teaching as the control over that agency. I learned that from my best graduate advisor, who helped Afrocentrists and Marxists and liberals and postmodernists and pretty much every foundling or lost puppy who ended up on his doorstep to be better and smarter at what they were, rather than remolding them into kinfolk in his lineage house. Almost all outcomes are good. Almost all lives that pass through education are good, and all of them should feel as if they grew and were enriched by that passage.

Which I think is frustratingly sometimes not the case, and I think it’s often because we the faculty in all our disciplines and all our institutions want to control too much, want to be not the gardeners of an ecosystem but the bosses of a workplace. Or the aspirant framers of a culture-to-come whose imagined transformations can only be thus and not that.

This is in the end the other place where the critical theories that inform so much of contemporary academic humanism are frustratingly mismatched with the substance of much practice. We should know better than to place “power” and “virtue” as opposites—but we should also know better than to embrace predictability and control. Both because systems, societies, futures are not predictable or easily controllable, and because many of the most beloved theorists among progressive humanists don’t want them to be. Don’t just describe some ideal possible future way of being as rhizomic, be the rhizome.

There are many powerful forces that would rise to stop such a vision, have already risen to do so. We can’t teach and speak and think this way in higher education as long as most of the teaching and thinking is happening at sub-poverty wages among adjuncts who have zero security and institutional power. We can’t teach and speak and think this way if our administrations are gigantic corporate-style bureaucracies or if our public funding is completely removed.

But the way I’m thinking the academy, and especially the humanities, could be might actually be the solution to many of those interminable debates about process and structure and even about public acceptance. If we could live with, even embrace, the profound indeterminacy of culture and transformation and knowledge, if we could build ecosystems and be rhizomes, I think we’d be more consistent with the indeterminacy and unpredictability of the world that we hope to serve.

But yes, I’m anxious and a bit sad. I don’t expect this to ever be the way we are, and I fear that won’t be just because something alien or sinister moves in to stop us. It’ll be because we won’t. Maybe we can’t. I think there are lots of humanists I know that are doing some or all of what I think we should do, lots of humanists who are wise enough, most of the time, to avoid thinking they can control the horizontal and the vertical. But it’s a reflex that jerks very hard at precisely the moments where it shouldn’t, and each time it does a niche in the ecosystem goes dead. Cliched as the Serenity Prayer might be, what we need is the wisdom to know the difference between what we can (and should) change and what we can’t (or shouldn’t). If not for our institutions and our students and our disciplines, for ourselves. Because I think that’s where there’s some relief from anxiety. Let it go.

Posted in Academia, Generalist's Work, Oh Not Again He's Going to Tell Us It's a Complex System, Politics, Swarthmore | 8 Comments

Nervous Conditions

Nick Kristof’s call to cloistered, monastic faculty to come out and speak to wider publics has already been lambasted, dissected and critiqued by a wide range of academics.

My knee jerked pretty hard as well when I read it, for many of the same reasons that other writers have already articulated:

1) that many faculty are and have been speaking in public in a variety of ways for a long time;
2) that the “cloistered monks” trope is at best a tired one with roots in long-standing habits of American anti-intellectualism and at worst a specific nod to many present interests that would like to strip-mine higher education;
3) that academics who speak to larger publics, who synthesize and generalize knowledge, depend deeply on the work of specialists (who may or may not be equally involved in speaking to various publics);
4) that focusing this appeal on faculty and their temperaments is aiming the persuasive power of a columnist at the wrong target: the real issues are located in structures of promotion and tenure for tenure-track faculty and in the casualization of academic labor for the vast majority of teachers and researchers. Speaking to wider publics as an adjunct is both wholly unrewarded (as is any professional commitment or effort by an adjunct) and extraordinarily risky.

The third point is especially crucial: scholars who engage publics as experts are navigating across rich, deep, complex oceans of knowledge. Take away what Kristof disparages and the public scholar is just one more bullshitter in an endless desert of bullshit.

However, I was also struck a bit at the ferocity and intensity of the reaction by academics to Kristof, and worried about it for two reasons. The first is simply about rhetorical politics and the danger of appearing to protest too much. It’s fairly predictable that Kristof would smugly affirm that this reaction shows that he touched a sensitive spot and therefore must be right.

It doesn’t mean he’s right, but it does mean that he touched a sensitive spot. So the question worth thinking about more deeply is, “What makes this such a sore subject for faculty?” Kristof’s goad is something that academics themselves worry about quite a lot. Even before wistful speculation about the loss of “public intellectuals” became common fodder for conversation among academics in the 1990s, there were long-running discussions about whether professors owed something to wider publics, and if so, what exactly it was that they owed: scholarship? teaching? engagement? The rise of digital media intensified and shifted this long-running conversation, and sharpened its stakes. The strategic challenge of public engagement for print-era scholars, especially on the left, was how to gain access to the carefully guarded fortresses of print capitalism or how to construct powerful alternative media outlets in a world where media technologies of production and circulation were scarce or expensive. Digital media allowed scholars to think instead about less visible processes that shaped access and attention, and about whether “the public sphere” was an obsolete, never-was or poisoned concept in the first place. Should a scholar seeking engagement speak instead with already-engaged audiences with an interest in the scholar’s particular expertise? Was engagement only meaningful in relationship to subcultures? Was an engaged scholar instead someone who listened to publics rather than spoke to them? What if engagement meant less a kind of synthesis or summary of long-form scholarship in an otherwise familiar print format and voice and more some kind of radically different way of speaking?

These are questions that many academics have been exploring and inhabiting for the last two decades, so in some sense, we shouldn’t necessarily begrudge a columnist asking something of the same questions. That is, if he bothered to ask them as questions and bothered to ask them in a way that didn’t use lazy tropes about the incomprehensibility of professorial writing.

The useful conversation that might be possible if our knees stopped jerking and a columnist like Kristof stopped playing to the peanut gallery centers on a point I’ve already raised: what permits an academic to perform a public role in a distinctive way? Kristof sees academics as “smart” people who can contribute their intelligence and insight to public discussions. But the problem is that the American public sphere has become a difficult place for some people to speak and be heard. Beyond the obvious issues with making a contribution to public conversations via digital media that are rife with bullying and often toxic levels of sexism and racism, there is an equally pressing problem with the capture of expertise by lobbyists and closed political institutions. Kristof ought to be familiar with that issue, considering the company he keeps at the Times op-ed page, but the very way he makes his call suggests how unconcerned he is about it. Who exactly wants “smart” academic input? And what kind? Does Kristof want to hear from anthropologists or historians about the issues he wants to confront? Judging from past behavior, no. Do policy-makers really want to hear from any expert whose thinking might disrupt or confound solutions that are already inevitably going to be arrived at? What kinds of public and political action are actually open to the unexpected input of already-existing academic expertise, and might actually be transformed were it made available? The answer, I fear, is “not much, not many, not really”. Maybe the issue is with the “we” that thinks they need professors, not with the professors. Kristof might ask–using himself as a test–what exactly is dependent upon this input that he thinks is lacking.
If what he means by accessibility is “I want professors who agree with what I already think, and I want them to say so clearly”, that’s very different than saying, “There’s something I don’t understand, something I can’t do, something beyond my knowledge”. The former is just hunting for a few more bits of costume jewelry to burnish the finery of the powerful. The latter would be a welcome invitation, but given that it starts with humility, don’t hold your breath.

This might bring us around to a real issue that’s worth taking seriously, past all the dramatics of the academic response to Kristof. If we react strongly, it isn’t just because he’s insulting. It is also because, without really intending to, he is genuinely raising a difficult problem. We already know that in an open-source, open-access, digital media, crowdsourcing world, op-ed columnists in print media are dispensable. The issue that troubles all academics, however they write, wherever they teach, is whether the same is true of expertise in general. We haven’t yet been able to imagine what the new circumstances governing the circulation of expertise might ultimately be. We aren’t going to get a good conversation going about that from Kristof’s prompt, but the time is coming soon when we had better do so or risk sounding just as out of touch with the reality around us.

Posted in Academia, Blogging | 3 Comments

Who’s the Boss?

In the current wave of online ill-will between contingent and tenure-track faculty (which of course most faculty in either group will never see, know about or care about), one of the common sentiments that produces some modest degree of agreement is, “Blame the administrators”.

The common refrain, echoing the arguments of Benjamin Ginsberg’s Fall of the Faculty, goes something like this:

1) Faculty used to be firmly in control of most of the business of academic institutions.
2) Administrators took that control away from them.
3) Then administrators made more administrators and fewer faculty, and made most of the faculty contingent employees. Why? Because they’re bad, because they could, because they hate truth and justice, because they’re neoliberal capitalists.
4) And so here we are. We should retake governance, fire most of the administrators, and rehire most faculty as tenure-track faculty.

This at least is Ginsberg’s take. Every once in a while in Fall, he pauses to consider what the faculty role in the history of administrative growth might be, every once in a while he considers the role of federal and state regulations, every once in a while he thinks about larger trends in employment and the economy. But for the most part, he views faculty as having little or no role in the growth of administration and the rise of contingent labor, he almost never asks whether students played a part, treats academia as a self-contained institution that explains itself, and largely sees administrators, particularly the “deanlets” that he views with special contempt, as the deliberate and programmatic agents of the marginalization of the faculty.

Now keep in mind, as always when I join in these discussions, that I am in a very favored and increasingly isolated institutional situation. Swarthmore faculty may grouse about governance and managerialism, but I generally assume that this is like students grousing about cafeteria food, a kind of obligatory disgruntlement. In any serious comparison with most of global academia, we’re still very much at the center of the governance of the institution, especially in its academic operations. I can teach largely what I like and so can most of my colleagues: the restrictions that have power over us are almost entirely imposed by departmental and divisional colleagues, not the institution. My department can set the terms of its own curricular program within very broad parameters. In a faculty that is mostly tenure-track, hiring of contingent faculty has mostly been a short-term strategy for managing sudden growth in student interest in a particular major or for replacing faculty on leave in heavy-enrollment departments. This is not to say that we are not dealing with some of the same pressures and issues that are affecting all of academia, but this is precisely my starting place: some of the drivers of managerialism, administrative growth, and faculty marginalization are entirely outside any given institution, and impossible to contest through a simple shifting of the deck chairs on the Titanic.

But the history here is in some broad measure the history of many institutions. How did the growth of administration happen? It started happening sixty or so years ago because faculty stopped being able and willing to do many of the major administrative jobs in colleges and universities as the numbers of students grew dramatically and the nature of academic life changed. When academia stopped looking to faculty to handle admissions and residential life and budgets, it started looking to professionals who had done somewhat similar work in other institutions. And those people professionalized the same way that faculty had professionalized a few decades earlier, just as faculty professionalization itself was intensifying while their ranks grew and grew in the 1950s and 1960s. The administrators didn’t professionalize because that was part of the Master Plan to Destroy the Faculty, but because that’s what was happening across the whole of the economy and society.

Professionalization is in and of itself a driver of growth, on the faculty side as well as the administrative side: some departments grow not because they are trying to manage increased demand but because the consensus in academic institutions supports growth into a new specialized field. Specialization in academic disciplines creates economic pressures on institutions: when only a specialist can teach some aspect of a departmental or divisional curriculum, that specialist has to be replaced when absent, or augmented with more labor resources (likely contingent teaching if we’re talking about anything after 1980) if many students want to work in that general area of study.

Beyond that, what has driven the growth of administration in academia? Federal and state regulatory mandates, for one. Many faculty are uncomfortable focusing on that issue because it makes us sound like businessmen complaining about over-regulation, but maybe there’s a lesson in there somewhere. On the other hand, at least some of those regulations are generally supported by faculty, in spirit or sometimes even in specific substance. So we can hardly complain about having to respond by adding administrators to deal with those kinds of compliance issues, which clearly require some degree of specialized knowledge. Legal obligations that follow on the Americans with Disabilities Act or regulations on the welfare of organisms in laboratories or Title IX are serious and complex.

Where else was there growth in staff between 1970 and 2014? Information technology. Human resources. Financial management. At many places, information technology especially has been an area of substantial growth. It is hard to reconcile arguing that information technology staff should be small with wanting campus networks that run smoothly, are secure from intrusion, pose no legal liabilities, and provide faculty with all the instructional support they need.

Ginsberg’s complaints are largely confined to residential life staff and then to growth at the top of the administrative pyramid, just under a president or chancellor. Residential life administrations usually include on their org charts positions like mental health specialists, diversity coordinators, and learning disabilities specialists, some of which may have regulatory compliance woven into their work. Even if they don’t, most of those jobs have had at least the passive, sometimes active, support of many faculty at many institutions.

So even if you back Ginsberg in his acerbic dismissal of “deanlets”, who irk him in part because they intrude into what he thinks should be the sole prerogative of faculty (instruction and curricular design), and in his conviction that there are too many bosses and supervisors at the top of administrative hierarchies, you’re only making a dent in the overall growth of administrative compensation budgets.

If you want to do more than that, you either have to name a large range of administrative functions you believe can be eliminated at no cost to the core mission of academic institutions, or you have to compress those functions into fewer positions and be indifferent to any complaints about overwork, or you have to argue for hiring lower-cost deprofessionalized or outsourced labor to do the work. I think most faculty would avoid making the latter two arguments on the record, at any rate. And on the first, when I start asking most faculty I know (at Swarthmore or elsewhere) which exact administrative positions they think aren’t needed, I usually get a few desultory, mumbled suggestions but nothing like a categorical area of staff work that they believe could be eliminated. At large universities with Division I athletic programs that draw heavily on the general operational budget, you might (justifiably) hear faculty raise questions about whether that has anything to do with the core mission of the university, and maybe there are a few other areas you could similarly underscore, but this is hard work. Faculty who just toss this sort of argument against “administrative bloat” off casually aren’t much different from right-wing voters who believe that somehow there’s a lot of waste in government social spending and a lot of voter fraud: it functions as a deep authorizing mythology that precedes any engagement with the world as it is. If there is any bloat (or at least growth that could be pared back over time), faculty were usually deeply involved in its creation, or they at least endorse the idea of the institutional missions that administrators are supposed to be executing.

Faculty want experts in mental health and learning disabilities, they want diversity experts, they want legal staff, they want librarians, they want instructional technologists, they want expert financial and budgetary staff, they want human resources personnel who understand contemporary benefit structures, they want environmental services staff, they want staff who organize peer learning, they want administrative assistants, they want event planners, they want people who handle communications. If you remove any of those functions, or ask faculty to handle them themselves, you hear plenty of griping.

———————

All of this brings back the question: who is the boss then, especially in universities where the terms of labor are so increasingly miserable for teaching faculty?

And the answer is, depending on what kind of bossing and decision making about terms of labor we’re talking about: the structures of the institutional culture overall, faculty, top-level administrators, trustees, state politicians or society at large. Meaning, first, it is a misguided political idea to look for a single bad guy to take out of the picture, to remove from bossing, in order to create a better work environment and better terms of labor. And different institutions work differently. More poorly resourced or less scrutinized institutions often operate on more arbitrary and capriciously centralized terms with more power in the hands of top administrators. Strongly religious colleges usually have some kind of formalized cultural overlay that ‘bosses’ the lives and work of faculty and staff. State legislators in some political cultures around the U.S. are more pervasively involved in inspecting and controlling the working lives of university employees.

Some “bossing” embedded in the terms of faculty labor, especially for contingent employees, traces straight back to tenure-track faculty or even to other contingent staff. TT faculty, even when they’re a small remnant of what they once were at a given institution, still usually control the content and structure of much of the curriculum, and therefore determine what kind of contingent work is needed, how often it’s needed, and how stable the expectations are for further employment. They’re often the people who determine whether a particular contingent faculty member will be rehired, how they are evaluated, whether they have access to resources, and so on.

But what has driven academic institutions towards more and more aggressive use of less and less well-treated contingent faculty? Who is the “boss” of that move? Who made and still makes that decision? Here, yes, it’s totally fair to point to top-level administrators. Tenure-track faculty at many institutions own some piece of that move, often because they failed to respond actively to discourses around cost and budgeting except with a total dismissal of cost as a consideration or with strategic moves intended to preserve their own labor practices while permitting the larger institution to move in a different direction. But top-level administrators drove a lot of this approach to managing costs and financial resources. Larger forces beyond the university have also “bossed” this system into being, ranging from the extent to which the cultural idea of being a scholar remains attractive to many undergraduates to the overall structural impoverishment of labor markets world-wide (e.g., when all the choices are increasingly bad, it’s easier to defend the particular badness of the way you are employing people).

The real challenge is to match a specificity of complaint with the agency of some group or constituency who could plausibly be expected to do something different. Just pointing at “administration” and administrative growth as if that alone is both an accurate description of the causes of poor work conditions for contingent faculty and a plausible direction to seek redress or transformation does nothing at all to help. When it is faculty, especially tenure-track faculty, doing the pointing, the gesture distracts from their own responsibilities, from what they can do right now, and it does nothing to move us towards concrete decisions about what kind of administrative growth is at issue, about how to talk about costs without having to preach austerity, about how to stage a more generative confrontation with influential “bosses” outside of the academy, or anything else of use.

Posted in Academia, Politics, Swarthmore | 20 Comments

Now I’m In For It

So I’ve overhauled my survey course on the history of the Atlantic slave trade in West Africa this semester as an experiment in “flipping the classroom”. I’m not quite flipping the way that some do, with lectures as homework and problem sets in the classroom, but that’s a bit of the spirit of what I’m doing.

The way the course is going to work is that the syllabus will be something of a work in progress, especially after the first five weeks or so.

I’ve identified two major questions that will drive the course: why did the Atlantic slave trade happen to West and Central African societies, and what were the consequences of incorporation into the Atlantic system for West and Central African societies? We will spend time in class sessions breaking down those questions into more manageable subquestions that have purchase in the existing historiography. During class, and sometimes outside of class, as an assignment, we will be locating relevant scholarship or other materials to help us work with these questions, and we will then read some of that work together in class, taking collaborative notes on a shared document.

I’ll have another shared document called “Lecture Requests” open during class where students can semi-anonymously request that I spend some time talking about a subject that is either confusing in the scholarly literature or that seems both important and too diffuse for us to fully grasp from the readings alone. Sometimes I’ll try to lecture as soon as I see a request, other times I’ll wait and do it in the next class, especially when I feel the need to prep a bit on that particular subject.

We’ll also keep a spreadsheet “reading log” that I will eventually export into Viewshare so we can create visualizations from our reading (say, a map of places in West and Central Africa that we read about during the semester). We’ll have a few other docs open during most class sessions (one for harvesting good specific search terms for further use in locating appropriate materials, for example).

I’m doing this because I’d like to see whether there’s a better way both to produce more consistent command over a body of knowledge than my usual pedagogy does and, at the same time, to do something more powerful or lasting in terms of showing students how to learn, how to build knowledge out of reading and note-taking. I’m fairly convinced by Randy Bass, Cathy Davidson, Douglas Thomas and others that if we want to make the case that maintaining the high quality of intensive face-to-face teaching requires, and thus justifies, hiring expensive, highly trained professionals, we need to find ways to make sure that the time we spend in classrooms is the best use of that time that we can think of within the information-rich, profoundly networked world that we actually inhabit.

A lot of the class will be visible in public (and I’m linking it to HASTAC’s #FutureEd initiative), so I invite curious onlookers and helpful kibitzers to take a look now and again and see what they think about how it’s going.

Posted in Academia, Africa, Defining "Liberal Arts", Swarthmore | 5 Comments

Heroic Measures (The Modest Proposal Remix Edition)

Bill Keller has spent the last two years in a dull and very public exasperated talking-to with the rest of the world for not being enough like Bill Keller. Since the New York Times helpfully selected him among all the writers whose opinions ought to be published periodically within the pages of the New York Times, out of gratitude for his nostalgic reprise of William Randolph Hearst’s brilliant use of the press to start wars, he has written periodically about his mild and grudging regrets for misleading the entire country, how he knew Nelson Mandela personally and it turns out Mandela could sometimes be a jerk, how he sort of liked Doris Kearns Goodwin’s new book, and some other stuff that his buddies on the opinion page have had opinions about. He hasn’t tweeted copiously through it all, but he’s been reading some tweets, occasionally. Even by contemporary standards of old-media irrelevance, he’s irrelevant. A rapt audience of a few Times editors, a few other pundits and a couple of old people follows his marshmallow-soft narrative of truisms, hackneyed repetitions, noncommittal middle-of-the-roadisms, and smug posturings, occasionally annoying a larger audience enough to warrant a few angry tweets and blog posts.

In the last entry or so, his tone has changed slightly; his condescension has become a little less forgivable. As 2014 began, the insufferable and privileged character of old-media punditry that had colonized the major American daily newspapers became much less tolerable. He was deemed too much of an asshole to just ignore. He is now lighting up the Internet with fury, serving as linkbait for the New York Times, which has embraced him as a source of new-media advertising revenue.

Bill Keller is still alive, still writing, though you wouldn’t guess it by reading him. The column has become less about prolonging his career and more about defending his wife’s column.

“The words of my column become words that express why I’m paternalistically disappointed by most folks for not being enough like me, not dying the way I think they ought to die, and doing other things that really they should know better about,” he might as well write after reading a collective blast of tweeted exasperation. “The ebb and flow of not-Keller America, of not-Keller world. And so, too, inevitably, of all the things that Keller has done before.”

Posted in Cleaning Out the Augean Stables | 2 Comments

Unplanned Obsolescence

Solidarity and sympathy in online culture and social media are fleeting things: you are only as good as your last response rather than a lifetime of responses, and only as welcome as you are permitted to be within a particular conversation. Discussions that start by drawing the “circle of we” with a circumscribed perimeter resist expansion or redrawing, often appropriately so.

So rather than beg for an ally’s badge, my best reaction to some of the latest complaints against tenured faculty and academic institutions might be to propose some alternative “circles of we” that recast the nature of the conversation.

Asking sharp questions about the imagined endgame of a critique is not about holding that critique up against utopia and finding it wanting if it does not have a road between here and that endpoint. It’s about asking for the strategic vision of that critique in the here and now. If one starts from the proposition that higher education in the U.S. (or more globally) was a basically positive, healthy institution in some previous heyday (most likely the expanded, more democratic, more accessible academy of the 1970s), then a critique of the labor practices, economics, and culture of the present is or should be sharply intent on the difference between then and now. This is how I have largely read Marc Bousquet’s arguments in the last few years: that there is no need to accept moves like programmatically limiting the supply of doctoral candidates, adopting novel institutional reforms, abolishing tenure altogether and so on in order to fix the inequalities in academic labor markets. Instead, all that’s needed is an internal reallocation of institutional budgets to hiring more tenure-track faculty and fewer administrators, a re-emphasis on the core missions of higher education (teaching and the production of knowledge) and a restoration of public funding. You can take a different line than Bousquet and still have roughly the same strategic vision, that some concretely past academy is the one that we want.

If on the other hand, the conclusion is that academia was always elitist, always exclusionary, always unfair both to its workers and apprentices and to its publics, that there was never any golden age to restore, then the strategic vision even now has to be clear: is there an imaginable higher education that could be comprehensively better? Or does the problem lie in the very idea of a professionalized faculty and in the institutionalization of education? There are good, honorable arguments of long-standing that point in either of those directions: one is not left having to craft a critique from scratch. But they have very different implications right here, right now, however far or improbable the endgame might be.

Not the least among those implications is who can be expected to join a coalition of the willing and who cannot. There’s no reason to make tenured faculty your first, preferred targets if you’re chasing restoration unless you genuinely believe that existing tenure-track faculty were the primary agents who produced the casualization of academic labor, the diversion of internal budgets to administrative purposes, and the reduction of direct and indirect public budgetary support for higher education. Even in that case, you’re not against tenured faculty as a concept, since that’s the labor dispensation that a restorationist wants to restore, just against the particular inhabitants of that role in this particular historical moment (or even, potentially, you are set against some past particular group who did the dirty deed rather than the present incumbents). In this vision, the conventional culture of academic life is largely something worth valuing, preserving, and continuing, and it would be foolish to do it damage in pursuing reform.

If there is some concrete assembly of labor practices, institutional budgets, internal culture, habitus, and so on that is imagined as preferable not just to the present but to any past dispensation, the coalition of the willing is very different depending on which kinds of transformations are being envisioned. There are people inside and outside of academia who envision a technologically-mediated transformation of how we teach and publish, of how we name and employ scholars and experts, of how higher education becomes a new kind of public good, who are also very committed to the reform of academic labor, the diversification of faculty and students, and the refinement or tweaking of the culture of academic professionalism. That’s a reformist politics that sets some faculty against others, that mixes contingent and tenure-track faculty on both sides of the debate. If on the other hand the academy-to-be is the one that American neoliberals, conservatives and libertarians sometimes imagine, where practical learning overthrows the liberal arts and efficient managerial approaches reduce costs, then contingent and tenure-track faculty alike are almost universally going to line up against that possibility.

If in the end there is nothing about institutional education and professionalized academia which appeals, nothing to reform or restructure short of practices which would have to be so comprehensively different from any present or imaginable dispensation, then throw all the rotten tomatoes that come to hand at every target in sight: all faculty, all administrators, all students. They’re all, in this view, doing a very profoundly wrong thing and doing it at great expense. I don’t outline this position to mock it. It has a long lineage of great intellectual profundity and political force behind it. Someone drawn to this position doesn’t need to invent a comprehensive alternative, because this sort of critique by its nature is only sure about what education or learning or training or knowledge production aren’t and shouldn’t be. But don’t expect anyone who is even modestly invested in the institutions we inherit to join in the tomato-flinging. And don’t bother with any particular rage against a particular group, because the argument is so much bigger than that.

——–

This in the end would be my own modest proposal: that most of the arguments about the unfairness of academic labor practices and academic culture are too small. In a sense, they prove that even the strongest critics accept and are a bit blinded by a belief in the specialness of academia, because they even think its unfairness or inequity is special to it.

There is a bigger landscape to consider, one that might either further catalyze a politics of reform (or revolt) or that might bleed out the energy of such a politics within the vastness of history.

Talk of “crisis”, either within some subset of academia or about the whole of it, is often properly met with skepticism. More often than not, when you’re told that there is a crisis, it’s best to quickly check your wallet, because that kind of talk is a favorite distraction by neoliberal pickpockets. But think on a big scale and crisis talk makes a different sort of (mostly upsetting) sense.

The casualization of academic labor started before the rise of information technology and online media, during the 1970s. That is often overlooked even (or especially) by the newest generation of the casualized. But in different forms this is something that was happening to almost all of the professions that rose out of bourgeois life and culture in the West during the 19th Century. And often efforts to extend or erode the boundaries that had been drawn so brightly by the alliance of professional associations and the state during the first half of the 20th Century were not spearheaded by neoliberal corporatizers or conservative anti-intellectuals but by progressives of one kind or another who were either seeking to extend the benefits of public goods beyond what poorer states could afford (say, with “barefoot doctors”) or were trying to break the dominating power of socially exclusive professionals over their subjects and clients (say, with the move to allow competing forms of professionalism like midwifery or homeopathy their own legitimacy). That second move gained particular force among progressives in the wake of Foucault-inspired critiques of professionals and their institutions, a perspective that made it hard to simply repeat older liberal arguments about the professions as a form of beneficent service to an enlightened society.

But what is happening now is not just an intensification of this earlier attempt to extend professional services or to make the boundaries and power of professional institutions more porous. What is happening now in the realignment of professional economies and technological infrastructures is possibly something more akin to an industrial revolution. Almost none of the interested parties drawn to that scene of transformation really fully understand or master it, whether they are snake-oil salesmen speaking of “disruption”, visionaries considering new modes and methods of educational practice, or justly rageful victims watching social contracts being broken right before their eyes. How can we understand it fully? That’s the nature of this kind of transformation, whether you find yourself on the barricades or in the guillotine.

But it is happening to more than academia. It is happening to law. It is happening to psychiatry. It is happening to accounting. It is happening to medicine. It is happening to anything and everything that organized itself as a profession, that licensed people with special training as the only legal or proper source of valued services. Some of the work of the professions is being automated. Some of it is being crowdsourced. Some of it is being simply deemed too expensive or unnecessary. And some of it is being taken out of the hands of the professionals and hitched to the wagon of a new class of owners who turn professionals into workers, who demolish the idea that the defining value of professional service is the knowledgeable autonomy of trained experts within their own institutions and in their own practices of service.

So in this sense to say that tenured faculty are to academic labor as white people are to racism is both to think too small and to misfire the structural analogy. Too small because the same thing could and should be said of all professionals whose terms of employment today are still set within the economies and norms that existed in the mid-20th Century, who still can largely believe in and defend the habitus of their profession as it once existed. Which means, equally, that contingent faculty banging on the closed door have many potential allies across a wide range of professions–but to make common cause with them still requires some of the choices I outlined earlier. Namely, were the professions as they once existed a good thing in those former terms? Or do we want to tear down their remaining shreds and fragments in order to make something radically new?

In that choice, professionals of the ancien regime are not masters or owners to the newer workers. They are the woeful artisans staring out the window of their cottages at the dark satanic mills rising all around them. And this might explain much of the rageful antagonism between the ancien professionals and the new workers. It always seems as if artisans and workers should be on the same side against the new owners, but it rarely turns out that way, and often only for the briefest of conjunctures. Because in the end their interests are different. The artisans know that their work can’t scale to the needs both created by and creating the industrial producers. They can’t make enough room in their cottages for all the workers even if they wanted to. And the workers need a job, right now: they rightly cannot give two fucks about how it used to be great in the old days when folk sheared the sheep in the spring and wove all the summer long. They want fair wages, good work conditions, a chance for advancement. Their best hope lies in the progressive remaking of the factories, the forging of new social contracts, not in the incremental carving out of a few more apprenticeships in the old guilds.

The artisans and the workers don’t have to be against each other either, necessarily. Oh, the artisans can try to smash the new machines if they like, but they shouldn’t expect much sympathy from the people whose meal tomorrow depends on the continued working of the assembly lines. The workers can feel sorry for the old folk up the valley if they like, but they should hardly be expected to endorse the traditional claims of their guilds within the marketplace.

You can have a marketplace that has room for small producers of high-priced artisanal goat cheese and big industrial producers of Velveeta where the workers in each setting are non-rivalrous, indeed, hardly think of one another at all. Maybe that’s where higher education is going, where the few elite institutions that still have tenure are the producers of high-value craftwork in teaching and scholarship, a quaint variation on slow food available to those who can afford the price. And education or training or certification in some other massified form has its workers with their struggles for dignity and fairness to come, struggles that will have almost nothing to do with the old-timey crafters.

To set yourself against that future, rather than just drifting down the river of time resigned to its flow, means making clear choices right here and right now, maybe choices that have never been made before within similar conjunctures. Maybe we do want more cottages, a landscape alive with professionals who have forcefully recaptured their monopolies and privileges in new assemblages and institutions. Maybe we just want public goods to be public goods again, which might take a rededication of professional work to the ethos of service. Maybe we want to tear it all the fuck down and build a platform for some future day of the rope against the new owners. All I’m certain of is that many of the arguments out there right now within and about academia are too parochial in some fashion, and thus often contribute as much to the drift down the river as they struggle against its flow.

Posted in Academia, Oh Not Again He's Going to Tell Us It's a Complex System, Politics | 10 Comments

Yesterday, All Our MOOC Troubles Seemed So Far Away

Everybody remember the expectation that a smart, professorial President would hire an equally smart, skilled staff who would prove that a well-run government can be quickly responsive to the needs of the society, efficient in the execution of its duties, and not just in service to the highest bidder?

Yeah, me neither. The current Administration seems determined to help us forget. Today the President’s Council of Advisors on Science and Technology issued a report on massive open online courses (MOOCs) that not only reads as if it were written a year ago, but manages even within the frame of a year ago to take the most cravenly deferential and crudely instrumental posture available in that moment. It’s a love letter to the venture capitalists scrambling to gut open higher education, written at a time when the most thoughtful entrepreneurs and executives involved in organizing MOOCs have all but conceded that whatever their value might be, they’re not going to solve the problem of labor-intensiveness in education, nor are they going to serve as a primary vehicle for achieving equity of access to higher education for potential pupils.

There was a good deal of I-told-you-so-ing after Sebastian Thrun announced that Udacity would move towards offering MOOCs for something other than basic higher education, in part because Thrun had concluded that they simply couldn’t substitute for existing models of teaching. I don’t think anyone should have mocked Thrun for saying so, even though many of us did say that this is what was going to happen. Not least because it has happened before, at each major milestone in the development of mass communication in modern societies: the new medium was eagerly held up as a chance to affordably massify education and extend its transformative potential, only to fall short. Largely because no matter what mass medium we’re talking about, this kind of education is essentially an assisted form of autodidacticism. It has worked and still works largely only for those who already know what they want to know, and who already know how to learn.

There are some people who deeply believe that new technological infrastructures can in and of themselves solve problems of cost, equity and efficacy, in higher education or anything else. But at least some of the people who were preaching the MOOC gospel a year ago, where the President’s Council just went in their time machine, did so hoping to draw a Golden Ticket in the “I Made an IPO and Broke Something Important” sweepstakes. Most of those folks seem to be moving on now. In the Silicon Valley game, you don’t have to make money, but you do need to show that you can displace and disrupt an existing service with some speed. That’s not going to happen in this case.

One of the reasons that so many faculty who are otherwise very friendly to digitally-mediated innovation and change were so annoyed with MOOCs is that the intense push by companies and investors to draw attention to MOOCs drew energy and resources away from existing projects that have been using information technology to enhance and enrich traditional modes of teaching, often called “blended learning”. Now that the craze for the MOOC is starting to fade, maybe the blended learning conversation can gain the public attention it deserves once again.

But also, maybe we can hold on to what we’ve learned about the genuinely interesting possibilities of MOOCs. So they’re not going to magically solve the economic problems of education or public goods, nor, for the more anti-intellectual backers, are they going to create a world where algorithms replace truculent faculty. If we get lucky, they might put some of the sleazier for-profit online educators out of business. However, existing MOOCs are still a potentially terrific implementation of three possible objectives, all of which might even have market value.

1) MOOCs are a model form of new digital publication. If you read this blog, you’ve seen me say this before (and seen me say before, somewhat crossly, that I’ve been saying it for years.) But this is no longer just potential: it’s reality. Does anyone remember how many people bought “For Dummies” books? Or in recent years, how many institutions are paying for a lynda.com account? The MOOC is a BOOC: it’s an enhanced, interactive instructional guide where other readers and the authors are there to help you learn. An instructional book has never been confused with a face-to-face course in a university, but it’s also a concept that’s been in existence longer than the university itself.

2) MOOCs are learning communities. Again, this is a potential that’s been around since the WELL, but existing MOOCs are a good demonstration of mature technologies and practices that help dedicated groups learn and explore together at various levels of commitment and interest. They can’t teach calculus to a single student who is underprepared to learn calculus, but they can help a very big group of people who have diverse knowledge and a common interest in the future of higher education learn and discuss that topic together.

3) The mass response to MOOCs is evidentiary proof of the transformative potential of traditional higher education. They’ve been misused as vehicles for transforming higher education, but what they really document is that people who’ve had higher education want to have more learning experiences like that for the rest of their lives. It’s why I always feel so sad when I talk to a Swarthmore alum who just wants to talk about books and ideas and research again and who starts to think that this alone is a reason to go on to graduate school. It’s not a good reason to do that because that’s not what graduate school typically serves. But look at who takes MOOCs: it’s a close overlap with people who take community college courses for enrichment, with people who join book clubs and go to lectures, with people who just want to know more and talk with people who also have that aspiration. What have MOOCs shown so far? That there are a lot of people like that. They’re busy people, so they often drop out. But I bet those people are ready to support educational institutions as a public good, ready to believe in the potentialities of education for a democratic society. MOOCs might not entirely scratch the itch for lifelong learning that many people who’ve had a taste of education develop, but they’re one way to respond to that desire, and more potently, an affirmation that the desire exists.

If the White House wants to pay attention to something important, they might start there rather than embracing the hope that market forces will automagically deploy the MOOC to finally relieve the technocrats of the burden of maintaining and extending public goods.

Posted in Academia, Digital Humanities, Information Technology and Information Literacy, Politics | 5 Comments

Be Nelson Mandela

It is 1981 and I am writing my first long research paper ever in my high school government class on why the U.S. government and U.S. institutions need to commit more aggressively to fighting apartheid. I am citing a report that says if apartheid isn’t ended soon through a negotiated process, it will collapse in a revolutionary bloodbath in which tens of thousands will die. The Reagan Administration has already expressed its lack of interest in pressuring South Africa, though it had no problem applying sanctions to Poland. I spend a good portion of my research reading about Nelson Mandela and the ANC.

It is 1985 and I’m speaking at a student rally against apartheid, as one of two student representatives to the Board of Trustees who have been pushing for divestment. Somewhere the Special AKA’s song “Free Nelson Mandela” is playing.

It is 1988 and I’m a graduate student starting to focus my interest on southern African history, attending a conference in Canada that has numerous participants from South Africa whose presence was financed in part by the Canadian government as a sign of its commitment to the anti-apartheid movement. Many of the speakers and attendees had been members of the United Democratic Front, which had been the key driver of internal struggle against apartheid during the 1980s. Some of them have recently been in jail. The mood at the conference is pessimistic, even despairing. Activists have been murdered, beaten and tortured with increasing frequency and boldness and the state seemed to have successfully suppressed the momentum of mass protest. One speaker says, “This phase of the struggle is over. Our children may see the end of apartheid, but we will not.” Mandela has been involved in secret negotiations with the apartheid leadership for years but no one at the conference knows that or at least could say that they knew it.

It’s 1990 and I’m working on my dissertation in London. Mandela is going to walk free of prison that day and I’m watching it on the TV and damn if I’m not crying freely. Not long after I arrived in London in October 1989, the Berlin Wall had begun to crumble. Suddenly everything impossible is happening.

It’s 1991 and I’m visiting South Africa for the first time, taking a break from my research in Zimbabwe. My dissertation topic had been imagined in 1988 with South Africa in mind at first, but I decided due to the academic boycott that I should work in Zimbabwe instead. My friend’s house is full of the excitement of exiles returning and friends being released from jail. I have a great conversation with a sweet, gentle physicist who tells me about how his complicated plan to set off a small symbolic explosion in a famous building (avoiding casualties) landed him in jail when he told the wrong person about it. I’m told gleefully that one of my circle of friends in my graduate program actually helped to write an iconic line in Mandela’s 1990 speech about the violence in Natal.

It’s 1998 and I’m in South Africa again. I’m in the first trembles of a long slide into middle-age regret and re-examination, and confessing to one of my South African friends about how embarrassed I feel by some of my more romantic and naive perceptions of the struggle against apartheid and African nationalism in general. (I’ve just been in Zimbabwe again, which was a very different place in 1998 than in 1991.) I confide that I’m not sure I have any heroes any longer, and feel stupid that I ever should have had them. My friend, who had been involved directly in the internal struggle of the 1980s and has spent time with some of the political leadership of the new South Africa, says, “It’s foolish to have heroes. Though it’s perfectly fine to have people you like and don’t like, people you trust and don’t trust.” You could like Walter Sisulu or Cyril Ramaphosa, you could hate Ronnie Kasrils, says my friend. Mandela is too remote and protected for my friend to think of as someone you like or don’t like, though there was a warmth, charm and humility there too real to be faked.

————–

Like many of us, perhaps more than some if less than others, I’ve grown up with Nelson Mandela somewhere in the frame of my life. Which is why it seems important to me to get him right now, as everyone scrambles to claim that they were always on his side and he was always on theirs. That claim is not just a preoccupation of outsiders. That scramble has been underway in South Africa for years, arguably ever since his presidency ended. And for the most part, people, including some of his heirs, get him wrong, and usually because they can’t afford to get him right.

They get him wrong because he offered in his life to be gotten not-quite-right. To be just enough the man and leader his possible and committed allies needed him to be, to throw a rope to those who needed him to be revolutionary, to be a saint, to be a moderate, to be a nonracialist, to be a nationalist, to be angry or sad, to be statesmanlike. To throw that rope and let any who would climb on board.

That speaks to something I suppose we could call pragmatism. But that implies a kind of insincerity, a manipulator’s willingness to tell people what they want to hear. Mandela had his eyes all the time on his goals, and what he said and did were not just a means to that end but the end itself.

So he was a strategist. This, too, is a commonplace thing to say about Mandela. More than a few of the well-prepared obituaries that have been circulating since yesterday afternoon have repeated Ahmed Kathrada’s oft-told tale of a three-day chess game that Mandela played against a new detainee on Robben Island, until his opponent surrendered. But this too isn’t quite right, if it’s meant to confer superhuman acuity on Mandela. As he himself was quick to say for much of his life, he made a great many mistakes as both leader and man. The ANC’s approach to the political struggle in South Africa, whether under the active leadership of Mandela and his circle or not, has been full of bone-headed moves. Mandela’s commitment to the armed struggle was a strategic necessity and a political masterstroke, but the actual activities of MK were mostly a sideshow to the real revolution fought in the townships after 1976. It’s not as if Mandela sat down and said, “Ok, so now I go into jail for 27 years and come out a statesman”. His life as both revolutionary and president was, as any political life is, a series of improvisations and accidents.

His improvisations were far more gifted than most, in part because of his disciplined approach to political selfhood. That’s the thing that made Mandela’s strategy and his adaptations stand out. All of his selves and words and decisions were an enactment of the enduring nation he meant to live in some day. I think that is the difference between him and many of his nationalist contemporaries who ascended to power in newly independent African states between 1960 and 1990. (This, too, needs remembering today: Mandela came to nationalism in the same historical moment as Kwame Nkrumah, Julius Nyerere, Patrice Lumumba, Kenneth Kaunda, and so on.) The difference is that Mandela was always looking through the struggle to its ultimate ends, whereas most of the nationalists could see little further than the retreat of the colonial powers from the continent and the defeat of any local political rivals. Perhaps that was because Mandela and his closest allies, even during the Youth League’s insurgency against the old ANC leadership, could see that the endgame of apartheid could never be as simple as making a colonizer go back home. Perhaps it is just that he was a better person, a bigger man, a greater leader than most of them.

Or indeed, most of all the leaders of his time in this respect: to keep a long view of the world he ultimately thought his people, all people, should live in. He is the head of his class on a global scale, standing tall not just above his African contemporaries but above most other nationalists and certainly above the neoliberal West, whose leaders seem almost embarrassed to have ever thought about politics as the art of shaping a better future for all.

I suppose as a historian that my knee should jerk at any invocation of the great-man theory and cite the masses and parties and structures that brought Mandela to power. And as a lightly depressive middle-aged man attached to my comforts, I should embrace my friend’s warnings against having heroes. At least Mandela can no longer disappoint anyone who lionizes him, not that he ever did. That is perhaps most of all what we all admire about him: that with every opportunity in the world, structural and personal, to stumble on feet of clay after 1990, he never did. (Winnie Madikizela-Mandela begs to differ, I know.)

But that knee won’t jerk and perhaps I can still have a hero or two. The problem with the wave of admiring appraisals of Mandela as hero and great man is not that he was not a hero or great man. The problem with those celebrations (even before Mandela’s death) is that few of them oblige the people offering them to rethink anything at all about their own times, their own lives, their own mistakes. At best, they occasion the grudging admission, “I thought he was a terrorist or a revolutionary, but it turns out he was a great man.” But put one foot in front of the other and soon you’ll be walking out the door: the next step might be to recognize that he was a terrorist and a revolutionary and a great man.

A characteristic weakness of empires is that they have a hard time telling friends from enemies. Nations have to work to turn citizens within their borders into dehumanized outsiders. Empires, on the other hand, hardly know how to distinguish between grifters who are just taking the empire for whatever it’s willing to give and friends whose autonomous, authentic pursuit of their own political ends happens to coincide with the long-term interests and values of the empire. So the United States and England and France, for example, dumped treasure and spilled blood for Mobutu and Banda and Bokassa and Houphouet-Boigny in all the years of Mandela’s ANC leadership and then imprisonment. And all the while in its secret counsels and whispered conversations, the West was mostly content with its conclusion: Mandela was or would be a dangerous man, and the ANC a dangerous party. From the 1960s on, the U.S. and U.K. wanted apartheid gone (at least until Reagan, when anything that was not communism was good, and perhaps even better if it was sufficiently authoritarian to hold the line) but there were few in those governments seeing the great man in Mandela.

So of course it sticks in the craw to hear those who would have condemned Mandela (and those who did condemn him through word and deed) now speak of his greatness. But again, the point is not to say, “You were wrong this once, about this man”. It is to say, “You are often wrong, and not just because your judgement of individual greatness is wrong.” You are wrong when you can’t be bothered to hear from people who would have been, who were, your friends when they come to testify about how your drones killed their families, wrong when you spy on anyone going into a mosque in New York City, wrong when you let some mid-rank bureaucrat or think-tank enfant play the role of policy-wonk Iago who whispers to you which friends to murder or neglect. You are wrong when you pretend that from Washington or London you can sort and sift through who ought to be allowed to win desperate struggles for freedom and justice and who should not, and wrong when you arm and forgive and advise the same kind of grifters who take your money and laugh all the way to the torture chambers.

You were wrong then and now because you won’t let yourself see a Mandela. But also because you think that the privilege of making a Mandela belongs to the empire. This in the end is his final legacy: that he, and his closest colleagues, and the people in the streets of Soweto, and maybe even a bit (though not nearly so much as they themselves would like to think) the global allies of the anti-apartheid struggle, all of that made Mandela. Mandela made himself, much as he in his humility would always insist that he was made by the people and was their servant.

When you say, “He was a great statesman”, credit what that means. It means that he looked ahead, kept his eyes on the prize, and tried to do what needed doing, whether that meant taking up arms, or playing chess, or making a friendly connection with a potentially friendly jailer. If you’re going to say it, then credit first that there might be great leaders (and great movements) where you right now see only terrorism or revolution or disorder. That so many people were wrong about Mandela should at least allow for that much.

Don’t forget that it wasn’t just the Cold War leadership of the West that was wrong. Other African nationalists were wrong: many forget that for a time, the PAC had a serious chance of being taken as the legitimate representative of the aspirations of South Africans. Of course, some of them were perfectly right about Mandela and that’s why they hated him both early and late, because he had a far-sightedness and a realistic vision of a world that could be that they lacked. For someone like Robert Mugabe, the most unforgivable thing about Mandela is that having power, he gave it up. And those on the left who just want to remember Mandela the revolutionary have to remember that Mandela the neoliberal was largely the same man, with the same political vision.

What no one really wants to see is Mandela the builder, because nowhere in that sight can we find our own reflection.

That’s why he seems like such a lonely giant, mourned by all, imitated by none. Because who now can boast of a long-term view of the future? Who is looking past the inadequacies of the moment to a better dispensation? Who really works to see and imagine a place, a nation, a world in which we might all want to live and then plots the distance between here and there? Some of us know what we despise, we know the shape of the boot on our neck or the weight on our shoulders. Some of us know what we fear: the shadow of a plane falling on a skyscraper, the cough of a bomb exploding, the loss of an ease in the world. We know how to feel a hundred daily outrages at a stupid or bad thing said, how to gesture at the empty spaces where a vision once resided, how to sneer at our splitters and wankers, how to invest endless energies in demanding symbolic triumphs that lead nowhere and build nothing. Our political leaders (and South Africa’s, too) have no vision beyond the next re-election and their retinues of pundits and experts and appointees are happy to compliment and flatter the vast expanses of their nakedness in return for a share of the spoils.

Mourn the statesman and the revolutionary and the terrorist and the neoliberal and the ethicist and the pragmatist and the saint and don’t you dare try to discard or remove any part of that whole. Celebrate him? Sure, but then make sure you’re willing to consider emulating him.

Posted in Africa, Politics | 34 Comments