Easily Distracted: Culture, Politics, Academia and Other Shiny Objects (https://blogs.swarthmore.edu/burke)

Helpful Hints for Skeptics
21 May 2017 | https://blogs.swarthmore.edu/burke/blog/2017/05/21/helpful-hints-for-skeptics/

I suppose I knew in some way that there were people whose primary self-identification was “skeptic”, and even that there were people who saw themselves as part of the “skeptic community”. But it’s been interesting to encounter the kinds of conversations that self-identified members of the skeptic community have been having with one another, and especially the self-congratulatory chortling of some such over something like the lame “hoax” of gender studies.

Skepticism is really just a broad property of many forms of intellectual inquiry and a generalized way to be in the world. Most scholars are in some respect or another skeptics, or they employ skepticism as a rhetorical mode and as a motivation for their research. Lots of writers, public figures, and so on at least partake of skepticism in some fashion. I’m a bit depressed that people who identify so thoroughly with skepticism that they see that as their primary community and regard the word as a personal identifier don’t seem to be very good at being skeptical.

So a bit of advice for anyone who aspires to not just use skepticism as a tool but to be a skeptic through-and-through.

1) Read Montaigne. Be Montaigne. He’s the role model for skepticism. And take note of his defining question: “What do I know?” If you haven’t read Montaigne, you’re missing out.

2) Regard everything you think you know as provisional. Be sure of nothing. When you wake up in the morning, decide to argue that what you were sure of yesterday must be wrong. Just to see what shakes loose when you do it.

3) Never, ever, think your shit doesn’t stink. If you’re spending most of your time attacking others, regarding other people as untruthful or unscientific or irrational people who need to have your withering skeptical gaze upon them, you’re not a skeptic. Skepticism is first and last introspective. You are the best focus of your own skepticism. Skepticism that is relentlessly other-directed is just assholery with a self-flattering label. Skepticism requires humility.

4) Always doubt your first impulses. Always regard your initial feelings as suspect.

5) Always read past the headline. Always read the fine print. Always read the details. Never be easy to manipulate.

6) Never subcontract your skepticism. “Skeptical community” is in that sense already a mistake. No one else’s skepticism can substitute for your own. Yes, no person is an island, and yes, you too stand on the shoulders of giants. But when it comes to thinking a problem through from as many perspectives as possible, when it comes to asking the unasked questions, every skeptic has to stand on their own two feet.

7) Never give yourself excuses. If you don’t have the time to think something through, to explore it, to look at all the perspectives possible, to ask the counter-intuitive questions, then fine: you don’t have the time. Don’t decide that you already know all the answers without having to do any of the work. Don’t start flapping your gums about the results of your skepticism if you never did the work of thinking skeptically about something.

8) Never be obsessive in your interest in a single domain or argument. If you have something that is so precious to you that you can’t afford to subject it to skepticism, if you have an idée fixe, if you’re on a crusade, you’re not a skeptic.

9) Never resist changing sides. Always be willing to walk a mile in other shoes. Skepticism should be mobile. If you have a white whale you’re chasing, you’re not a good skeptic. A good skeptic should be chasing Ahab as often as the other way round–and sometimes should just be carving scrimshaw and watching while the whale and the captain chase each other.

10) Be curious. A skeptic is a wanderer. If you’re using skepticism as a reason not to read something, not to think about something, not to learn something new, you’re not a good skeptic.

Home to Roost
17 April 2017 | https://blogs.swarthmore.edu/burke/blog/2017/04/17/home-to-roost/

Formal argument in the classic style has real limits. Sometimes when we try to rule some sentiment or response in an argument or dialogue as “out of bounds” by classing it as a logical fallacy or as some other form of argumentative sin, we box out some important kinds of truth. Not all contentious discussion between two or more people is an exchange of if-then statements that draw upon bodies of standard empirical evidence. Sometimes, for example, it’s actually important to talk about matters marked off-limits by formalists as ad hominem: there are plenty of real-world moments where the motivations of the person you’re arguing with matter a great deal in terms of deciding whether the argument is worth having and whether it’s worth the labor time or emotional effort to assess what’s been said.

Equally, there is a sort of casual hand-waving manner of dismissing something that’s been said as an invalid “slippery slope argument” as if any argument that says, “A recent event might have long-term cumulative consequences that are more severe” is always invalid, always lacking in evidence. Typically the hand-waver says, “Come, come, this event is a minor thing, where’s the evidence that it will lead to something worse, that’s a fallacy because you can’t prove that it will.”

I find this especially frustrating as a historian, because often what I’m doing is comparing something in the present to a wide number of examples of change over time in the past. And in many cases, people in the past who have noted the incremental or cumulative dangers of an event or trend and been correct have had to endure finger-wagging galore from mainstream pundits who try to stay deeply buried in the vaults of consensus. When someone says, “Eventually this will undermine the legitimacy of something important”, that’s a slippery-slope argument of a kind, but it’s a completely legitimate one. Eventually it will. Now it has.

For almost the entire lifespan of this now more-than-a-decade-old blog, one of the things I’ve been warning about is the dangers posed by a failing sense of connection between citizens and the formal political institutions of many nation-states, including the United States–and that one of the foremost dangers would be that a kind of populist anger that might be potentially indeterminate or plastic in its ideological loyalties would be captured by reactionary nationalism. Well, here we are: the slide down that slope is nearly complete. One of the reasons I’m not sure what to blog about any longer is that I don’t think there’s any way back up that slope. There are no do-overs. I don’t know what to do next, nor do I have any kind of clear insight about what may come of the moment we’re in.

The one thing I do know is that we cannot form anything like a coherent political or intellectual response if we refuse to understand how we got to this moment, and how the history of the present looks to the people who have registered their alienation from and unhappiness with conventional political elites and their favored institutions in a series of votes over the last five years in the United Kingdom, in Colombia, in Austria, in the United States, in India, in Turkey and elsewhere, including in the imminent French elections. Even when we are intensely critical of what they’ve done, and even when we say with complete accuracy that one of the major motivations for what they’ve done is deep-seated racism, xenophobia or other form of desire to discriminate against a class or group of their fellow citizens, we still have to see when and how some of what they think makes a kind of sense–and where people tried to warn long ago that if things kept going as they were going, the eventual consequence might be an indiscriminate feeling of popular cynicism or despair, a kind of blanket dismissal of the powers that be and an embrace of a kind of flat form of “fake news”.

Some examples.

First, let’s take the deranged fake stories about a pizza restaurant in Washington DC being a center of sex trafficking. What makes it possible to believe in obvious nonsense about this particular establishment? In short, this: that the last fifty years of global cultural life have revealed that public innocence and virtue are not infrequently a mask for sexual predation by powerful men. Bill Cosby. Jimmy Savile. Numerous Catholic priests. On and on the list goes. Add to that the fact that one form of feminist critique of Freud has long since been validated: that what Freud classed as hysteria or imagination was in many cases straightforward testimony by women about what went on within domestic life as well as within the workplace lives of women. Add to that the other sins that we now know economic and political power have concealed and forgiven: financial misdoings. Murder. Violence. We may argue about how much, how often, how many. We may argue about typicality and aberration. But whether you’re working at it from memorable anecdotal testimony or systematic inquiry, it’s easy to see how people who came to adulthood in the 1950s and 1960s all over the world might feel as if we live on after the fall, even if they know in their hearts that it was always thus. Just as we fear crime far more than we ought to, we may overestimate dramatically how much corruption is hidden behind a facade of innocence. We should understand why it is easy to believe that anybody powerful, especially any powerful man, might be engaged in sexual misconduct. Think of how many male celebrities and political figures marketed as dedicated to “family values” have turned out to be serial philanderers. Cultural conservatives sometimes try to blame this series of revelations on the permissiveness of post-1970 popular culture, but the problem is with the gap between what people pretend to be doing and what they are doing–and it is the kind of gap that readily appears in the rear-view mirror of the past once you see it clearly in the present, as a persistent consequence of male power. The slippery slope here is this: that at some point, people come to accept that this is what all powerful men do, and that any powerful man–or perhaps even powerful woman–who professes innocence is lying. All accusations sound credible, all power comes pre-accused, because at some point, all the Cosbys and teachers at Choate Rosemary Hall and Catholic priests have made it plausible to see rape, assault, molestation everywhere. And by making all of that into that kind of banality, we make it harder to accuse any given individual, like our current President, of some distinctively awful behavior, even though he’s plainly guilty of that. We have to reckon with where we’re at. There’s no way out of where we are without some change in the entanglement of gender, power and sex. Yes, of course it doesn’t mean that every accusation is by definition true, but we should understand why any accusation can make a kind of sense, no matter what other ideological overtones come along with it.

Second, let’s talk about wiretapping. Again, mainstream punditry complains of how President Trump accuses the Obama White House of having him tapped, and they ask: where’s the evidence? And they’re right: the evidence is laughably absent. What they don’t reckon with is that once again, we’re at the bottom of a long-since-slid slope. How many times did Americans and other citizens in other countries have to warn of the consequences of ubiquitous surveillance by intelligence services in terms of the faith and trust that democratic citizens might put in their institutions–and in the degree to which those citizens might believe their own privacy to be safely respected? With each revelation, with each disclosure, with each accusation, sensible liberals and conservatives alike have insisted that this case was necessary, that that practice was prudent, that this example was a minor misstep or judgmental error. That the world is a dangerous place. That the safeguards were in place: secret courts, hidden judges, prudent spies, classified oversight. That citizens just had to trust in the prerogatives of the executive branch, or the prudence of the legislators, or the professionalism of the generals and spies. And so many times that trust has been breached: we have heard, many years later, that surveillance that was crudely political was approved, that signals were intercepted without a care in the world for restraint or rights, and that what intelligence was gathered was ignored, distorted or misused. So are we surprised that today the current occupant of the White House can indulge in bad conspiracy theory and evidence-less speculation and strike a chord with some listeners? We shouldn’t be surprised–and we should recognize that this is what happens when you misuse surveillance decade after decade.

I could go on. Corruption: despite a brief spasm of reform after Nixon, pretty soon we were back to numerous elected officials who thought little of lying and covering up, or saying one thing while grossly doing another behind closed doors. Crony capitalism–one law for the rich and another for the poor–all the current material that Trump likes to preach to his favored audiences. People were warned that if something didn’t change, if some acts weren’t cleaned up, if folks didn’t think about what happens when mistrust grows to an epidemic, if there wasn’t some urgency about a more transparent and honest government, then the public would grow accustomed to it all, would come to believe in the ubiquity of those sins. They would stop listening to cries of wolf, because they would falsely believe all the world to be a world of wolves. Some of what Trump throws at the wall sticks because there’s a truth to it, however woefully he may stink of the worst of what he hurls.

Undoing that will take something like a revolution, or at least a cleansing. If we still hope to avoid that being Steve Bannon’s “deconstruction of the administrative state”, then it will take something quite the opposite of what Bannon has in mind. It will take a new generation of public officials, political leaders, and prominent citizens who understand that even small ditches will increment eventually into bottomless pits. Who live up to what they profess, who build something new. So far I see almost no sign that the mainstream of the Democratic Party understands this at all.

Is There a Desert or a Garden Underneath the Kudzu of Nuance?
31 August 2015 | https://blogs.swarthmore.edu/burke/blog/2015/08/31/is-there-a-desert-or-a-garden-underneath-the-kudzu-of-nuance/

I like this essay by Kieran Healy a lot, even though I am probably the kind of person who habitually calls for nuance. What this helps me to understand is what I am doing when I make that nearly instinctive move. I suppose in part I am doing what E.P. Thompson did in writing against theory as abstraction: believing that the important things to understand about human life are always descriptive, always in the details, always in what is (or was) lived, real, and tangible. There are days when I would find the truths in a novel or a deep work of narrative journalism more persuasive, both as scholar and person, than social theory. But it is stupid to act as if one can be a microhistorian in a naive and unstructured fashion: there’s tons of theory in there somewhere, from the selection of the stories that we find worth our time to what we choose to represent them as saying. I do not read about human beings and then insist that the only thing I can do is just read to you what I read. I describe, I compress, I abstract. That’s what Kieran is arguing that theory is, and what the demand for “nuance” prevents us from doing in a conscious and creative way.

I suppose I lately have a theory of theory, which is that it is usually a prelude to doing something to human beings wherein the abstractions that make theory ‘good to think’ will become round holes into which real human square pegs are to be pounded. But this is in some sense no better (or worse) than any other abstraction–to really stick to my preferences, I should take every theory (and its application or lack thereof) on its particulars.

I also think that there is something of a puzzle that Kieran works around in the piece, most clearly in his discussion of aesthetics. (Hopefully this is not an objection about the need for nuance by some other name.) But it is this: on what grounds should we prefer a given body of theory if not for its descriptive power? Because that’s what causes the kudzu of nuance to grow so fast and thoroughly: academics read each other’s work evaluatively, even antagonistically. What are we to value between theories if not their descriptive accuracy? (If that’s what we are to value, that will fertilize the kudzu, because that’s what leads to ‘your theory ignores’ and ‘your theory is missing…’) We could value the usefulness of theory: the numbers of circumstances to which it can apply. Or the ease-of-use of theory: its memorability, its simplicity, its familiarity. Or the generativity of theory, tested by the numbers of people who actually do use it, the amount of work that is catalyzed by it.

The problem with all or any of those is that I don’t know that it leaves me with much when I don’t like a theory. Rational choice/homo economicus fits all of these: it is universal in scope, it’s relatively easy to remember and apply as a way to read many many episodes and phenomena, and it has been hugely generative. I don’t like it because I think for one it isn’t true. Why do I think that? Because I don’t think it fits the actual detailed evidence of actual human life in any actually existing human society. Or the actual evidence of how human cognition operates. But I also don’t like it because of what is done in the name of such theory. That would always have to be a post-facto kind of judgment, though, if I were prohibited from a complaint about the mismatch between a theory and the reality of human life, or it would have to be about ad hominem: do I dislike or mistrust the politics of the theorists?

I think this is why we so often fall back into the kudzu of nuance, because if we clear away the overgrowth, we will face one another naked and undisguised. We’d either have to say, “I find your theory (and perhaps you) aesthetically unpleasing or annoying” or “I don’t like the politics of your theory (and perhaps you) and so to war we will go”. The kudzu of nuance may be ugly and confusing, but it at least lets us continue to talk at and past one another without arriving at a moment of stark incommensurability.

Playing the Odds
25 July 2014 | https://blogs.swarthmore.edu/burke/blog/2014/07/25/playing-the-odds/

The idea that higher education makes you a better person in some respect has long been its soft underbelly.

The proposition makes most current faculty and administrators uncomfortable, especially at the smaller teaching-centered colleges that are prone to invoke tropes about community and ethics. The discomfort comes both from how “improvement” necessarily invokes an older conception of college as a finishing school for a small, genteel elite and from how genuinely indispensable it seems for most definitions of “liberal arts”.

Almost every attempt to create breathing room between the narrow teaching of career-ready skills and a defense of liberal arts education that rejects that approach is going to involve some claim that a liberal arts education enlightens and enhances the people who undergo it in ways that aren’t reducible to work or specific skills, that an education should, in Martha Nussbaum’s words, “cultivate humanity”.

This is part of the ground being worked by William Deresiewicz’s New Republic critique of the elitism of American higher education. One of the best rejoinders to Deresiewicz is Chad Wellmon’s essay “Twilight of an Idol”, which conjoins Deresiewicz with a host of similar critics like Andrew Delbanco and Mark Edmundson.

I see much the same issue that Wellmon does, that most of these critiques are focused on what the non-vocational, non-instrumental character of a college education was, is and should be. Wellmon and another critic, Osita Nwanevu, point out that there doesn’t need to be anything particularly special about the four years that students spend pursuing an undergraduate degree. As Wellmon comments, “There is, thankfully, no going back to the nineteenth-century Protestant college of Christian gentlemen. And that leaves contemporary colleges, as we might conclude from Deresiewicz’s jeremiad, still rummaging about for sources of meaning and ethical self-transformation. Some invoke democratic citizenship, critical thinking, literature, and, most recently, habits of mind. But only half-heartedly—and mostly in fundraising emails.”

Half-heartedly is right, precisely because most faculty know full well that all the substitutes for the older religious or gentlemanly ideals of “cultivation” still rest upon and invoke those predicates. But we can’t dispense with this language entirely because we have nothing else that spans academia that meaningfully casts shade at the instrumental, vocational, career-driven vision of education.

The sciences can in a pinch fall back on other ideas about utility and truth: their ontological assumptions (and the assumptions that at least some of the public make about the sciences) are here a saving grace. This problem lands much harder on the humanities, and not just as a challenge to their reproduction within the contemporary academy.

I wrote last year about why I liked something Teju Cole had said about writing and politics. Cole expressed his disappointment that Barack Obama’s apparent literacy, his love of good books, had not in Cole’s view made Obama a more consistently humane person in his use of military power.

I think Cole’s observation points to a much more pressing problem for humanistic scholars in general. Intellectuals outside the academy have been and still are under no systematic pressure to justify what they do in terms of outcomes. As a novelist or essayist or critic you can be a brutal misanthropist, you can drift off into hallucinogenic dream-states, you can be loving or despairing or detached. You can claim your work has no particular instrumental politics or intent, or that your work is defined by it. You don’t have to be right about whether what you say you’re doing is in fact what you actually do, but you still have a fairly wide-open space for self-definition.

Humanists inside the academy might think they have the same freedom to operate, but that clashes very hard with disciplinarity. Most of us claim that we have the authority that we do because we’ve been trained in the methods and traditions of a particular disciplinary approach. We express that authority within our scholarly work (both in crafting our own and in peer reviewing and assessing the work of others) and in our curricular designs and governance. And most of us express, to varying degrees, a whiggish or progressive view of disciplinarity, that we are in our disciplines understanding and knowing more over time, understanding better, that we are building upon precedent, that we are standing on the shoulders of someone–if not giants, at least people the same size as us. If current disciplinary work is just replacing past disciplinary work, and the two states are essentially arbitrary, then most of our citational practices and most of our curricular practices are fundamentally wasted effort.

So if you’re a moral philosopher, for example, you really need to think in your own scholarly work and in your teaching of undergraduates that the disciplined study of moral philosophy provides systematic insights into morality and ethics. If it does, it shouldn’t seem like a big leap to suggest that such insight should allow those who have it to practice morality better than those who have not. This doesn’t mean necessarily that a moral philosopher has to be more moral in the conventional terms of a dominant moral code. Maybe the disciplinary study of morality and ethics leads scholars more often to the conclusion that most dominant moral codes are contradictory or useless. Or that morality is largely an arbitrary expression of power and domination. Doesn’t really matter what the conclusions are, just that it’s reasonable to think that the rigorous disciplinary study of morality through philosophy should “cultivate the humanity” of a moral philosopher accordingly.

But if you’ve known moral philosophers, you’ve known that there is not altogether much of a notable difference between them and other academics, between them and other people with their basic degree of educational attainment, between them and other people with the same social backgrounds or identities, between them and other people from the same society, and so on, in terms of morality and ethics. It seems to me that what they know has strikingly little effect on who they are, how they act, what they feel.

Many humanist scholars would say that reading fiction gives us insights into what it means to be human, but it’s pressingly difficult to talk about what those insights have done to us, for us, to describe what transformations, if any, we’ve undergone. Many historians would argue that the disciplined study of history teaches us lessons about the human condition, about how human societies navigate both common social and political challenges and about what makes the present day distinctively different from the past.

I’m often prepared to go farther than that. Many of my colleagues disliked a recent assessment exercise here at the college where we were asked about a very broad list of possible “institutional learning goals”. I disliked it too, mostly because of how assessment typically becomes quantitative and incremental. I didn’t necessarily dislike the breadth, though. Among the things we were asked to consider was whether our disciplines teach values and skills like “empathy”. And I would say that yes, I think the study of history can teach empathy. E.g., that a student might, through studying history, become able to feel empathy across a wider and more generative range.

The key for me is that word, “might”. If moral philosophers are not significantly more moral, if economists are not significantly more likely to make superior judgments about managing businesses or finances, if historians are not significantly better at applying what they know about past circumstances to their own situations, if literary critics don’t seem all that much better at understanding the interiority of other people or the meaning of what we say to one another, then that really does call into question that vague “other” that we commonly say separates a liberal arts approach to education from a vocational strategy.

No academic (I hope) would say that education is required to achieve wisdom. In fact, it is sometimes the opposite: knowing more about the world can be, in the short-term, an impediment to understanding it. I think all of us have known people who are terrifically wise, who understand other people or the universe or the social world beautifully without ever having studied anything in a formal setting. Some of the wise get that way through experiencing the world, others through deliberate self-guided inquiry.

What I would be prepared to claim is something close to what Wellmon says: that perhaps college “might alert students to an awareness of what is missing, not only in their own colleges but in themselves and the larger society as well”.

But my “might” is a bit different. My might is literally a question of probabilities. A well-designed liberal arts education doesn’t guarantee wisdom (though I think it can guarantee greater concrete knowledge about subject matter and greater skills for expression and inquiry). But it could perhaps be designed so that it consistently improves the odds of a well-considered and well-lived life. Not in the years that the education is on-going, not in the year after graduation, but over the years that follow. Four years of a liberal arts undergraduate experience could be far more likely to produce not just a better quality of life in the economic sense but a better quality of being alive than four years spent doing anything else.

I think I can argue that the disciplinary study of history can potentially contribute to the development of a capacity for empathy, or emotional intelligence, an understanding of why things happen the way that they do and how they might happen differently, and many other crafts and arts that I would associate as much with wisdom as I do with knowledge, with what I think informs a well-lived life. But potential is all I’m going to give out. I can’t guarantee that I’ll make someone more empathetic, not the least because I’m not sure how to quantify such a thing, but also because that’s not something everybody can be or should be counted upon to get from the study of history. It’s just, well, more likely that you might get that than if you didn’t study history.

This sense of “might” even justifies rather nicely the programmatic hostility to instrumentally-driven approaches to education among many humanists. Yes, we’re cultivating humanity, it’s just that we’re not very sure what will grow from any given combination of nutrients and seeds. In our students or ourselves.

This style of feeling through the labyrinth gives me absolutely no title to complacency, however. First, it’s still a problem that increased disciplinary knowledge and skills do not give us proportionately increased probability of incorporating that knowledge into our own lives and institutions. At some point, more rigorous philosophical analyses about when to pull the lever on a trolley or more focused historical research into the genesis of social movements doesn’t consistently improve the odds of making better moral decisions or participating usefully in the formation of social movements.

Second, I don’t think most curricular designs in contemporary academic institutions actually recognize the non-instrumental portion of a liberal-arts education as probabilistic. If we did see it that way, I think we’d organize curricula that had much less regularity, predictability and structure–in effect, much less disciplinarity.

This is really the problem we’re up against: to contest the idea that education is just about return-on-investment, just about getting jobs, we need to offer an education whose structural character and feeling is substantially other than what it is. Right now, many faculty want to have their cake and eat it too, to have rigorous programs of disciplinary study that are essentially instrumental in that they primarily encourage students to do the discipline as if it were a career, justified in a tautological loop where the value of the discipline is discovered by testing students on how they demonstrate that the discipline is, in its own preferred terms, valuable.

If we want people to take seriously that non-instrumental “dark side of the moon” that many faculty claim defines what college has been, is and should remain, we have to take it far more seriously ourselves, both in how we try to live what it is that we study and in how we design institutions that increase the probabilities that our students will not just know specific things and have specific skills but achieve wisdoms that they otherwise could not have found.

Fighting Words
8 July 2014 | https://blogs.swarthmore.edu/burke/blog/2014/07/08/fighting-words/

Days pass, and issues go by, and increasingly by the time I’ve thought something through for myself, the online conversation, if that’s the right word for it, has moved on.

One exchange that keeps sticking with me is about the MLA Task Force on Doctoral Study in Modern Language and Literature’s recent report and a number of strong critical responses made to the report.

One of the major themes of the criticisms involves the labor market in academia generally and in the MLA’s disciplines specifically. Among other things, this particular point seems to have inspired some of the critics to run for the MLA executive with the aim of shaking up the organization and galvanizing its opposition to the casualization of academic labor. We need all the opposition we can get on that score, though I suspect that should the dissidents win, they are going to discover that the most militant MLA imaginable is nevertheless not in a position to make a strong impact in that overall struggle.

I’m more concerned with the response of a group of humanities scholars published at Inside Higher Education. To the extent that this response addresses casualization and the academic labor market, I think it unhelpfully mingles that issue with a quite different argument about disciplinarity and the place of research within the humanities. Perhaps this mingling reflects some of the contradictions of adjunct activism itself, which I think has recently moved from demanding that academic institutions convert many existing adjunct positions into traditional tenure-track jobs within existing disciplines to a more comprehensive skepticism or even outright rejection of academic institutions as a whole, including scholarly hierarchies, the often-stifling mores and manners that attend on graduate school professionalization, the conventional boundaries and structures of disciplinarity, and so on. I worry about baby-and-bathwater as far as that goes, but then again, this was where my own critique of graduate school and academic culture settled a long time ago, back when I first started blogging.

But on this point, the activist adjuncts who are focused centrally on abysmal conditions of labor and poor compensation in many academic institutions are right to simply ignore much of that heavily freighted terrain since what really matters is the creation of well-compensated, fairly structured jobs for highly trained, highly capable young academics. Beyond ensuring that those jobs match the needs of students and institutions with the actually existing training that those candidates have received, it doesn’t really matter whether those jobs exist in “traditional” disciplines or in some other administrative and intellectual infrastructure entirely. For that reason, I think a lot of the activists who are focused substantially on labor conditions should be at the least indifferent and more likely welcoming to the Task Force’s interest in shaking the tree a little to see what other kinds of possibilities for good jobs that are a long-term part of academia’s future might look like. Maybe the good jobs of the academic future will involve different kinds of knowledge production than in the past. Or involve more teaching, less scholarship. If those yet-to-exist positions are good jobs in terms of compensation and labor conditions, then it would be a bad move to insist instead that the only thing adjuncts could really want is the positions that once were, just as they used to be.

They should also not welcome the degree to which the IHE critics conflate the critique of casualization with the defense of what they describe as the core or essential character of disciplinary scholarship.

The critics of the Task Force report say that the report misses an opportunity to “defend the value of the scholarly practices, individual and collective, of its members”. The critics are not, they say, opposed in principle to “innovation, expansion, diversification and transformation”, but they argue that these words are “buzzwords” that “devalue academic labor” and marginalize humanities expertise.

Flexibility, adaptability, evolution are later said to be words necessarily “borrowed” from business administration (here linking to Jill Lepore’s excellent critique of Clayton Christensen).

For scholars concerned with the protection of humanistic expertise, this does not seem to me to be a particularly adroit reading of a careful 40-page document and its particular uses of words like innovation, flexibility, or evolution. What gets discounted in this response is the possibility that there are any scholars inside of the humanities, inside of the MLA’s membership, who might use such words with authentic intent, for whom such words might be expressive of their own aspirations for expert practice and scholarly work. That there might be intellectual arguments (and perhaps even an intellectual politics) for new modes of collaboration, new forms of expression and dissemination, new methods for working with texts and textuality, new structures for curricula.

If these critics are not “opposed in principle” to innovation or flexibility, it would be hard to find where there is space in their view for legitimate arguments about changes in either the content or organization of scholarly work in the humanities. They baldly assert, as common sense, propositions that are anything but: for example, that interdisciplinary scholarship requires mastering multiple disciplines (and hence, that interdisciplinary scholarship should remain off-limits to graduate students, who do not have the time for such a thing).

If we’re going to talk about words and their associations, perhaps it’s worth some attention to the word “capitulation”. Flexibility and adaptability, well, they’re really rather adaptable. They mean different things in context. Capitulation, on the other hand, is a pretty rigid sort of word. It means surrendering in a conflict or a war. If you see yourself as party to a conflict and you do not believe that your allies or compatriots should surrender, then if they try to, labelling their actions as capitulation is a short hop away from labelling the people capitulating as traitors.

If I were going to defend traditional disciplinarity, one of the things I’d say on its behalf is that it is a bit like home in the sense of “the place where, when you have to go there, they have to take you in”. And I’d say that in that kind of place, using words that dance around the edge of accusing people of treason, of selling-out, is a lousy way to call for properly valuing the disciplinary cultures of the humanities as they are, have been and might yet be.

The critics of the MLA Task Force say that the Task Force and all faculty need to engage in public advocacy on behalf of the humanities. But as is often the case with humanists, it’s all tell and no show. It’s not at all clear to me what you do as an advocate for the humanities if and when you’re up against the various forms of public hostility or skepticism that the Task Force’s report describes very well, if you are prohibited from acknowledging the content of that skepticism or prohibited from attempting to persuasively engage it on the grounds that this kind of engagement is “capitulation”. The critics suggest instead “speaking about these issues in classes” (which links to a good essay on how to be allies to adjunct faculty). In fact, step by step, all that the critics have to offer is strong advocacy on labor practices and casualization. Which is all a good idea, but doesn’t cover at all the kinds of particular pressures being faced by the humanities, some of which aren’t confined to or expressed purely around adjunctification, even though those pressures are leading to the net elimination of jobs (of any kind) in many institutions. Indeed, even in the narrower domain of labor activism, it’s not at all clear to me that rallying against “innovation” or “adaptability” is a particularly adroit strategic move for clawing back tenure lines in humanities departments, nor is it clear to me that adjunct activists should be grateful for this line of critical attack on the MLA Task Force’s analysis.

Public advocacy means more than just the kind of institutional in-fighting that the tenurati find comfortable and familiar. Undercutting a dean or scolding a colleague who has had the audacity to fiddle around with some new-fangled innovative adaptability thing is a long way away from winning battles with state legislators, anxious families, pragmatically career-minded students, federal bureaucrats, mainstream pundits, Silicon Valley executives or any other constituency of note in this struggle. If the critics of the MLA Task Force think that you can just choose the publics–or the battlegrounds–involved in determining the future of the humanities, then that would be a sign that they could maybe stand to take another look at words like flexible and adaptable. It’s not hard to win a battle if you always pick the fights you know you can win, whether or not they consequentially affect the outcomes of the larger struggles around you.

Frame(d)
14 March 2014 | https://blogs.swarthmore.edu/burke/blog/2014/03/14/framed/

High Anxiety

In modernity, dread only takes a holiday once in a while. Right now Mr. Dread is hard at work all around the world, and he’s not just sticking to the big geopolitical dramas or some single-issue fear. He’s kicking back and making himself comfortable everywhere where uncertainty holds sway, which is to say everywhere: homes, workplaces, boardrooms, the shop, the street, the wilderness.

So asking “why so anxious?” of anyone is an almost pointless question. Who isn’t anxious? All the tigers in our souls are prowling the bars of whatever cage we’re in. But I’ll go ahead and ask.

What I’ll ask about is this: what stirs many tenured faculty in humanities departments at wealthy private colleges and universities to so often pick and fret and prod at almost any perturbation of their worlds of practice–their departments, their disciplines, their publications, their colleges and universities? Why do so many humanistic scholars rise to almost any bait, whether it is a big awful dangling worm on a barbed hook or some bit of accidental fluff blown by the wind into their pond?

The crisis in the humanities, we’re often assured, doesn’t exist. Enrollments are steady, the business model’s sound, the intellectual wares are good.

The assurance is, in many ways, completely correct. The trends are not so dire and many of the criticisms are old and ritualized. Parents have been making fun of the choice to major in philosophy for five decades. Or longer, if you’ve read your Aristophanes.

And yet humanists are in fact anxious. Judging from a number of experiences I’ve had in the last year at Swarthmore and elsewhere, there are more and more tense feelings coming from more directions and more individuals in reaction to a wider and wider range of stimuli.

Just as one example, I just got back from a workshop with other faculty from small private colleges who have been working with various kinds of interdisciplinary centers and institutes and almost all of them reported that they’re constantly peppered by indirect or insinuated complaints from colleagues. We even heard a bit of it within the workshop: at one point, an audience member at the keynote said to the speaker, “Whatever it is you’ve just shown us, it’s not critique, and if it’s not critique, it’s not humanities”. When faculty are willing to openly gatekeep in a public or semi-public conversation, that’s a sign that shit is getting real.

I’d call it defensiveness, but that word is enough to make people legitimately defensive: it frames reaction as overreaction. Worried faculty are not overreacting. Maybe the humanities aren’t in crisis, but the academy as professors have known it in their working lives is. It is in its forms of labor, in its structures of governance, in its political capital, in its finances. That’s what makes the tension within the ranks of the few remaining tenured faculty who work at financially secure private institutions so interesting (because otherwise they are so atypical of what now constitutes academic work). Why should anxiety about the future afflict even those who have far less reason for anxiety?

The alarm, I think, is about the possibility (not yet the accomplishment) of transformations across a broad spectrum of everyday academic habitus: in the purposes and character of scholarship, in the modes of its circulation and interpretation, in the methods and affect of inquiry, in the incentives and commands that institutions deploy, in the goals and practice of teaching. With these fears coupled to the unbearable spectacle of many real changes that have taken place in the political economy of higher education, many of them unambiguously destructive, in the terms and forms of labor and in practices of management. A tenured humanist at a well-resourced private university or college might feel secure in their own working future, but that is the security (and guilt) of a survivor, a security situated in a world where it feels increasingly irresponsible to encourage young people to pursue academic careers as either vocation or job.

Change comes to every generation in academia. Rarely has any generation of academic intellectuals ceded power and authority gently or kindly to the next wave of upstarts. But most transitions are a simple matter of disciplinary succession: old-style political and intellectual history to social history to the “cultural turn” and so on. Whatever is at stake now seems beyond, above and outside those kinds of stately progressions.

When academia might or could change fundamentally (as it did at the end of the 19th Century, as it did in the 1920s, as it did after the Second World War), that tends to harshly expose the many invented traditions that usually gently sediment themselves into the working lives and psyches of professors. What we sometimes defend or describe as policies and practices of long antiquity and ironclad necessity are suddenly exposed as relatively recent and transitory. We stop being able to pretend that sacred artifacts of disciplinary craft like the monograph or peer review are older than a generation or two in their commonality. We draw lines of descent between ourselves and those intellectuals and professors we imagine to be our ancestors, but it only takes a few generations before we’re desperately appropriating and domesticating people who lived and worked in situations radically unlike our own. We try to whistle our way across jagged breaks and disjunctures: do not mind the gaps! Because if past intellectuals carried on writing, thinking and interpreting without tenured and departmentalized disciplinarity to support them, then arguably future intellectuals could (and will!) too.

American professors have figuratively leapt upon melancholic bonfires in gloomy protest all through the 20th Century over such retrospectively small perturbations as the spread of electives, the fall of Western Civilization (courses), the admission of women into formerly all-male institutions, the introduction of studio arts and performance-based inquiry into liberal arts curricula, the rise of pre-professional majors. Even going back to the creation of new private religious colleges and universities or to the secularization of much academic study in the mid-19th Century. As we celebrate Swarthmore’s sesquicentennial this year, it’s hard to remember that once upon a time American small liberal-arts colleges might have seemed like a kind of faddish vanity born out of every congregation and municipality wanting to put itself on the map with its own college.

Not that these changes were not major changes with a range of consequences, but well, here we are. The world did not end, the culture did not fall, knowledge was not lost forever. Often quite the contrary. Life went on.

In the end, when academics vest too much energy in discussions of particular, sometimes even peculiar, forms of process and structure within their institutions, they lose the ability to speak frankly about interestedness, both their own and the larger interests of their students and their societies. Simon During, whose recent essay “Stop Defending the Humanities” very much informs my own thinking in this piece, writes that “The key consequence of seeing the humanities as a world alongside other broadly similar worlds is that the limits of their defensibility becomes apparent, and sermonizing over them becomes harder”. An argument about whether a particular department gets a line or not, whether a particular major has this course or that course, about whether students must learn this or that theory, is always a much more parochial argument than the emotional and rhetorical tone of those discussions in their lived reality would imply. Nothing much depends upon such arguments except our own individual sense of self in relation to our profession. Which of course is often a very big kind of dependency when you’re inside your own head.

Perhaps counter to the general trend, I personally feel as if I have little invested in the fortunes of history as a discipline or African studies as a specialization. I have a great deal invested in the value of thinking about and through the past, and in the methods that historians (in and out of the academy) employ, but I don’t see such thinking as necessarily synonymous with the discipline of history as it exists in its academic form circa 2014. I have a lot invested in my own fortunes, and were I working for an institution where the fortunes of history or African studies in their institutional forms continuously determined the future of my own terms of employment, my sense of vestment in those things would have to change. I’m just lucky (perhaps) to work in a place that gives me the institutional freedom to cultivate my own sensibility.

There’s nothing wrong with self-interest. Keeping self-interest consciously in the picture is what keeps it from becoming selfishness, it’s what allows for some ethical awareness of where self-interest stops and the interests of other selves begin. That awareness can allow people to tolerate or even happily embrace a much wider range of outcomes and changes.

If it turns out, for example, that there are ways to reorganize labor within the academy that will create a much larger number of fairly good jobs, at the expense of exploitative forms of adjuncting but also at the expense of a very small number of extravagantly great jobs, well, that’s a good thing. If it turns out that more energy, attention and resources put into humanities labs or other new institutional structures leads to less energy, attention and resources to some more traditional structure of disciplinary study, well, what the hell, why not? Que sera, sera. If I need to teach one kind of course less often and another kind more often because of changes in student interest, then the main thing that change affects is me, my labor, my satisfaction, my sense of intellectual authenticity. Not the discipline or the major or the university I work for, except inasmuch as my sense of self is entangled in those things. Some entanglement is good: that’s what makes faculty good custodians of the larger mission of education.

A lot of entanglement is bad: that’s what leads to grandiose misidentifications of an individual’s transitory circumstances with the ultimate fate of huge collective projects (like disciplines or institutions or even departments) or society as a whole. That’s what leads to trying to control that fate through the lens of those individual circumstances.

There is a lot of entanglement in the academic humanities at the moment.

Hacking and Yacking

Scholars in STEM disciplines have their own concerns and worries, but they do not tend to feel the same kind of existential dread about the future of their own practices nor worry so much about the kinds of misremembered and misattributed “traditions” of scholarship and teaching that many humanists allow themselves to be weighted down with. This is not to say that they should get off lightly. STEM professors are also frequently prone to think that the structures of their majors or the organization of their disciplines or the resource flows that sustain their scholarship are precisely as they must be and have been at any given moment, and find it just as difficult to accept that not that much depends upon whether this or that course gets taught at this moment or in that fashion.

More to the point, most STEM faculty are copiously invited by the wider society to define their research as having immediate and urgent instrumental impact on the world. That’s what often leads to scientism in disciplines like psychology, sociology, economics and political science, wherein a demand for resources to support research is justified by strong claims that such research will identify, manage and resolve pressing social problems. In many ways, natural scientists and mathematicians are often more careful about (or even actively opposed to) claims that their work solves problems or improves the world than social scientists tend to be.

Hardly anyone in the academy seems able to refuse in principle the claim that their work might make the world a better place. Because of course, this could be true of anyone. Even more modestly self-interested people hope that in some small way they will leave the world better than they found it.

The problem here with humanists is the characteristic tropes and ways that they use to position themselves in relationship to the world (or as During aptly puts it, worlds), at least in the last three decades or so.

I found myself a bit embarrassed last year while attending a great event that my colleagues organized that showcased scholars and creators working with new media forms. After one presentation of a really amazing installation work, one of our students eagerly asked the artist, “What are the politics of your work?” and followed the question by stating that the work had accomplished important reframings of the politics of embodiment, of gender, of sexuality, of identity, of race, of technology, and of neoliberalism. There is almost no artist or scholar who is simply going to say, “No, none of that” in reply to something so earnest and friendly, and so it was in this case: the speaker politely demurred and asserted that the politics of the work were in some sense yet to be known even (perhaps especially) to the artist herself. I was embarrassed by the moment because the first part of the question was a performance of studied incuriosity, a sort of hunting for the answers at the back of the book. Cut to the chase! What’s the politics, so I know where to place this experience in my catalog of affirmations and confirmations. It was in its own way as instrumentalized a response as an engineering major listening to a presentation by a cosmologist about string theory and then saying, “Ok, but what can I make with this?” The catalog of attributions that formed the second part of the question both preceded and superseded any experience of witnessing the work itself.

Ok, I know: student! We all had such moments as students, and the thinking of our students is not necessarily an accurate diagnosis of our teaching and scholarship. But there seemed to me in that moment something of an embryonic and innocent reflection of something bigger and more pervasive.

Harvard faculty who recently surveyed the state of the humanities at their university identified many issues and problems, many of which they attribute to forces and actors outside of their own disciplines. However, one of the problems that the Humanities Project accepted ownership over was this: “Among the ways we sometimes alienate students from the Humanities is the impression they get that some ideas are unspeakable in our classrooms.” Or similarly, that some ideas are required. Recall my mention early on of the scholar who protested, “If you aren’t doing critique, you aren’t doing humanities”—and what the Harvard authors imply is that for some humanists, critique is not just a method or act, it is a fully populated rubric that dictates in advance a great many specific commitments and postures, many of which are never fully referenced back to some coherent underlying philosophy.

Scholars who identify with “digital humanities” know that they can quickly get a rise out of colleagues (both digital and analog) by reciting the phrase, “More hack, less yack”. Rightly so! First because working with digital technology and media requires lots of thoughtful yacking if you don’t just want to make the latest Zynga-style ripoff of a social media game or whatever. Second because theory and interpretation are hacks in their own right, things which act upon and change the world. The phrase is sometimes read as a way of opting out of critique, and thus retreating into the privileged invisibility of white masculinity while continuing to claim a place in the humanities. Sometimes that’s a fair reading of what the phrase enables or intends.

The problem with critique, however, is not that it’s not a hack, but that many times the practice of critique by humanistic scholars is not terribly good at hacking what it wants to hack. This is not a new problem, nor is it a problem of which the practitioners of critique are unaware. This very thought was the occasion for fierce debates between left intellectuals (both in and outside of the academy) in the 1980s, and one of the sharpest interventions into that dialogue was crafted by the recently deceased Stuart Hall.

In the 1980s, Hall was working out of an established lineage of questions about the relationship between intellectuals and the possibility of radical transformation of capitalist modernity, most characteristically associated with the works of Western Marxists like Gramsci, Adorno, and Lukacs but also other lineages of critical work associated with Bourdieu, Foucault, and others. Since this was one of the formative moments in my own development as a scholar, the most electric thing for me about Hall’s reading of the 1980s in Britain was his insistence that Thatcherism had gained its political ascendancy in part because of its adroit reworkings of public discourse, that it managed to connect in new ways with the subjectivity and intimate cultural worlds of the constituencies that it brought into a new conservative coalition. That is, Thatcherism was not merely a question of command over a repressive apparatus, not merely an expression of innate structural power, but rather the contingent outcome of a canny set of tactical moves within culture, moves of rhetorical framing and sympathetic performance. The position was easily applied to Reaganism as well, in particular to explaining the rise of the so-called Reagan Democrats.

This was of course exciting to left intellectuals (like me) who saw themselves as having expert training in the interpretation of culture, because it seemed to imply that left intellectuals could make a countermove on the same chessboard and potentially hope to have a big impact. But here came some problems, which Hall himself always seemed to have a better grasp on than many of those who claimed him as an influence. Namely, that knowing how identities are constructed, how frames operate, how common sense is produced, is not the same as knowing how to construct, how to frame, how to produce common sense.

Critique commonly embeds within itself Marx’s commandment to not just interpret the world but also to change it. That’s the commitment to “hack”, to act upon the world. What Hall and similar critics like Gayatri Spivak or Judith Butler had to ask during the debates of the 1980s and 1990s was this: what kinds of frames and rhetorical moves create transformative possibilities or openings? Hall played around with a number of propositions, such as Spivak’s “strategic essentialism”: that is, leverage the ways that the language of essentialism is powerfully mobilizing within communities formed around identity while not forgetting that this is a strategic move, a conscious “imagining” in Benedict Anderson’s sense. Forgetting that it’s a strategy risks appropriation by reactionary movements and groups associated with nationalism or sectarianism. Which is in some cases more or less what has happened.

But the risk or the problem was more profound than that. In the very best case this scenario involves anointing yourself as part of a vanguard party or social class with all the structural and moral problems that vanguardism entails. That is, the reason you believe you can play the chess game of framings and positionality is that you know more and know better than the plebeians you’re trying to move and mobilize. And you believe that’s equally true of the guys on the other side: that the Reaganites and their successors win because they know which buttons to push without themselves being captive to those same buttons, that they know what they’re doing, not that they authentically feel and believe what they say. It is a conception of critique that puts the critic (or enemy of the critic) up and outside of the battlefield of culture, as capable of framing because they are not produced by frames. And in the case of humanistic critique from the left, the critic holds that their own engagement is not even produced by the defense or advancement of self-interest. The position has to hold that the interests of critique are simultaneous with the interests of everyone who is not grossly self-interested: that is, with a true, yet-to-be-realized pluralistic kind of universal good that negates the self-interest of capitalist modernity. That it is working for the Multitude rather than the Empire. This is one of the oldest problems for any radical left: how to account for the circumstances of its own possibility. There are many venerable ways out of that intellectual and political puzzle, but it is always an issue and one that becomes more acute in a politics that names culture as a battleground and intellectuals as one important force in that struggle.

What humanists who aspire to critique understand best about rhetoric, language, culture (both expressive and everyday) through both theoretical and empirical inquiry is often at odds with effective action within culture, with the crafting of powerful interventions into public rhetoric, with the shaping of consciousness through framing gestures. Humanists are rightly suspicious of foundationalist, positivistic claims about the causes and sources of culture and consciousness, whether they come from evolutionary psychology or economics. That often means that only highly particularistic, highly local understandings of why people think, talk, and imagine in certain ways will do as a basis for expert knowledge of people thinking, representing, talking and imagining. But much of the time when we wrap up our scholarly work that has that kind of attention to particularism, we don’t end up more confident in our understandings of how and where we might mobilize or act. The particularism of much humanistic study is frequently even more fiercely inhibiting to the possibility of a deliberate instrumental reframing of the themes or mindsets that have been studied. Why? Because such study often convinces us that consciousness and discourse are the massively complex outcomes of the interaction of many histories, many actions, many institutions. It convinces us that frames and discourse often shape public culture and private interaction in ways that only partially involve deliberate intent and that also often escape or refract back upon that intent. And, if we’re at all self-aware, it often reveals to us that we’re the wrong people in the wrong place at the wrong time to be trying to reframe the identities, discourses and institutions that we have identified as being powerful or constitutive.

One way out of that disappointing moment is to assert that when the other guys win, it’s because they cheat: they have structural power, they have economic resources, they astroturf. Which just takes us back to some of Hall’s critics on the left who always thought messing around with cultural struggles was a waste of time. At least some of them more or less got stuck instead with hanging around waiting for the structural contradictions of capitalism to finally reach their preordained conclusion. Or alternatively anointed themselves not as the captains of counter-hegemonic consciousness but as the direct organizers of direct struggles, a posture which has usually led up and out of direct employment within the academy.

Accepting the alibi that the right wins in battles for public consciousness because they have overwhelming structural advantages prevents the development of a meaningful curiosity about why some discursive interventions into public culture (conservative and otherwise) are in fact instrumentally powerful. Many humanistic critics seem doomed to take power and domination as always-known, always-transparent subjects. There have been significant attempts to undo that doom–the history of whiteness offered by scholars like David Roediger and Nell Irvin Painter is one great example, and there are others. But always there is the problem: to treat the interiority of power and domination as being as interesting, as unknown, as contingent as anything else we might study is to open a space of vulnerability, to make critique itself contingent not just in its means but in its ends. If it turns out, for example, that both powerful and subaltern conservatives in contemporary American society are as produced by and within culture as anyone else, then that potentially activates a whole range of embedded intellectual and ethical obligations that we tend to be guided by when we’re looking at something we imagine to be a bounded “culture” defining a place, a community, a people.

If it turns out that the other guys win sometimes not because they’re cheating but because they’re more present and embedded in the game than the academic intellectual, then what? Hall was always aware of this dimension of Thatcherism: that it worked in part because Thatcher herself and a few of her supporters were acutely aware of the ressentiments of some lower middle-class Britons, because of her fluency in some of their social discourses and dispositions. It stopped working partly because most of the rest of her party spoke only upper-class twittery or nouveau-riche vulgarism, but also because ressentiment as a formation tends to press onward to vengeance and cruelty, to overstep. But this goes for many causes and ideals that progressives treasure as well. The growing acceptance of gay marriage in the United States, unless you believe Michele Bachmann’s views that it’s the work of a sinister conspiracy, has at least as much to do with a long, patient appeal to middling-class American views of decency and fairness as it does with sharp confrontational attacks on the fortresses of heteronormativity. It’s an achievement, as some queer theorists have noted, that has the potential cost of the bourgeois domestication of sexuality and identity as a whole, but it’s still an example of a deliberate working of the culture towards an end, and it’s a working that scholars and activists can rightly say they contributed to.

But this is the thing: every move that’s justified as a move within and about the culture then needs to be thought through in terms of what its endgame might be. You can justify tone-policing and calling people out on social media as a way to mobilize the marginalized, as a strategy of making people visible. You can justify it as catharsis. But I’m not sure, as some seem to be, that there’s much in the way of evidence that it works as a strategy for controlling, suppressing or transforming dominant speech.

The critical humanist wants to lift up the hood of the culture and rebuild the engine, but it turns out the toolkit they’ve actually got is for the maintenance of some other machine entirely. Which means in some sense that all the framings, all the hackings, all the interventions into rhetoric have tended to come squarely back to that other machine: to the academy itself. Which explains why the anxieties of critique are visited back so intensely upon academic life and upon academic colleagues who seem in some fashion or another to have wavering loyalties. Humanistic critique might not have hacked the culture, but it definitely remade the academy. We are our own success story, but critique dare not let itself believe that success is in any way firmly accomplished, and it must also believe that any such accomplishment is always in deathly peril. It is, in any event, not enough: the remaking of the academy alone was never all that critique aspired to achieve.

I don’t think that bigger aspiration was wrong, but I do think that taking it seriously should always have implied a fundamentally different kind of approach to professionalism and institutionalization for critical humanists than it ultimately did. It’s not surprising in that sense that Stuart Hall always insisted that he wasn’t really an academic or a scholar, just an intellectual who happened to work in an academic environment. But of course even that “happened to” raises questions that were almost impossible for Hall and others to explore or explain. What if the deeply humanistic and progressive intellectuals who really make powerful or influential moves on the chessboard are not, cannot be, in the academy, whether by design or a “happening”? What if they’re app designers or filmmakers or preachers or entrepreneurs or community activists or advertisers? And what if the powerful moves to be made in the public culture are not a function of profound erudition and methodological disciplinarity but emotional intelligence? Or the product of barely articulated intuitions about the histories and structures circulating in the body politic rather than the formal scholarly study of the same? (More uncomfortably on the “happened to” front, what logic would entice disciplines to hire intellectuals rather than scholars? I’ve met more than a few academic humanists who insist that they, like Hall, are merely intellectuals passing through the university, only to see them turn around and be wholly committed to the most stringent enforcement of intensified and narrow disciplinary authority over who gets hired, tenured and promoted.)

The scholar devoted to critique could seek consolation by imagining they supply tools and weapons to other actors in the public sphere. That they give the intuitive critic and the culture worker information, ideas, frameworks. Hey, the Wachowskis read their philosophers when they made the Matrix films, right? And that would be a fair enough consolation in many cases: many people have been indirectly influenced by Foucault’s anatomization of power who could not cite him; Judith Butler changed the inner life of gender for people who have never heard of her. With a touch of humility, it’s not at all hard to claim our place as one more strand on the loom of cultural struggle.

Maybe that humility should be more than a touch. In recent discussions at Swarthmore over controversial events and a series of protests, I’ve heard it said more than once that academic institutions should never legitimate oppression by voluntarily inviting it inside their walls. Some of my colleagues have rolled their eyes in derision at the riposte of one student in the student newspaper who pointed out that we frequently and often respectfully read the works of people who were deeply involved in oppression: isn’t that legitimation, too? Well, why is that a silly response? It’s silly to some humanists because they believe that their own critical praxis allows both for awareness of how past (and present) works are implicated in power and for a plasticity and creativity in how we appropriate or create productive readings out of texts, lives, practices that we otherwise reject or abjure.

But this is where the hubris of an attachment to “framing” comes in. Like the Mythbusters, we come on at the beginning of the show and say: do not try this at home. We are trained, and so we can frame and reframe what we offer to produce an openness in how our students interpret, and to do it without producing too much openness. That novel can mean this thing or that thing or oh! how delightful, a new thing that it’s never meant before. But no, it doesn’t mean that thing, and no you shouldn’t think that of it, and oh dear, please you know that part is just awful. And so, if (for example) a terrible reactionary comes to campus and doesn’t perform his terribleness on cue and the wrong thing gets thought by many in the audience as a result, that’s a failure of framing. You know the frame has failed when the anticipated and required readings of the text are not performed. That’s not a failure of the audience and it’s not a success of the text. It’s alleged to be a failure of pedagogy, of scholarship, of intellectual praxis. The ringmaster forgot to flick his whip to get the clowns to caper when they were supposed to. All roads always lead back to us, ourselves, because that’s where we’ve vested our professionalism as both scholars and teachers: we are those who produce consciousness, at least within our own dominion.

The thing is, why do academic institutions legitimate? Because they do, they really do. There’s a reason why public figures and politicians who’ve just done something wrong or who have had the morality of their actions called into question often gratefully accept the opportunity to speak at a university, to accept an honorary degree, to teach a course. There’s a reason why the current government of Israel worries about the prospect of an academic boycott.

We legitimate not because we are adroit (re)framers, not because we put the Good Humanist Seal of Approval on some performances and the Stamp of Critique on others. We legitimate because after all the populist anti-intellectualism, after all the asshole politicians trash-talking the eggheads who waste money on gender studies and art history, after all the billionaire libertarians who trash universities as a part of their own preening self-flattery, after all that, most people still trust and value academia, both their ideal vision of academia and even much of its reality.

Look it up: on the list of most-trusted and least-trusted professions (in an age of profound alienation and mistrust), teachers, professors and scientists all still fare very well. We legitimate because people expect us to do our homework, to be deeply knowledgeable, to be honest, to be curious, to be temperate and judicious, and to be fair. And they even trust us despite the fact that we are the gatekeepers of the economic fates of many of our fellow citizens, and often even trust us more in proportion to the degree to which we anoint the future elites of a society that is growing more unequal and unjust by the second.

This is not a liability: it’s a strength, but you have to use it as it comes. If there’s one thing that the theoretical indebtedness to Foucault among many humanists today should lead to, it is an awareness that virtue does not arise as an automatic consequence of your distance from power. If you want to practice critique, you work first with what you got and with who you are, you work the power you possess rather than pining for power elsewhere. The master’s tools can dismantle the master’s house: they built it, after all. Or they can change what’s inside of it. If that’s not acceptable, then you make something else, somewhere else, as someone else. Humanistic-critique-as-mastery-over-framing wants the legitimacy and influence of academic institutions without accepting the histories and readings that produce that legitimacy. It wants to be intellectuals elsewhere just happening to be here. It wants to hack without really understanding the code base it’s working with.

Academic Freedom as a Positive Liberty

Ok, but I too am anxious. I too do not want to work with what I’ve got and accept what I am. Can you tell that, 5,000-odd words later? And no, it’s not the anxiety of loss, not that old white liberal spiel about “oh, back in my day, the students were all very such-and-such, now we have that awful critique and multiculturalism and postcolonialism”. Inasmuch as I can and do perform that kind of ghastly professorial nostalgia, I’m probably indistinguishable from most of my humanist colleagues: oh, dear, I remember that great directed reading on Marxist critical theory with that student; oh dear, I used to have students who knew who Fanon was; and so on. Inasmuch as I am mournful in my expressions on social media, it’s often about my profound sense that many things I thought were irreversible signs of social progress have turned out to be profoundly reversible. Inasmuch as I rage about political trends, I sound very much like your average left-leaning humanistic professor.

It is not the anxiety of loss I feel most in my work these days. It’s the anxiety of a mostly-never-was and maybe never-will-be understanding of what I think the main or dominant professional ethos of an academic intellectual ought to be in scholarship and teaching and public persona.

It’s the opposite of what I think is embedded inside the idea of critique-as-reframing, critique as chess move in a war of position. When someone says to me, “Why didn’t you frame that event differently? Why do you let those words stand out there implying that the event means this? Those words out there that permit people to think that?”, my gut wants to reply as Justice Harry Blackmun did to the death penalty: “I shall tinker no more with the machinery of framing”.

By this I do not mean to say that I do not hope, as a writer, to mean what I say and say what I mean, and to influence people accordingly. But the worst problem with believing that any politics, intellectual or otherwise, is a matter of framing is ultimately the way that it encodes the framer as an agent and the framed as a thing. That both tempts the person who hopes to control the frame into a hubris that intensifies the ways in which they come off as inauthentic and manipulative (and therefore defeat their own goal) and paradoxically keeps the aspirant framer from a richer understanding of how and why other people come to think and feel and act as they do. That understanding is actually crucial if you hope to persuade (rather than frame) others.

With all of their defects, including potential blindness to power and an air of liberal blandness, the terms persuasion and dialogue are, if you’ll excuse the irony for a moment, a better frame for what a critical humanist intellectual, or maybe just a critically aware human being, might want to be and do in relation to others. Because they start at least with the notional humanity of everyone in the room, in the conversation, in the culture, in the society. That’s not a gesture of extravagant respect to other people, it’s not generosity. It’s a gesture of self-love and self-empowerment, because you are going to get precisely jackshit nowhere in moving people to where you think they ought to be if you permit yourself the indulgence of thinking some people are things who can be dogwhistled wherever you want them to be. Even the most crass and awful kinds of dogwhistles don’t work that way, really. Maybe that gets you some votes in the primary election but it doesn’t change hearts and minds, doesn’t change how people live and act. As Raymond Williams once said of advertisers, there are a lot of people working the culture who are magicians who don’t know how their own magic tricks work.

So part of how I want an institution devoted to thoughtful, scholarly inquiry and conversation to work is to stop overthinking everything. And I don’t think I’ll get that.

But it is also this. One reason I absolutely did not want to defend the presence of Robert George at Swarthmore in conventionalized terms of free speech, in conventional languages of academic freedom, is first that this is just the most tedious kind of counterpunch in the stupid pantomime show that American national politics have become. The outsiders who tut-tutted at Swarthmore students and faculty on Twitter and so on have not a fuck to give about academic freedom when it extends to something they don’t like or respect. If there is anything a decade of blogging often about academic freedom has convinced me of, it is that there is almost no one who can be counted upon to be an honest broker on the subject, but most especially not many of the right’s most dedicated concern trolls.

This raises the question of what exactly I am looking for as I wander around with my lamp in the daytime.

The idea that academic freedom means that the academy should be a perfect mirror of the wider society is stupid. That would not be the outcome of an honest and balanced approach to academic freedom. That would just be evidence that the academy had become completely pointless. As indeed I would say of any specific social or political institution: nothing with a mission or a purpose should be judged a success or a failure on whether it is a precise microcosm of society as a whole. You make institutions to be a part, a piece, that the whole cannot be or isn’t already.

I’ve suggested in the past that academic freedom also doesn’t particularly accomplish what its defenders allege it does. It doesn’t liberate scholars and teachers to speak honestly and openly, it doesn’t incentivize the production of new ideas and innovation. Even less so now of course with the corrosion of tenure and the rise of adjunctification, but tenure never really protected most of what is claimed for academic freedom. It has long tended to domesticate, to conventionalize, to restrict scholarly speech and thought.

Academics still insist on defining academic freedom, like freedom of speech more broadly, as a negative freedom. A freedom from power, from restriction, from constraint, from retaliation. What if, instead, we defined it as a positive liberty? Meaning, something we were supposed to create more of for more people in more ways. What if we saw it as an entitlement, a comfort, a richness and saw ourselves not as the people protected from harm but as those who are obliged to set the table as extravagantly as we could?

What would that mean? It starts here: nothing human is alien to me. So then this: our curricula, our writing, our events, our conversations, should be cornucopias bursting to the brim with everything, with anyone. Our learning styles, our teaching styles, our everyday world of learning and thought, should run the spectrum and we should love each thing and everyone in that range. Love (but challenge!) the slacker, the romantic, the specialist, the literalist, the dissenter, the generalist, the cynic, the critic. The only thing you don’t love is the one who is trying to keep everyone else from their thing, who is consciously out to destroy and hurt.

Don’t build departments and legacies and traditions. Don’t hire people to cover fields, hire people because they’re different in their thinking and methods and styles and lived experiences and identities than the last person you hired. Build ecosystems full of niches and habitats. Let them change. Be surprised at what’s living over there in that place you haven’t looked at lately. Be intrigued when there’s some new behavior or relationship appearing.

Stop framing, stop managing. Because here’s the other thing: academic freedom retold as a positive liberty would be about accepting the ethical and professional responsibility to populate the academy with as many different kinds of shit as it can hold. It would be about giving up the responsibility to guarantee in advance what the outcomes will be. It’s about not quickly putting up the guard rails every time it looks like someone is going off-message or having an unapproved interpretation. Not freedom to speak, not guarantees against suppression. The active responsibility to cultivate more speech! More speech and thoughts of any kind! All kinds in all the people! All the things!

I build most of my classes as environments and see my students as agents. I’m not empowering them in the conventional Promethean sense, taking them paternalistically from marginality into authority. Sure, I have boundaries to what I’m doing, and I have responsibilities to enforce some standards—both those I agree with myself and those that I am the custodian for. I’m not everyone and everything: I have things I know well, things I know less well, things I don’t know at all, and I steer clear of the last. I have my hangups and my obsessions: if you’re in my class, you’ll hear about them. But outside of that? Anything’s a good outcome. Anything has to be, if you’re really committed to teaching into the agency of students rather than teaching as the control over that agency. I learned that from my best graduate advisor, who helped Afrocentrists and Marxists and liberals and postmodernists and pretty much every foundling or lost puppy who ended up on his doorstep to be better and smarter at what they were, rather than remolding them into kinfolk in his lineage house. Almost all outcomes are good. Almost all lives that pass through education are good, and all of them should feel as if they grew and were enriched by that passage.

Which I think is frustratingly sometimes not the case, and I think it’s often because we the faculty in all our disciplines and all our institutions want to control too much, want to be not the gardeners of an ecosystem but the bosses of a workplace. Or the aspirant framers of a culture-to-come whose imagined transformations can only be thus and not that.

This is in the end the other place where the critical theories that inform so much of contemporary academic humanism are frustratingly mismatched with the substance of much practice. We should know better than to place “power” and “virtue” as opposites—but we should also know better than to embrace predictability and control. Both because systems, societies, futures are not predictable or easily controllable, and because many of the most beloved theorists among progressive humanists don’t want them to be. Don’t just describe some ideal possible future way of being as rhizomic, be the rhizome.

There are many powerful forces that would rise to stop such a vision, have already risen to do so. We can’t teach and speak and think this way in higher education as long as most of the teaching and thinking is happening at sub-poverty wages among adjuncts who have zero security and institutional power. We can’t teach and speak and think this way if our administrations are gigantic corporate-style bureaucracies or if our public funding is completely removed.

But this way of imagining what the academy, and especially the humanities, could be might actually be the solution to many of those interminable debates about process and structure and even about public acceptance. If we could live with, even embrace, the profound indeterminacy of culture and transformation and knowledge, if we could build ecosystems and be rhizomes, I think we’d be more consistent with the indeterminacy and unpredictability of the world that we hope to serve.

But yes, I’m anxious and a bit sad. I don’t expect this to ever be the way we are, and I fear it won’t be not just because something alien or sinister will move in to stop us. It’ll be because we won’t. Maybe we can’t. I think there are lots of humanists I know who are doing some or all of what I think we should do, lots of humanists who are wise enough, most of the time, to avoid thinking they can control the horizontal and the vertical. But it’s a reflex that jerks very hard at precisely the moments where it shouldn’t, and each time it does a niche in the ecosystem goes dead. Clichéd as the Serenity Prayer might be, what we need is the wisdom to know the difference between what we can (and should) change and what we can’t (or shouldn’t). If not for our institutions and our students and our disciplines, for ourselves. Because I think that’s where there’s some relief from anxiety. Let it go.

Teleology and the Fermi Paradox
Thu, 25 Jul 2013
https://blogs.swarthmore.edu/burke/blog/2013/07/25/teleology-and-the-fermi-paradox/

I sometimes joke to my students that “teleology” is one of those things like “functionalism” that humanist intellectuals now instinctively recoil from or hiss at without even bothering to explain any longer to a witness who is less in-the-know what the problem is.

But if you want a sense of how there is a problem with teleology that is a meaningful impediment to thoughtful exploration and explanation of a wide range of existing intellectual problems, take a look at io9’s entry today that reports on a recent study showing that self-replicating probes from extraterrestrial intelligences could theoretically reach every solar system in the galaxy within 10 million years of an initial launch from a point of origin.

I’ve suggested before that exobiology is one of the quintessential fields of research that could benefit from keeping an eclectic range of disciplinary specialists in the room for exploratory conversations, and not just from within the sciences. To make sure that you’re not making assumptions about what life is, where or how it might be found or recognized, and so on, you really need some intellectuals who have no vested interest in existing biological science and whose own practices could open up unexpected avenues and insights into the problem, whether that’s raising philosophical and definitional questions, challenging assumptions about whether we actually could even recognize life that’s not as we know it (or whether we should want to), or offering unexpected technical or artistic strategies for seeing patterns and phenomena.

As an extension of this point, look at the Fermi Paradox. Since it was first laid out in greater detail in 1975 by Michael Hart, there’s been a lot of good speculative thinking about the problem, and some of it has moved in the direction I’m about to explore. But you also can see how for much of the time, responses to the concept remain limited by certain assumptions that are especially prevalent among scientists and technologists.

At least one of those limits is an assumption about the teleology of intelligence, an assumption that intelligent life will commonly or inevitably trend towards social and technological complexity in a pattern that strongly resembles some dominant modern and Western readings of human history. While evolutionary biology has long since moved away from the assumption that life trends towards intelligence, or that human beings are the culmination of the evolution of life on Earth, some parallel speculative thinking about the larger ends or directionality of intelligent life still comes pretty easily for many, and is also common to certain kinds of sociobiological thought.

This teleology assumes that agriculture and settlement follow intelligence and tool usage, that settlement leads to larger scales of complex political and social organization, that larger scales of complex political and social organization lead to technological advancement, and that this all culminates in something like modernity as we now live it. In the context of speculative responses to the Fermi Paradox (or other attempts to imagine extraterrestrial intelligence) this produces the common view that if life is very common and intelligent life somewhat common that some intelligent life must lead to “technologically advanced civilizations” which more or less conform to our contemporary imagination of what “technological advancement” forward from our present circumstances would look like. When you add to this the observation that in some cases, this pattern must have occurred many millions of years ago in solar systems whose existence predates our own, you have Fermi’s question: where is everybody?

But this is where you really have to unpack something like the second-to-last term in the Drake Equation, which was an attempt to structure contemplation of Fermi’s question. The second-to-last term is “the fraction of civilizations that develop a technology that releases detectable signs of their existence into space”. For the purposes of the Drake Equation, the fraction of civilizations that do not develop that technology is not an interesting line of thought in its own right, except inasmuch as speculation about that fraction leads you to set the value of that term low or high. All we want to know in this sense is, “how many signals are there out there to hear?”
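
For reference, the Drake Equation is conventionally written as

N = R* · fp · ne · fl · fi · fc · L

where R* is the rate of star formation in the galaxy, fp the fraction of stars with planets, ne the number of potentially habitable planets per star with planets, fl the fraction of those that develop life, fi the fraction of those that develop intelligence, fc the fraction of civilizations whose technology releases detectable signs of their existence into space, and L the length of time over which such signs remain detectable. The second-to-last term, fc, is the one at issue here.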

But if you back up and think about these questions without being driven by teleological assumptions, if you don’t just want to shortcut to the probability that there is something for SETI to hear–or to the question of why there aren’t self-replicating probes in our solar system already–you might begin to see just how much messier (but more interesting) the possibilities really are. Granted that if the number that the Drake Equation produces is very very large right up until the last two terms (up to “the fraction of planets with life that develop intelligence”) then somewhere out there almost any possibility will exist, including a species that thinks very substantially the way we do and has had a history similar to ours, but teleology (and its inherent narcissism) can inflate that probability very wildly in our imaginations and blind us to that inflation.
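
To make that inflation concrete, here is a minimal back-of-the-envelope sketch in Python. Every value below is invented purely for illustration (they are not estimates I am defending); the point is only how much the final answer swings on the single teleological assumption that intelligence reliably leads to detectable signalling.

```python
# Toy Drake Equation: all values are made up for illustration only.
R_star = 7.0     # new stars formed per year in the galaxy
f_p    = 0.5     # fraction of stars with planets
n_e    = 2.0     # potentially habitable planets per star with planets
f_l    = 0.5     # fraction of those that develop life
f_i    = 0.1     # fraction of those that develop intelligence
L      = 10_000  # years a civilization remains detectable

# Vary only f_c, the fraction of intelligent species that ever signal at all.
for f_c in (0.5, 0.01, 0.0001):
    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"f_c = {f_c}: roughly {N:g} detectable civilizations")
```

With everything else held constant, the “everyone eventually signals” assumption and the “almost no one does” assumption differ by more than three orders of magnitude in the final count.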

For example:

We’ve been notoriously poor in the two centuries since the Industrial Revolution really took hold at predicting the forward development of technological change. The common assumption at the end of the 19th Century was to extrapolate the rapid development of transportation infrastructure and assume that “advancement” always would mean that travel would steadily grow faster, cheaper, more ubiquitous. In the mid-20th Century it was common to assume that travel and residence in space would soon be common and would massively transform human societies. Virtually no one saw the personal computer or the Internet coming. And so on. The reality of 2013 should be enough to derail any assumptions about our own technological future, let alone an assumption that there will be common pathways for the technological development of other sentient life. To date, futurists have been spectacularly wrong again and again about technology in fundamental ways, often because of the reigning teleologies of the moment.

It isn’t just that we tend to foolishly extrapolate from our technological present to imagine the future. We also have very impoverished ways of imagining the causal relationship between other possible biologies of intelligent life and technosocial formations, even in speculative fiction. What technologies would an underwater intelligence develop? An intelligence that communicated complex social thoughts through touch or scent? An intelligence that commonly communicated to other members of its species with biological signals that carried over many miles as opposed to at close distances? And so on. How much of our technological histories, plural (because humanity has many more than one technological history) are premised on our particular biological history, the particular contingencies of our physical and cultural environments, and so on? Lots, I think. Even within human history, there is plenty of evidence that fundamental ideas like the wheel may not be at all inevitable. Why should we assume that there is any momentum towards the technological capabilities involved in sending self-replicating probes to other star systems or any momentum towards signalling (accidentally or purposefully)?

Equally: why should we assume that any other species would want to or ever even think of the idea? Some scientists engaging the Fermi Paradox have suggested that signalling or sending probes might prove to be dangerous and that this is why no one seems to be out there. That is, they’ve assumed a common sort of species-independent rationality would or could guide civilizational decision-making, and so either everyone else has the common sense to be quiet or everyone who wasn’t quiet is dead because of it. But more fundamentally, it seems hard for a lot of the people who engage in this sort of speculation to see something like sending self-replicating probes for what they really might be characterized as: a gigantic art project. It’s no more inevitable than Christo draping canyons in fabric or the pharaohs building pyramids. It’s as much about aesthetics and meaning as it is technology or progress. There is no reason at all to assume that self-replicating probes are a natural or inevitable idea. We might want to at least consider the alternative: that it is a fucking strange idea that another post-industrial, post-scarcity culture of intelligences with a lot of biological similarity to us might never consider or might reject as stupid or pointless even if it occurred to them.

Anthropocentrism has died slowly by a thousand cuts rather than a single decisive strike, for all that our hagiographies of Copernicus and Galileo sometimes suggest otherwise. Modern Western people commonly accept heliocentrism, and can dutifully recite just how small we are in the universe. Until we began getting data about other solar systems, it was still fairly common to assume that the evolution of our own, with its distribution of small rocky planets and gas giants, was the “normal” solar system, which is increasingly obviously not the case. That too is not so hard to take on board. But contemporary history and anthropology provide us plenty of information to suspect that our anthropocentric (specifically modern and Eurocentric) understandings of how intelligence and technology are likely to interrelate are almost certainly equally inadequate to the reality out there.

The more speculative the conversation, the more it will benefit from a much more intellectually and methodologically diverse set of participants. Demonstrating that it’s possible to blanket the galaxy with self-replicating probes within ten million years is interesting, but if you want to know why that (apparently) didn’t happen yet, you’re going to need some philosophers, artists, historians, writers, information scientists and a bunch of other folks plugged into the discussion, and you’re going to need to work hard to avoid (or at least make transparent) any assumptions you have about the answers.

A Different Diversity
Fri, 01 Mar 2013
https://blogs.swarthmore.edu/burke/blog/2013/03/01/a-different-diversity/

Following on Carl Edgar Blake II’s description of his abilities, let’s go back to the question of whether faculty in higher education ought to have doctorates, whether doctoral study in some form roughly resembling its present structure is the best kind of training for undergraduate-level teachers or academic researchers.

The research part of this question is easy: yes, at least until the nature of scholarship itself changes in some substantial form. (And there’s yet another issue for another day.) The fit between scholarship as practiced in a dissertation (however long it takes to research and write it) and the vast majority of scholarly work across the disciplines is close.

For teaching? For stewardship over a curriculum? Perhaps the fit is not so precise. Going back to Michael Bérubé’s address on the situation of the humanities in academia, there is the issue of how a department (in the humanities or any other subject) might train graduate students to do other things besides be professors when all the people in that department are professors who were trained to be professors. Louis Menand’s recent talk at Swarthmore remarked on the same problem, with Menand concluding that he couldn’t imagine how to advise or teach a graduate student who wanted to apply her degree to a profession outside of academia, even while he conceded that it is increasingly urgent that graduate programs have this kind of flexibility.

This Catch-22 sentiment is a common one. I hear it even at Swarthmore and other small liberal-arts colleges: the coupling of a belief that any course of study can lend itself to any kind of working future (indeed, any manner of living a life) with the belief that professors usually don’t have the training or knowledge to advise students about any specific profession except for academia itself.

———-

Let’s start with that claim before asking what, if anything, needs to change about graduate training. When this point is made defensively (as I think it was in Menand’s talk), it’s troubling. It brutally undercuts one of the most common claims about the “liberal arts” as an ideal: that they enable students to learn how to learn, to become active agents in interpreting and imagining the world, to acquire knowledge as needed and wanted. If in fact this is true, how can it be that the faculty who teach within a liberal arts approach are incapable of enacting the supposed virtues of that course of study? Such that they cannot be expected to understand professions outside of academia or help a student see connections between what they have studied in college and their future aspirations or goals?

There is a legitimate point that we can make more carefully. I cannot really advise students about careers in museums, development organizations, non-profit community groups, carpentry or graphic design (for a few examples) if the advice a student is seeking from me is about the specific conditions of employment and specific ‘insider culture’ of those professions unless I just happen to have studied them in my own scholarship or have had past professional involvement with them. (In the latter case, that’s probably only useful if I’m a relatively junior faculty member.) The only job market where I have valuable insider knowledge is the academic job market.

But that shouldn’t be an excuse to shoo away a potential advisee. Because I do have two things I can help the student with. First, I should be able to help a student see how their own studies give them potential insight into the kind of work (or in other cases, the sort of living) done in any given field. If I can’t help a student see how their work in a history major can give them useful ideas about how to approach museum exhibition or advertising or law enforcement, I’m not much of a teacher, nor am I living up to the typical argument for a “liberal arts” approach to education. Second, if I can’t sit down with my student and learn together some of what there is to know out there about the “insider culture” of a given profession, find some contacts (maybe alumni, maybe faculty, maybe staff, maybe none of the above) and thus give the student a much clearer and more focused agenda when they do find themselves talking to a career advisor, I’m also not doing a great job as an advisor and teacher. I should be able to show students how to learn and understand what they want to learn and understand whether or not it’s my own area of specialized knowledge, because that’s what we claim our students are learning how to do.

It’s just that it is easier to do for an area where I have more knowledge and experience, and the texture and detail of my advice in those cases will be richer and deeper. When we’re busy, we naturally emphasize trying to match any questions to the person best qualified to answer them. If I’m talking to a student who wants to be a civil engineer, it’s inefficient for me to spend a lot of time acquiring the spot knowledge to help them get to the next step of that goal when there are a bunch of engineers on the other side of the garden at the back of my building. But if I’m talking with a history major who is interested in careers in design and technology and wants to know how history might help inform that future, I should have a bunch of ideas and suggestions readily at hand. It’s only when we get down to brass tacks like, “So what kinds of previous experience or graduate education do entry-level employees in product design typically have?” that I need to say, “Ok, I’m not the best person to ask.”

————-

The question then becomes, does academia need more people who are the best people to ask about a wider range of life experiences and careers? Large research universities with professional schools often do have a bigger range of other kinds of training and experience within their faculty, quite intentionally so in many cases. Small liberal-arts colleges usually don’t: the primary training and work experience of most faculty is academic from beginning to end. When a faculty member has spent time doing other work prior to commencing doctoral study, that often doesn’t figure as much as it could in how the community knows that person and how that person produces knowledge and interpretation within the community.

Not long before he died, my father asked me if it was possible for a successful lawyer with long experience like him to teach the last five or ten years of his working life. I said that it might be that some law schools would be interested, but probably not if he did not already have a connection to them and not if he hadn’t done some form of legal scholarship in his field of expertise. I also thought that there were community colleges that might be interested, and in fact, he had taught a few courses in that setting already. I think he would have been a great teacher in almost any setting: I could easily see him teaching a course on law or labor relations in a college like Swarthmore.

So why don’t we recruit someone like that more often to teach? There are some practical barriers. One-off courses taught by outsiders tend to dangle from the edge of an undergraduate curriculum, poorly integrated into the larger course of study. You can’t plan around taking such a course if you’re a student or directing students to such a course if you’re an advisor. Increasing the supply of such courses more steadily is a short road to adjunctification, which is especially corrosive to small teaching-centered residential colleges.

And if we had longer-term contracts aimed at recruiting this kind of “experiential diversity” in a faculty, how would we know what the content of a candidate’s experience and thinking amounted to? How would we be able to assess who could teach well in a typical liberal-arts environment? You wouldn’t be hiring someone to be a pre-professional trainer: you’d be looking instead for someone who could teach about the ideas, the problems, the open questions, in a broad domain of practice and knowledge. Hiring someone like my dad to teach “NLRB Regulations I” at a place like Swarthmore would be totally out of place with everything else the institution is doing. But a course like “An Insider’s Look at the Culture of Legal Practice in American Society, 1960-1995” might fit in perfectly. While I think he was a natural teacher, I don’t think he could have walked in off the street to teach a course like that any more than I could have walked into Swarthmore at the start of my first year of graduate school and taught a survey in African history.

If you set out to consciously diversify the range of experiences and training present in a typical liberal-arts faculty, you’d really have to be looking for and having an active preference for people like Toby Miller: compatible with and knowledgeable about the internal cultures and practices of academia, trained in some fashion close to the normal course of study, but with a much more wide-ranging set of previous experiences and a conscious dedication to using those experiences to provide a different angle or interpretation of “the liberal arts”.

Miller recounts his working history: “radio DJ, newsreader, sports reporter, popular-culture commentator, speech-writer, cleaner, merchant banker, security guard, storeman-packer, ditch digger, waiter, forester, bureaucrat, magazine and newspaper columnist, blogger, podcaster, journal editor, youth volunteer, research assistant, suicide counsellor, corporate consultant, social-services trainer, TV presenter and secretary”. If a candidate showed up in a job search for Swarthmore with that resume, I don’t think we’d actively discriminate against him if he had his i’s dotted and t’s crossed in a ‘normal’ form of graduate training. But neither would we see any of that past history as a qualifying asset likely to make the candidate a usefully different kind of teacher or advisor.

I’m as guilty of this perspective as anyone. Tenure-track hires are weighty decisions that can have consequences for thirty or forty years–and therefore tend to produce risk aversion in even the most flighty or idiosyncratic person. Someone who has an innovative, edgy research project or teaching style but whose graduate training is otherwise familiar seems about as much of a risk as most of us want to take: hiring someone whose professional identity is as much vested in what they did before or outside of academia is often too unnerving unless the discipline in question has a particular preference or tolerance for certain kinds of outside-of-academia work (say, as in the case of economics). Considering that Toby Miller’s idiosyncratic path is partially what informs his sharp critique of the institutionalization of the humanities in American academia, it might be that legitimate worries (“can this person teach? can they do well those things that we’re confident the institution should be doing?”) can’t easily get away from fears about what an outsider sensibility can do to an insider’s lifeworld.

I don’t underestimate the practical problems. I criticized Menand for saying that he can’t imagine how to advise anyone but an aspirant professor about their career choices, but here I’ll have to cop a similar plea. I can’t easily imagine in actual practice how we’d go about having a few Toby Millers by deliberate design rather than happy accident. But I can imagine that students, faculty and staff would benefit a lot if we could dream up a way to accomplish that objective.

Particularism as a Big Idea
Wed, 20 Feb 2013
https://blogs.swarthmore.edu/burke/blog/2013/02/20/particularism-as-a-big-idea/

One of the interesting points about Jared Diamond’s books that has come up recently at Savage Minds is that cultural anthropologists don’t write “big books” much any longer, that the disciplinary vision of cultural and social anthropology is now so anti-universalist, anti-teleological, so devoted to the particular character of specific places and times, that a sweeping analysis of large-scale themes or generalized theory seems out of bounds. (David Graeber’s Debt was mentioned as an exception.) Cultural history exhibits something of the same tendency towards the microhistorical and particular, as does a good deal of humanistic scholarship in general.

This alone seems enough to inflame one set of critics who seem to regard it as both heretical and superficial to refuse to pursue generalized, sweeping conclusions and universally valid principles that arise out of empirical data. So this, in fact, seems to me the “big book” that we need an anthropologist or historian to write, aimed at the same audiences that read Diamond, Pinker, E.O. Wilson, Haidt and other sociobiologists, evolutionary psychologists, neurobiologists and “big history” writers who offer strong universalizing or generalizing accounts of all cultures and societies across space and time. What we need is someone who can write a big book about why the most interesting things to say about human cultures are particular, local and contingent.

That book would have to avoid falling into the trap of being the straw man that Pinker in particular loves to hit over the head. It would need to start by saying that of course there are transhistorical, universal truths about human biology and minds and the physical constraints of environment and evolution. “Nature” matters, it’s real, it’s important. And equally of course, there are institutions which have persistent force across time and space either because human beings carry those institutions with them and reproduce them in new settings, or because there really are functional, practical problems which arise repeatedly in human societies.

A preference for local, situated, grounded studies does not require a blanket rejection of the biological, material or functional dimensions of human history and experience. What I think the “big book” could say is two major things:

1) that many forms of generalizing social science make far stronger claims than they are factually and empirically entitled to make, and that this problem gets much worse when the generalization is meant to describe not just all existing societies but all of human history.

2) that much generalizing or universalizing social science uses a description of the foundational or initial conditions of social and cultural life as if it were also a description of particular, detailed experience, and thereby misses what is interesting and important about the detailed variations between different places and times–including the fact that such details exist in the first place. Essentially, strongly generalized accounts of all human history make a big deal out of the most obvious and least interesting aspects of human existence.

The first point is simpler, but should command far more respect among scholars and intellectuals who describe themselves as scientists and empiricists than it seems to. I’m going to focus on it for the remainder of this essay and take up the second point another day.

Let me use the example of “stages” of world history, which comes up prominently in Diamond’s new book, primarily as an assertion that there are “traditional” societies that reflect an original or early stage of human history and “modern societies”, with everything presumably arranged neatly in between them. (Diamond is not much interested in his new book in the in-between, and actually has never really been interested in it–Guns, Germs and Steel more or less argues that the early migration and development of human societies across the planet has determined all later histories in a directly symmetrical fashion.)

Most contemporary anthropologists and historians react negatively when they come across an account that tries to arrange human societies along a single spectrum of evolutionary change. To some extent, that reaction is conditioned by the historical use of such characterizations to justify Western racism and colonialism. But even accounts of evolutionary stages of human history that scrupulously avoid those associations are factually troubled.

What’s the issue? Let’s take a point that crops up in Diamond, in Napoleon Chagnon’s work and in a number of other sociobiological and evolutionary-psychology accounts of human variation.

If someone says, “Many human societies practice some form of warfare” or “organized violence is common in most human societies”, that’s fine. The anthropologist or historian who pushes back on that simple generalization is just being a tendentious jerk. Sure, it raises the question of what “warfare” is, but the generalization is so gentle that there’s plenty of space to work out what “many” and “warfare” mean.

Step up a notch: “All human societies practice some form of warfare”. This kind of generalization is easy to break, and it is frustrating when someone making a generalization of this kind digs in their heels to defend it. It’s really only defensible as an icebreaker in a conversation about the phenomenon in question. It can only hold as an airtight assertion if “warfare” is defined so broadly that it includes everything from World War II to a football game.

Refine it a step using an evolutionary schema: “All human societies once practiced some form of warfare, but warfare grew into a more rarified, restricted and directed phenomenon as states grew in scale and organizational sophistication.” This sounds like it’s being more careful than the “all human societies practice” generalization but in fact it is even easier to break, because it rests on a linear account of the history of the state (and then a linear account of warfare’s relationship to that history). This is simply not true: human political institutions across time and space have all sorts of variations and really haven’t moved progressively towards a single form or norm until the exceptionally recent past. Even now there are some striking variations at a global scale–and it’s equally clear now that Fukuyama’s End of History assertion that liberal democracy is the final stage of human political evolution is just plain wrong. Beyond the present moment lies the unknown as far as political structures and practices go.

You can break the general assertion not just by citing endless examples of political structures that don’t fit neatly between “traditional” and “modern” societies, or endless examples of “warfare” with non-linear relationships to changing political structure over time. You can also break it at the end that Diamond and Chagnon focus on, in the assertion that “traditional societies” in recent history are unchanged survivals, a window into the distant past. There’s increasing evidence, for example, that there has been a succession of large-scale polities in the Amazonian rainforest and the eastern Andes over a very long period of time, polities that simply happened to be absent or weak at the time that Europeans first pushed into these areas. Assuming that small-scale societies of various kinds in the same region where such a history unfolded were unchanging, pristine and unrelated to other societies is at the very least unsupported by any direct evidence. More to the point, such an assumption actively overlooks evidence, in many cases in the modern world, that “pristine” societies of this type live where they live because they were trying to get away from larger or more centralized polities, that there is a dynamic relationship between them–a relationship which surely includes ideas and practices of violence and warfare.

This is where the use of evolution as the organizing idea of such accounts is so aggravating. Not because it’s “scientific” but because it’s not. Evolutionary biologists know better than to describe speciation as progress towards an end or a goal, to assume that natural selection must always produce more complex or sophisticated organisms over time, or that evolutionary processes should ever be represented by a single line of descent. Go ahead, show an evolutionary biologist a single line that goes from Devonian tetrapods to homo sapiens with every ‘transitional’ animal in between neatly marked as one more interval on the way to us and get ready for a big eyeroll and an exasperated sigh.

Sure, there’s a successive relationship over time between forms of political organization in human history, but if you were going to chart it, you’d have something that looked hugely bushy, with all sorts of groupings, thousands of radial and convergent movements at all scales of time. And if you tried to place “warfare” in relationship to that complexity it would get even messier and more intricate.

Anything that arranges human history as a matter of “stages” progressing neatly towards the modern is just factually wrong, before we ever get to the troubled instrumental and ideological history of such schemas. Yes, that includes most versions of dialectical materialism: the dogged attempts of some Marxist historians and anthropologists in the 1970s and 1980s to get everything before 1500 into some kind of clear dialectical schema have long since crashed into either the assertion that there has only ever been one general world-systemic polity in human history (the “5,000-year-old world system”) or the assertion that lots of variant premodern histories collapsed into a single capitalist world-system after 1500.

When scholars who see politics or culture or warfare or many other phenomena in granular and variable terms rise to object to strong generalizing or universalizing accounts, their first motive is an empirical one: it just isn’t like that. Human political structures didn’t ALL go from “simple tribes” to “early states” to “feudalism” to “absolutist centralization” to “nation-states” to “modern global society”. They didn’t even go that way in Western Europe, really. Certain kinds of structures or practices appeared early in human history, sure, and then recurred because they radiated out from some originating site of practice or because of parallel genesis in relationship to common material and sociobiological dimensions of human life. Other common structures and practices appeared later, sometimes because new technological or economic practices allow for new scales or forms of political life and structure. But there is a huge amount of variation that is poorly described by a linear relation. There are movements between large and small, hierarchical and flat, organized and anarchic, imperial and national, etc., which are not linear at all but cyclical or amorphous.

That’s the “big idea” that people with their eye on variation and particularism could try to sell more aggressively: that the stronger your generalizations and universalisms about human culture and societies are, the more likely they are to be just plain wrong, factually and empirically wrong, and that the only way to dodge that wrongness while sustaining those generalizations is to cherry-pick your examples and characterize anyone who calls you on it as a pedant or an ideologue.

More on Menand https://blogs.swarthmore.edu/burke/blog/2013/02/08/more-on-menand/ Fri, 08 Feb 2013 20:07:48 +0000

Almost back to feeling normal, so I thought I’d return to my somewhat fever-delirious notes on the Menand talk last week at Swarthmore and see what I could pull out of them.

Menand’s talk, following some of his recent writing, was broken into three sections. The first was a quantitatively-oriented summary of the current trends in higher education in general and in the humanities in specific. The second was a review of the history of the humanities in academia in the last 75 years or so. The third was a meditation on possible solutions to the problems described in the first two parts.

Though he was appropriately cautious about the language of “crisis”, pointing out that the humanities in particular have been, by their own lights, perpetually in crisis, the numbers he laid out suggested that there is a real crisis at the moment and that it is gathering momentum. In particular, he focused on dismal enrollment trends in the humanities at major research universities (including history, which as usual is a borderlands discipline that pops in and out of focus in these kinds of conversations), and on the degree to which students in the US have long preferred pre-professional degrees in Accounting, Nursing, and so on over any of the liberal arts (including the sciences or the hard social sciences). Interestingly, he argued that small undergraduate colleges like Swarthmore are one of the few islands of relative calm in the storm, that enrollment trends for the humanities at most such colleges are only mildly negative and the support of most administrations is strong. Menand noted many other negative trends in alignment with enrollment, such as the near-total vanishing of grants and support for research in the humanities.

In the second part, he gave what I found to be a curiously reactive and Kuhnian account of the transformation of humanistic scholarship since 1950 that concluded that we’re in a moment of atheoretical ennui, that there are no big ideas or theories. (Notably he made no reference at all to digital humanities, “distant reading”, text mining or anything else along these lines.) Still, he offered this history as a hopeful one, showing the resilience and relevance of humanistic thought, and observing that each successive move, while not progressing towards a greater cumulative knowledge that was more “true” or “accurate” in the whiggish sense, generatively opened up the intellectual and social spaces of humanistic practice. There were some really appealing ideas in this account–one I liked was the argument that the “public intellectual” is a red herring, that the problem with much humanistic thought is not that it communicates poorly but simply that many people (particularly in other academic disciplines) disagree with it and will continue to do so. This he took to be a source of strength and mission rather than a problem.

The third part is where I felt a bit let down. Menand’s writing makes clear that he doesn’t think formal interdisciplinarity is an answer to the problems of disciplinarity, because interdisciplinarity IS disciplinarity: it ratifies the disciplines. In his writing, he also doesn’t think disciplinarity is a problem; he thinks it is the consequence of professionalization, and that professionalization is a necessary part of the value of academic institutions. In the talk, he moved off of this line somewhat, in a fuzzy way. What I heard in there somewhere, maybe because I’m predisposed to hear it, was that there needs to be more conscious generalism, less over-specialization, in the humanities.

Menand also said that the humanities need to basically get into everybody else’s shit more, that a more generalist sensibility doesn’t just let you help students see how the humanities connect to the world, but also lets you get involved in discussions about neurobiology and economics with more confidence.

So how do we get there? Menand said, “Well, you can’t rearrange departments or practices as they exist, that’s too hard, so you’ll just have to wait for us to train a new generation of scholars who have slightly different practices and outlooks”. Not only does that align very poorly with the immediacy of the existential threat he laid out at the beginning, but it also seems very nearly synonymous with saying, “Yep, we’re screwed.” It just doesn’t seem that hard to me to create some space for curricular and intellectual movement, to loosen the constraints, within existing practices.

Menand also said that he felt he couldn’t possibly advise undergraduates about any other career besides an academic one, because he doesn’t know anything about other careers. This also seems really wrong in the context of his urging that humanists speak to and about anything that involves the human subject and human practices. How could we possibly be comfortable engaging in that range of argument and yet say that we have nothing to say to students about the lives they might live unless they want to be professors like us? It may well be that I cannot tell a student specifically about current tangible considerations around employment in the museum industry (to use Menand’s example) without having worked in museums myself. But I can surely talk to students about the idea and institutional history of the museum, about ways to imagine exhibition, about new media forms and practices that might transform museums and exhibition, and so on. Still, I thought he also ended up making an unintentional argument that if we want more flexibility and range in humanistic thought we may also need to look for some humanists who come from completely different backgrounds or training rather than just from slightly reformed ‘traditional’ graduate education.

Of the ideas he put forth in the last part of his talk, the one that I found the most useful might be the simplest to pull off (at least in the spirit of how I heard it): that one strategy that might help the humanities is simply to readdress what they teach, to redirect the focus of a course so that it speaks back to or anchors itself in concepts, subjects or disciplines outside the ‘traditional’ remit of humanistic academia.

There are of course humanists who’ve been doing this kind of thing as a steady part of their practice for their whole careers. I do think there is probably a way to go about it that is particularly generative and useful for our students and that doesn’t rub up against our colleagues outside the humanities so abrasively. When I teach my class on the history of international development institutions and the intellectual history of development, for example, I’m certainly speaking back to the way that “development” is conventionally imagined in the discipline of economics. But I’m also trying to let that way of thinking live and breathe inside my class, so that one outcome of my course is that a student might choose to prefer that way of thinking and working with development. Often, I think, when humanists set off to talk about science or other forms of practice in the world, they forestall or foreclose the possibility of an escape from or challenge to the humanistic imagination: they define critique as a form of negation or rejection rather than as a productive enrichment or complication. (Yes, I know, it happens far more in reverse, but that’s a problem for a different day.)

Because I can readily see how we might offer more courses like this, I’m not sure that things are quite as gloomy or difficult as Menand imagines they are–given the way he tells the history, it was almost the resigned account of a person who imagines himself the last survivor of a vanishing paradigm, Jor-El waving good-bye to the infant Superman as he rockets from Krypton, rather than as the reform-minded guardian of a grand tradition. I think we’re in the middle of a ferment full of new ideas and practices (as well as the enduring strength of many old ones). The trick will be to see the possibilities of this moment more fully, in a more joyous and permissive mood.
