The (Ab)Uses of Fantasy

Evidently I’m not alone in thinking that last week’s episode of Game of Thrones was a major disappointment. By this I (and other critics) do not mean that it was simply a case of poor craftsmanship. Instead, it featured a corrosive error in judgment that raised questions about the entire work, both the TV show and the book. Game of Thrones has always been a high-wire act; this week the acrobat very nearly fell off.

In long-running conversations, I’ve generally supported both the violence that GoT is known for and the brutal view the show takes of social relations in its fantasy setting, particularly around gender. Complaints about its violence often (though not invariably) come from people whose understanding of high fantasy draws on a very particular domestication of the medieval and early modern European past that has some well-understood touchstones: a relentless focus on noble or aristocratic characters who float above and outside of their society; a restriction of violence to either formal warfare or courtly rivalry; a simplification (or outright banishment) of the political economy of the referent history; orientalist or colonial tropes of cultural and racial difference, often transposed onto exotic fantasy types or creatures; essentially modern ideas about personality, intersubjectivity, sexuality, family and so on smuggled into most of the interior of the characters.

These moves are not in and of themselves bad. Historical accuracy is not the job of fiction, fantasy or otherwise. But it is also possible that audiences start to confuse the fiction for the referent, or that the tropes do some kind of work in the present that’s obnoxious. That’s certainly why some fantasy writers like China Mieville, Philip Pullman and George R.R. Martin have variously objected to the high fantasy template that borrows most directly from Tolkien. It can lead to a misrecognition of the European past, to the sanctification of elitism in the present (by allowing elites to see themselves as nobility), and also simply to the reduction of creative possibility. If a fantasy writer is going to draw on history, there are histories outside of Europe–but medieval and early modern Europe also suggests other templates.

Martin is known to have drawn on the Wars of the Roses and the Hundred Years War (as did Shakespeare) and quite rightly points out, when criticized about the violence in Game of Thrones, that his books, if anything, are still less distressing than the historical reality. It’s a fair point on several levels–not just ‘accuracy’, but that the narrative motion of those histories has considerable dramatic possibility that Tolkienesque high fantasy simply can’t make use of. Game of Thrones is proof enough of that point!

But GoT is not Tuchman’s A Distant Mirror nor any number of other works. A while back, Crooked Timber did a lovely seminar on Susanna Clarke’s novel Jonathan Strange and Mr. Norrell. Most of the commenters focused on the way in which the novel reprises the conflict between romantics and utilitarians in 19th Century Britain, and many asked: so what do you gain by telling that story as a fantasy rather than a history?

To my mind, you gain two things. The first is that there may be deeper and more emotional truths about how it felt to live and be in a past (or present) moment that you only gain by fiction, and that some of those in turn may only be achievable through fiction that amplifies or exaggerates through the use of fantasy. The second is that you gain the hope of contingency. It’s the second that matters to the last episode of Game of Thrones.

Historical fiction has trouble with “what if?” The more it uses fiction’s permission to be “what if”, the more it risks losing its historicity. It’s the same reason that historians don’t like counterfactuals, for the most part: one step beyond the moment of contingency and you either posit that everything would have turned out the same anyway, or you are stuck on a wild ride into an unknown, imaginary future that proceeds from the chosen moment. Fantasy, on the other hand, can follow what-ifs as long as it likes. A what-if where Franklin decides to be ambassador to the Iroquois rather than the French is a modest bit of low fantasy; a what-if where Franklin summons otherworldly spirits and uses the secret alchemical recipes of Isaac Newton is a much bigger leap away, where the question of whether “Franklin” can be held in a recognizable form starts to kick in. But you gain in that move not only a lot of pleasure but precisely the ability to ask, “What makes the late colonial period in the U.S. recognizable? What makes the Enlightenment? What makes Franklin?” in some very new ways.

Part of what governs the use of fantasy as a way of making history contingent is also just storytelling craft: it allows the narratives that history makes available to become more interesting, more compressed, more focused, to conform not just to speculation but to the requirements of drama.

So Game of Thrones has established that its reading of the late medieval and early modern past brings forward not only the violence and precarity of life and power in that time but also the uses and abuses of women within male-dominated systems of power. Fine. The show and the books have established that perfectly well at this point. So now you have a character like Sansa who has had seasons and seasons of being in jeopardy, enough to fill a lifetime of shows on the Lifetime channel. And there is some sense of a forward motion in the character’s story. She makes a decision for the first time in ages; she seems to be playing some version of the “game of thrones” at last, within the constraints of her role.

So why simply lose that sense of focus, of motion, of narrative economy? If Monty Python and the Holy Grail had paused to remind us every five minutes that the king is the person who doesn’t have shit on him, the joke would have stopped being funny on the second go. If Game of Thrones is using fantasy simply to remind us that women in its imagined past-invoking world get raped every five minutes unless they are plucky enough to sign up with faceless assassins or own some dragons, it’s not using its license to contingency properly in any sense. It’s not using it to make better stories with better character growth, and it is not using it to imagine “what if?” If I wanted to tell the story of women in Boko Haram camps as if it were suffused with agency and possibility, I would rightly be attacked for trying to excuse crimes, dismiss suffering and ignore the truth. But that is the world that we live in, the world that history and anthropology and political science and policy and politics must describe. Fiction–and all the more, fantasy–has other options, other roads to walk.

There is no requirement for the show to have Sansa raped by Ramsay Bolton, no truth that must be told, not even the requirement of faithfulness to the text. The text has already (thankfully!) been discarded this season when it offers nothing but meandering pointlessness or, in the case of Sansa, nothing at all. So to return suddenly to a kind of conservation of a storyline (“False Arya”) that clearly will have nothing to do with Sansa in whatever future books might one day be written is no justification at all. If it’s Sansa moving into that narrative space, then do something more with that movement. Something more in dramatic terms and something more in speculative, contingent terms. Even in the source material Martin wants to use, there are poisoners and martyrs, suicides and lunatics, plotters and runaways he or the showrunners could draw upon for models of women dealing with suffering and power.

Fantasy means you don’t have to do what was done. Sansa’s story doesn’t seem to me to offer any narrative satisfactions, and it doesn’t seem to make use of fantasy’s permissions to do anything new or interesting with the story and the setting. At best it suggests an unimaginative and desperate surrender to a character that the producers and the original author have no ideas about. At worst it suggests a belief that Game of Thrones’ sense of fantasy has been subordinated to the imperative of “we have to be even grosser and nastier next time”! That’s not fantasy, that’s torture porn.

The Ground Beneath Our Feet

I was a part of an interesting conversation about assessment this week. I left the discussion thinking that we had in fact become more systematically self-examining in the last decade in a good way. If accrediting agencies want to take some credit for that shift, then let them. Complacency is indeed a danger, and all the more so when you have a lot of other reasons to feel confident or successful.

I did keep mulling over one theme in the discussion. A colleague argued that we “have been, are and ought to be” committed to teaching a kind of standardized mode of analytic writing and that therefore we have a reason to rigorously measure across the board whether our students are meeting that goal. Other forms of expression or modes of writing, he argued, might be gaining currency in the world, but they shouldn’t perturb our own commitment to a more traditional approach.

I suppose I’m just as committed to teaching that kind of writing as my colleague, for the same reasons: it has a lot of continuing utility in a wide variety of contexts and situations, and it reinforces other less tangible habits of thought and reflection.

And yet, I found myself unsettled on further reflection about one key point: that it was safe to assume that we “are and ought to be” committed. It seems to me that there is a danger to treating learning goals as settled when they’re not settled, just as there is a danger to treating any given mix of disciplines, departments and specializations at a college or university as something whose general stability is and ought to be assured. Even if it is probable that such commitments will not change, we should always act as if they might change at any moment, as if we have to renew the case for them every morning. Not just for others, but for ourselves.

Here’s why:

1) even if a goal like “teaching standard analytic writing” is absolutely a bedrock consensus value among faculty and administration, the existence of that consensus might not be known to the next generation of incoming students, and the definition of a familiar practice for faculty might be unfamiliar to those students. When we treat some feature of an academic environment as settled or established, there almost doesn’t seem to be any reason to make it explicit, or to define its specifics, and so if students don’t know it, they’ll be continuously baffled by being held accountable to it. This is one of the ways that cultural capital acts to reproduce social status (or to exclude some from its reproduction): when a value that ought to be disembedded from its environment and described and justified is instead treated as an axiom.

2) even if something like “teaching analytic writing” is absolutely a bedrock consensus value among faculty, if some in a new generation of students consciously dissent from that priority and believe there is some other learning goal or mode of expression which is preferable to it, then faculty will never learn to persuade those students, and will have to rely on a brute-force model to compel students to comply. Sometimes that works in the same way that pulling a child away from a hot stove works: it kicks the can down the road to that moment when those students will recognize for themselves the wisdom of the requirement. But sometimes that strategy puts the goal itself at risk by exposing the degree to which faculty themselves no longer have a deeply felt or well-developed understanding of the value of the requirement they are forcing on their students.

3) Which leads to another point: what if the supposed consensus value is not a bedrock consensus value even among faculty? If you assume it is, rather than treat the requirement as something that needs constantly renewed investigation, you’ll never really know if an assumed consensus is eroding. Junior and contingent faculty may say they believe in it, but really don’t, which contributes to a moral crisis in the profession, where the power of seniority is used to demand what ought to be earned. Maybe some faculty will say they believe in a particular requirement but actually don’t do it well themselves. That’s corrosive too. Maybe some faculty say they believe in it but what they think “it” is is not what other people think it is. You’ll never know unless the requirement or value is always being revisited.

4) Maybe there is genuine value-based disagreement or discord within the faculty that needs to be heard, and the assumption of stability is just riding roughshod over that disagreement. That’s a recipe for a serious schism at some point, perhaps at precisely the wrong moment for everyone on all sides of that kind of debate.

5) Maybe the requirement or value is a bedrock consensus value among faculty but it absolutely shouldn’t be–e.g., the argument about that requirement is between the world as a whole and the local consensus within academia. Maybe everything we think about the value we uphold is false, based on self-referring or self-validating criteria. At the very least, one should defy the world knowingly, if one wants to defy the world effectively.

I know it seems scary to encourage this kind of sense of contingency in everything we do in a time when there are many interests in the world that wish us ill. But this is the part of assessment that makes the most sense to me: not measuring whether what we do is working as intended (though that matters, too) but asking every day in a fresh way whether we’re sure of what we intend.

Apples for the Teacher, Teacher is an Apple

Why do AltSchool, as described in this article, and similar kinds of tech-industry attempts to “disrupt” education bug me so much? I’d like to be more welcoming and enthusiastic. It’s just that I don’t think there’s enough experimentation and innovation in these projects, rather than too much.

The problem here is that the tech folks continue to think (or at least pretend) that algorithmic culture is delivering more than it actually is in the domains where it has already succeeded. What tech has really delivered is mostly just the removal of transactional middlemen (and, of course, the addition of new transactional middlemen–in a really frictionless world, the network Uber has established wouldn’t need Uber, and we’d all just be monetizing our daily drives on an individual-to-individual basis).

Algorithmic culture isn’t semantically aware yet. When it seems to be, it’s largely a kind of sleight-of-hand: a leveraging and relabelling of human attention, or the computational brute-forcing of delicate tasks that our existing bodies and minds handle easily, the equivalent of trying to use a sledgehammer to open a door. Sure, it works, but you’re not using that door again, and by the way, try the doorknob with your hand next time.

I’m absolutely in agreement that children should be educated for the world they live in, developing skills that matter. I’m also in agreement that it’s a good time for radical experiments in education, many of them leveraging information technology in new ways. But the problem is that the tech industry has sold itself on the idea that what it does primarily is remove the need for labor costs in labor-intensive industries, which just isn’t true for the most part. It’s only true for jobs that were (or still are) rote and routinized, or that were deliberate inefficiencies created by middlemen. Or on the idea that tech will solve problems that are intrinsic to the capabilities of a human being in a human body.

So at the point in the article where I see the promise that tech will overcome the divided attention of a humane teacher, I both laugh and shudder. I laugh because it’s the usual tech-sector attempt to pretend that inadequate existing tech will become superbly useful tech in the near-term future simply because we’ve identified a need for it to be (Steve Jobs reality distortion field engaged) and I shudder because I know what will happen when they keep trying.

The central scenario in the article is this: you build a relatively small class with a relatively well-trained, attentive, human teacher at the center of it. So far so good! But the tech, ah the tech. That’s there so that the teacher never has to experience the complicated decision paths that teachers presently experience even in somewhat small classes. Right now a teacher has to decide, sometimes in a day, which students will get the lion’s share of the attention, has to rob Peter to pay Paul. We can’t have that in a world where every student should get all the attention all the time! (If nothing else, that expectation is an absolutely crystallized example of how much the new tech-industry wealthy hate public goods: they do not believe that they should ever have to defer their own needs or satisfactions to someone else. The notion that sociality itself, in any society, requires deferring to the needs of others and subsuming one’s own needs, even for a moment, is foreign to them.)

So the article speculates: we’ll have facial recognition software videotaping the groups that the teacher isn’t working with, and the software will know which face to look at and how to compress four hours of experience into a thirty-minute summary to be reviewed later, and it will also know when there are really important individual moments that need to be reviewed in depth.

Here’s what will really happen: there will be four hours of tape made by an essentially dumb webcam, and the teacher will be required to watch it all for no additional compensation. One teacher will suddenly not be teaching 9-5 and making do as humans must, being social as we must. That teacher will be asked to review and react to twelve or fourteen or sixteen hours of classroom experience just so the school can pretend that every pupil got exquisitely personal, semantically sensitive attention. The teacher will be sending clips and materials to every parent so that this pretense can be kept up. When the teacher crumbles under the strain, the review will be outsourced, and someone in a silicon sweatshop in Malaysia will be picking out random clips from the classroom feed to send to parents. Who probably won’t suspect, at least for a while, that the clips are effectively random or even nonsensical.

When the teacher isn’t physically present to engage a student, the software that’s supposed to attend to the individual development of every student will have as much individual, humane attention to students as Facebook has to me. That is to say, Facebook’s algorithms know what I do (how often I’m on, what I look at, what I tend to click on, when I respond) and it tries (oh, how it tries!) to give me more of what I seem to do. But if I were trying to learn through Facebook, what I would need is not what I do but what I don’t do! Facebook can only show me a mirror at best; a teacher has to show a student a door. On Facebook, the only way I could find a door is for other people–my small crowd of people–to show me one.

Which is probably another way that AltSchool will pretend to be more than it can be, the same way all algorithmic culture does–leveraging a world full of knowing people in order to create the Oz-like illusion that the tools and software provided by the tech middleman are what is creating the knowledge.

Our children will not be raised by wolves in the forest, but by anonymously posted questions answered on a message board by a mixture of generous savants, bored trolls and speculative pedophiles.

Hearts and Minds

Much as I disliked Jonathan Haidt’s recent book The Righteous Mind overall, I’m quite interested in many of the basic propositions that this strain of cognitive science and social psychology is proposing about mind, consciousness, agency, responsibility and will. Most often, what frustrates me is not how unsettling the scholars writing in this vein are but how much they domesticate their arguments or avoid thinking through the implications of their findings.

When we read The Righteous Mind together at Swarthmore, for example, one of my chief objections to Haidt’s own analysis is that he simply asserts that what he and others have called WEIRD psychosocial dispositions (Western, Educated, Industrialized, Rich and Democratic) at some point emerged in recent human history (as the acronym suggests) and have never been common or universal at any point since, including now. Haidt essentially leverages that claim into an argument that “conservative” dispositions are the real universal, which I don’t think he even remotely proves, and then gets even more into the weeds by suggesting that people with WEIRD-inflected moral dispositions would accomplish more of their social and political objectives if only they acted somewhat less WEIRD. The argument achieves maximum convolution in Haidt when he seems to suggest that he prefers WEIRD outcomes, because he’s largely stripped away the ground on which he or anyone else could argue for that preference as something other than the byproduct of a cognitive disposition. Why are those outcomes preferable? If they are preferable in terms of some kind of fitness, that they produce either better individual or species-level outcomes in terms of reproduction and survival, presumably that will take care of itself over time. If they are preferable because of some other normative rationale, then where are we getting the capacity for reason that allows us to recognize that? Is it WEIRD to think of WEIRD, in fact? Is The Righteous Mind itself just a product of WEIRD cognitive dispositions? (E.g., the proposition that one should write a book which is based on research which argues that the writing of books based on research should persuade us to sometimes make moral arguments that do not derive their force from the writing of books based on research.)

————

Many newer cognitivist, evolutionary-psychological and memetics-themed arguments get themselves into the same swamp. Is memetics itself just a meme? What kind of meme reproduces itself more readily by revealing its own character? Is “science” or “rationality” just a fitness landscape for memes? Daniel Kahneman at least leaves room for “thinking slow”, which is potentially the space inhabited by science, but the general thrust of scholarly work in these domains makes it harder and harder to account for “thinking slow”, for a self-aware, self-reflective form of consciousness that is capable of accurately or truthfully understanding some of its own conditions of being.

But it isn’t just cognitive science that is making that space harder and harder to inhabit. Various forms of postmodern and poststructuralist thought have arrived at some similar rebukes to various forms of Cartesian thinking via some different routes. So here we are: the autonomous self driven by a rational mind with its own distinctive individual character and drives is at the very least a post-1600 invention. This to my mind need not mean that the full package of legal, institutional and psychological structures bound up in that invention are either fake impositions on top of some other “real” kind of consciousness or sociality, nor that this invention is always to be understood as and limited to a Eurocentric imposition. “Invention” is a useful concept here: technologies do not drift free of the circumstances of their creation and dissemination but they can be powerfully reworked and reinterpreted as they spread to other places and other circumstances.

Still, if you believe the new findings of cognitivists, we may be at the real end of that way of thinking about the nature of personhood and identity, and thus maybe at the cusp of experiencing our sense of selfhood differently as well. I think this is where I really find the new cognitivists lacking in imagination, to the point that I end up thinking that they don’t really believe what their own research supposedly shows. If they’re right (and this might apply to some flavors of poststructuralist conceptions of subjectivity and personhood, too), then most of our social structures are profoundly misaligned with how our minds, bodies and socialities actually work. What makes me queasiest about a lot of contemporary political and social discourse in the US in this respect is how unevenly we invoke psychologically or cognitively inflected understandings of responsibility, morality, and capacity. Often we seem to invoke them when they suit our existing political and social commitments or prejudices and forget them when they don’t. About which Haidt, Kahneman and others would doubtless say, “Of course, that’s our point”–except that if you believe that’s true, then that would apply to their own research and the arguments they make about its implications, that cognitivism is itself evidence of “moral intuitions”.

————-

Think for example about the strange mix of foundational assertions that now often govern the way we talk about the guilt or innocence of individuals who are accused of crimes or of acting immorally. There’s always been some room for debating both nature and nurture in public disputes over criminality and immorality in the US in the 19th and 20th centuries, but the mix now is strikingly different. If you take much of the new work in cognitive science seriously, its implications for criminal justice systems ought to be breathtakingly broad and comprehensive. It’s not clear that anyone is ever guilty in the sense that our current systems assume that we can be, e.g., that as rational individuals, we have chosen to do something wrong and should be held accountable. It’s equally unclear whether we can ever be expected to accurately witness a crime, or whether we are ever capable of accurately judging the guilt or innocence of individuals accused of crimes without being subject both to cognitive bias and to large-scale structures of power.

But even among the true believers in the new cognitive science, claims this sweeping are made at best fitfully, and equally many of us in other contexts deploy cognitive views of guilt, responsibility and evidence only when they reinforce political or social ideologies that we support. Many of us (including myself) argue for the diminished (or even absent) responsibility of at least some individuals for behaving criminally or unethically when we believe that they are otherwise the victims of structural oppression or that they are suffering from the aftermath of traumatic experience. But some of us then (including myself) argue for the undiminished personal-individual-rational responsibility of individuals who possess structural power, regardless of whether they have cognitive conditions that might seem to diminish responsibility or have suffered from some form of social or experiential trauma.

Our existing maps of power don’t overlay very well in some cases onto what the evidence of the new cognitive science might try to tell us, or even sometimes onto other vocabularies that try to escape a Cartesian vision of the rational, self-ruling individual. A lot of cultural anthropology describes bounded, local forms of reason or subjectivity and argues against expecting human beings operating within those bounds to work within some other form of reason. We try to localize or provincialize any form of reason, all modes of subjectivity, but then we often don’t treat the social worlds of the powerful as yet another locality. We don’t try for an emic understanding of how particular social worlds of power see and imagine the world, but instead actually treat many social actors in those worlds as if they are the Cartesian, universal subjects that they claim to be, and thus hold them responsible for what they do as if they could have seen and done better from some point of near-universal scrutiny of the rational and moral landscape of human possibility.

———–

From whatever perspective–cognitive science, poststructuralism, cultural anthropology, and more–we keep reanimating the Cartesian subject and the social and political structures that were made in its name even when we otherwise believe that minds, selves, consciousness and subjectivity don’t work that way and ought not to work that way. I think at least to some extent this is either because we cannot really imagine the social and political structures that our alternative understandings imply (and thus resort to metaphors: rhizomes, etc.) or because we can imagine them quite well and are terrified by them.

The new cognitivism or evolutionary psychology, if we took it wholly seriously, would either have to tolerate a much broader range of behaviors now commonly defined as crimes and ethical violations as being natural (because where could norms that argue against nature possibly come from, save perhaps from some countervailing cognitive or evolutionary operation) or alternatively would have to approach crime and ethical misbehavior through diagnosis rather than democracy.

The degree to which poststructuralism of various kinds averts its anticipatory gaze when actually confronted by institutionalizations of fragmented, partial or intersectional subjectivity (as opposed to pastward re-readings of subjects and systems now safely dead or antiquated) is well-established. We hover perpetually on the edge of provincializing Europe or seeing the particularity of whiteness because to actually do it is to establish the boundedness, partiality and fragility of subjects that we otherwise rely upon to be totalizing and masterful even in our imagination of how that center might eventually be dispersed or dissolved.

I’m convinced that the sovereign liberal individual with a capacity (however limited) for a sort of Cartesian rationalism was and remains an invention of a very particular time and place and thus was and remains something of a fiction. What I’m not convinced of is whether any of the very different projects that either know or believe in alternative ways of imagining personhood and mind really want what they say they want.

“The Child Repents and Is Forgiven”

I occasionally out myself here at this blog, on Facebook or at Swarthmore as having a fairly encyclopedic knowledge about mainstream superhero comics, like a few other academics, but I’ve been much less inclined to make even a limited foray into either comics scholarship or comics blogging than I have with some of the other domains of popular culture that I know fairly well from my own habits of fan curation and cultural consumption.

Nevertheless, I’ve followed many comics blogs since the mid-2000s, most of which have traversed the same arc as academic blogs or any other kind of weblogs: from a small subculture dominated by strong personalities who were drawn to online writing for idiosyncratic reasons to a more professionalized, standardized, and commercialized mode of online publication. Two days ago, a well-known male comics blogger named Chris Sims, who had moved from maintaining his own early personal blog to paid writing on a shared platform blog called Comics Alliance, wrote an apology for having bullied and harassed a female blogger, Valerie D’Orazio, back in that earlier era of online writing.

The timing of the apology, as it turns out, was at least partly a result of Sims breaking through from comics blogging to actually writing a major mainstream title for Marvel, an X-Men comic intended to be a nostalgic revisitation of those characters as they were in the early 1990s. News of his hiring led to D’Orazio writing about how hard that was for her to stomach, especially given that his bullying was particularly aimed at her after she was given a similar opportunity to write a mainstream Marvel Comics title.

There’s more to it all (there always is), including an assertion by some that “Gamergaters” are somehow involved in stirring this up, but I want to take note of two separate and interesting aspects of this moment.

The first is an excellent reprise of the full discursive history involved in this controversy by Heidi MacDonald. Not only does MacDonald add a lot of nuance to the controversy while remaining very clear on the moral landscape involved, she ends up providing a history of blogging and social media that might be of considerable interest to digital humanists who otherwise have no interest in comics as a genre. In particular, I think MacDonald accurately identifies how blogging used to be a highly individualized practice within which particular writers had surprising amounts of influence over the domains that drew their attention but also had largely undiscussed and unacknowledged impact on the psychological and personal lives of other bloggers, for good and ill. In a sense, the early blogosphere was a more direct facsimile of the post-1945 “republic of letters” than we’ve often realized: bloggers behaved in many ways just as print critics and pundits behaved, with rivalries and injuries inflicted upon one another but also with relational support and mutuality. Where they were interested in a cultural domain that had almost no tradition of mainstream print criticism attached to it (or where that domain had been especially confined or limited in scope), the new blogosphere often had a surprisingly intense impact on mainstream cultural producers. I’m recalling, for example, how very briefly before I started a formal weblog I published some restaurant reviews alongside some academic materials on a static webpage, and immediately got attention from some area restaurants and from some local journalists, which I hadn’t really meant to do at all.

MacDonald underscores the difference between this early environment and now, especially in terms of identity politics. It really is not just a story of going from individual curation of a subculture to a more mainstream and commercial platform, but also of how much attention and discourse in contemporary social media no longer really reproduces or enacts that older “republic of letters”. Attention in the early blogosphere was as individually curated as the blogs themselves, and commentariats tended to be much more fragmented and particular to a site. Now commentariats are much larger in scale, much less invested in the particular culture of a particular location for content, and are directed in their attention by much more palpably algorithmic infrastructures. This is sometimes good, sometimes bad, but is at the least very different.

The second aspect of the Sims controversy that interests me is the very active debate in various comments sections about whether Sims should be forgiven (by D’Orazio or anyone else). This has become a common discursive structure in the wake of controversies of this kind. Not just a debate over what the proper rhetorical and substantive composition of contrition should be, but whether the granting of forgiveness is either a good incentive for producing similar changes in the consciousness of past and present offenders or an attempt to renormalize and cover up harassment by placing it perpetually pastward of the person making a pro forma apology.

One of the key issues in that ongoing debate is whether the presence of self-interest so contaminates an apology as to make it worthless. E.g., if Sims has to go public in order to keep his job offer from Marvel intact, then is that a sign that he doesn’t really mean it, and thus that his apology is worthless?

I think the discussion about the dangers of renormalization, of quickly kicking over the traces, is valid. But here I’d suggest this much: if male (or white, etc.) cultural producers, professionals, politicians, etc., come to feel that their ability to succeed professionally depends upon acknowledging bad behavior in the past and committing to a different kind of public conduct in the present, then that’s a sign of successful social transformation. The presence of self-interest doesn’t invalidate a public apology, but instead documents a new connection between professionalism, audiences and success. That might turn out to be a bigger driver of change than waiting for a total and irrefutable transformation of innermost subjectivity.

Raise the Barn/Autopsy the Corpse

A more detailed thinking-through of the case of Sweet Briar, and a proposal.

Five places to start a dissection of Sweet Briar College and the decision of its Board to close the school:

Laura McKenna, “The Unfortunate Fate of Sweet Briar’s Professors”.

Jack Marshall, “The Sweet Briar Betrayal”.

Roanoke Times Editorial Board, “Our View: Sweet Briar Board Should Resign”.

Brian C. Mitchell, “The Crack in the Faberge Egg”.

Deborah Durham, “Suddenly Liminal: Reflections on Sweet Briar College Closing”.
—————

The thinking through. The more the details come out, the odder the decision to close appears. Sweet Briar had more liabilities and debts than its endowment size might suggest, and it clearly lacked a strategic plan that could provide answers to its shrinking enrollments. But to close so suddenly, while under the leadership of an interim President, and with no leadership in its Admissions office, makes little sense. The faculty and staff had spent a year considering plans. Why not hire a “crisis President” and take a shot at some of those plans? Surely there’s someone talented out there who would relish the chance to turn around a college in crisis. And surely the current students would appreciate their loyalty to the institution being rewarded by such an effort, rather than being pushed out the door allegedly for their own best interests. I think it’s reasonable to wonder whether there is a plan that isn’t being disclosed–perhaps that the only way to fully void Indiana Fletcher Williams’ will is to go completely out of business?

The proposal. If the current faculty and staff and students of Sweet Briar would welcome it, why not gather some current provosts, presidents, senior staff and faculty of liberal arts colleges together at Sweet Briar or nearby for a weekend-long summit that reviews the plans composed over the last year and suggests other possible solutions? A sequel, perhaps, to the meeting that the former President of Swarthmore Rebecca Chopp and the outgoing President of Haverford Dan Weiss organized at Lafayette College in 2012.

If there’s little interest among current faculty, staff and students at Sweet Briar, then there’s no point to trying to have such a meeting in a time-sensitive, hastily-organized way. But even if they aren’t interested, I think there should be such a meeting in the next two years, as a post-mortem. I do not accept the thought that some (including McKenna) offer that Sweet Briar is a sign of the imminent death of the small liberal-arts college, in no small measure because I don’t even think Sweet Briar was doomed to die.

————

Reading about the discussions that have been going on at Sweet Briar itself for the last year, I think it’s clear that folks there understood some of what they’d have to do to be viable, and that some of what they’d have to do would be hard to achieve, especially for faculty. Even in a situation of existential threat, it’s very difficult for faculty to dramatically reimagine the structure of a curriculum and the nature of their professional practices, and to find a way to systematically reduce the size of a faculty. You can’t have over one hundred faculty positions and only 500 students. You can’t have more than two hundred non-faculty employees and have only 500 students either.

This would be job #1 of a potential “emergency summit”: redesign a small college curriculum so that it has 75 or fewer faculty positions and yet retains intellectual and philosophical coherence. Typically when senior administrators are brought in to cut positions at (or “detenure”) an institution, they do it by finding out which departments have the lowest enrollments or which departments are the most politically hapless or exposed. That’s the wrong way to do it no matter what the crisis is, but it’s especially wrong in a situation where the institution itself has an identity problem.

Brian Mitchell’s “Faberge” essay points out that the small liberal-arts colleges that have scrambled to build highly distinctive, imaginative or innovative programs, or have restructured their overall institutional emphasis, are doing OK, precisely because they have something to offer prospective students beyond “small and liberal-arts”. St. John’s College is the classic established example of such a program, but there are many others: Berea College, College of the Atlantic, Quest University, Colorado College, Hampshire College. At the Lafayette meeting I mentioned, I was really struck by how many other small colleges with more limited resources were doing really creative things–and like Mitchell, I was also struck that the wealthiest and best-known liberal arts colleges were dramatically more risk-averse and mainstream.

I’m certain that there are ways to organize a faculty of fifty or seventy-five intellectuals and scholars that channel their teaching and engagement to great effect without having to offer forty-six majors, minors and certificates. I often despair of getting my colleagues at Swarthmore to grasp this same point, that a small college, even a rich one, has a choice between being a great small college or a shitty little university. The more programs a small college tries to have, the more fields it feels it must represent, the more specializations it feels it requires, the more it’s choosing to be a shitty little university. Faculty are usually the ones driving that kind of choice: this is one thing we can’t blame the administrators for. So unless a summit to #SaveSweetBriar were willing to dramatically reimagine what studying at Sweet Briar could entail, and to accept that not every job can be saved, this meeting I’m proposing has to be a post-mortem that will warn the living rather than save the patient.

Job #2 is also clearly something that the faculty and senior staff at Sweet Briar are painfully conscious of, which is to break some of the restrictions surrounding the gifts that founded and sustained the college. But it’s been done before: Sweet Briar found a way to get free of the initial requirement that its students be white. Even if Sweet Briar were to remain a college for women, it could have a dynamic admissions strategy that sought out students from outside the United States, and non-traditional students inside the U.S. (which might then influence the curricular redesign in #1).

Job #3 is to look at the financial picture after #1 and #2 and see what else the institution can do more cheaply or not at all. People who imagine that there’s a lot of waste in a budget, any budget, are almost always wrong. But there might be administrative operations that a small college with a newly envisioned mission doesn’t need to pursue. And stop hiring consultants: that would be another purpose for this summit, to build a “pro bono” network of peer experts who can pitch in until the college is stabilized. The summit could look with fresh eyes at the day-to-day operations of the college and see what makes sense and what doesn’t make sense going forward.

Job #4 is a capital campaign that follows straight off of #SaveSweetBriar. Use the redesigned, reimagined curriculum as a selling point to bring in new supporters, as well as to tap the obviously considerable goodwill of Sweet Briar’s established donor base. I think a summit could at least help lay the groundwork for such a campaign.

This is obviously ambitious for a weekend, especially if it’s a meeting convened on short notice. But I don’t think it’s completely implausible.

If this ends up being a post-mortem instead, then the review of the issues involved could be broader, but I still think might follow the same rough contours: curricular design, admissions practices, donor practices, fiscal restraint (that avoids being austerity). All of it aimed at asking: how can liberal-arts colleges avoid making the same mistakes? What do we have to do in order to secure our collective future?

#SaveSweetBriar

The more I read about the decision to shut down Sweet Briar College, the less sense it makes to me.

Essentially, when I look at Sweet Briar, I see the following:

1) A physical plant, a faculty and a staff that are formidable assets.
2) A sizeable endowment.
3) Complex liabilities in terms of conditions of gifts, etc., that might be negotiable with the right legal strategy.

I see also the following things that need to be done:

1) Dramatic change in the curriculum. Sweet Briar has a huge, sprawling curriculum in relationship to the size and character of its student body. Yes, this means shedding staff and faculty, but more importantly, it means coming up with a distinctive idea about what the education at Sweet Briar is about.
2) Wider recruitment. Of international students, maybe of men if the legal strategy can be found, of non-traditional students.
3) Novel strategies for setting tuition. Maybe Sweet Briar could be the first SLAC to be brutally honest about “discounting” in relationship to means-testing.
4) Rapid commitment of new energies behind pedagogical innovation. Suppose you straight-up say, “We’ll take students whose parents have a lot of money to subsidize education and we’ll give them a completely new form of individual attention, something they can’t get from a MOOC or a large impersonal university or even a traditional selective small college. We’ll build singular programs around singular individuals, every single one.” Maybe, for example, every student admitted to Sweet Briar gets a “budget” to spend on commissioning particular courses or instructors. Anything that makes it seem like a place that is not like anywhere else in terms of its pedagogy.

I see assets, I see possibilities, and I see a Board of Trustees and an interim President who gave in precipitously rather than explore those possibilities and assets.

I’d love to see a pro bono project of small liberal-arts college presidents, provosts and faculty who would agree to descend upon Sweet Briar for a weekend of creative thinking, to help their Board and President see the futures they haven’t seen. I’ll pledge my time right now if there’s sufficient interest in such a thing.

Where There’s Smoke

My main problem with Laura Kipnis’ much-discussed essay “Sexual Paranoia” is the excluded middle it outlines. Practitioners of dialectic modes of argument often claim that this approach is necessary in order to locate and recommend that middle. It’s the “Untouchables” theory of rhetorical struggle: they put one of yours in the hospital, you put one of theirs in the morgue! Until it’s all over and everybody gets to live in peace and drink because Prohibition was repealed, or something like that.

I think Kipnis is right that building rules and formalisms that encode a particular kind of person who depends upon institutions and governments to protect them from harm is a mistake in a great many ways. I think she’s wrong in imagining that the alternative is an empowered human subject who makes decisions about sex, erotics and love within the alternative formalism we’ve chosen to call “consent”, a sort of contractual relation between autonomous self-owning individuals. In the new rules, we forbid relationships that we definitionally hold to be non-consensual because of how we describe power as a function of formal institutional roles. In the old rules that Kipnis extols, we sort every erotic and sexual relationship into consent and non-consent and apply an if-then assessment. If non-consent, criminal; if consent, allowable.

The excluded middle here is the messiness of being human, which Kipnis says she prizes (and her powerful, important scholarship throughout her career backs that statement up). But that messiness has to include the possibility that acts, feelings, relations which satisfy even the new rules as being “affirmatively consensual” could be nevertheless profoundly objectionable in those same messy, human terms. And some of them are sufficiently objectionable that they would not just be a “you say tomato, I say tomato” kind of matter for individuals to sort out on their own, but that institutions might in totally human and subjective terms decide to act upon. Kipnis is against the new rules, but in many ways implicitly is defending the old rules (which are just as much rules): that you might suffer the contempt of friends and colleagues, but you should never fear the discipline of institutions. I think the most human thing would be for institutions to act as humanely as we dream of individuals acting: as judicious, wise, complex, sensitive but also strong, decisive and resolute where need be. To act not just because they must (the lawyers say!) or not act because they mustn’t (the lawyers say again!)

Kipnis doesn’t name him by name, but the case of Peter Ludlow at Northwestern is clearly on her mind. In the excluded middle, why not just say what clearly should be said? That he should not have done what he himself admits that he did, and that the wrongness of its doing doesn’t depend on the particulars of consent? That an ideology that maintains that we own ourselves, that we can give consent or refuse it as autonomous individuals, is also an ideology that should allow that we can and should own ourselves sufficiently to keep our zippers zipped in many circumstances? If we’re to hold on to liberal autonomy, let’s hold on to most of it. The worst of all worlds would be to hold on to consent as a liberal form of contract but to dispense with its associated aspiration for self-control and self-mastery. The specter of a self that can consent but cannot be expected to act differently across different social and professional worlds, that has its desire spilling over the walls because that self is a dark romantic kernel inside the rational contracting shell is a familiar ghost, but we shouldn’t welcome its recurrent haunting.

The case that makes this point most clearly for me is of the Yale moral philosopher described by a graduate student who had an affair with him. The details are depressingly familiar, as the author herself recognizes as the essay wears on: an older man who lies proficiently about his marital status, about his sex life, about his intentions. Who turns out to tell the same lies to many women. If that were all of it alone, then that alone is worth writing about, worth sharing, worth accusing. Why not? Why should serial deceit be rigorously private and protected? Surely real individual freedom, especially in matters of sex, love and desire, should include the freedom to share our stories–and our warnings. But also in this case, and all cases of relationships between people, power matters. Because it turns out that the Yale moral philosopher isn’t just a serial liar and intellectual hypocrite, but very possibly is also in breach of the old rules of consent that Kipnis agrees are still vitally important to maintain and enforce. She says of them that the real harassers should suffer all that is coming to them: but we should hardly wait to see a fire break out every time there’s smoke in the air. In all our institutions in modern life, the air is thick with smoke. The lies that old men tell, the advice that fraternity brothers give about drunk women at parties, and so on: our lives are often like the former mining town of Centralia Pennsylvania, where coal seams burn underground unchecked, the fire of harassment and assault always underneath. Kipnis invokes Andrea Dworkin as if to laugh at where we’ve arrived, making mainstream institutional systems of discipline and punishment that affirm her view of all heterosexuality as contaminated by power. Kipnis is right to reject the essential gloominess of Dworkin’s view of so many human relationships as fundamentally contaminated and irredeemable, but Dworkin’s description of power being everywhere in sexuality (and otherwise) is fairly on the mark.

So why not a Yale University which in human and humane terms says to that moral philosopher: we don’t approve of what you’re doing with your reputation as a scholar and teacher, of what you’re doing as a human being, even if you’ve been careful enough to follow some writ, to discipline your desire just enough so as not to hurt and lie to a person who is at this moment your student, to follow the rules just enough. We don’t approve in general of how you use your influence and your power, we don’t think very much of a moral philosophy that applies so very little to your own conduct. And so: go somewhere else? When did a few books full of moral philosophy and a bunch of lectures become so valuable that they earned someone a lifelong place no matter whom they’ve hurt or how they act? Why not imagine institutions that could be just wise enough, just knowing enough, that they might act in human terms, just as we expect from our wise and knowing friends and acquaintances? (Even, perhaps, from our wise enemies.) Why not imagine institutions less as stern sovereigns, or as machines that protect us from both messy desire and weary wisdom? Why not imagine communities–including communities of work–as legitimately collapsing public and private together, as being just as messy as individuals are in how they reward and forbid, act and fail to act? If we want the notion of individuals consenting–and individuals being responsible for their consent–then perhaps we should add to that another shopworn idea, that with great (or even modest) power comes great (or even modest) responsibility.

A defense of the necessary, even desirable, messiness of human life is not about painting a huge unknown “grey area” and saying that everything within it is nobody’s business but the people in the grey. It’s not saying that what happens in Vegas stays in Vegas. It ought to be the opposite: a brutally honest commitment to humanistic empiricism, to the vivisection of the human heart, to the unflinching witnessing of what we do, what we are, what we feel. And if we see, when we see, lies and pain and suffering, we shouldn’t rush to call it desire and pleasure and freedom.

Practice What We Preach?

I’ve been reworking an essay on the concept of “liberal arts” this week. One of the major issues I’m trying to think about is the relatively weak match between what many liberal arts faculty frequently say about the lifelong advantages of the liberal arts and our own ability to model those advantages ourselves. In quite a few ways, it seems to me that many academics do not demonstrate in their own practices and behavior the virtues and abilities that we claim follow on a well-constructed liberal arts education. That is not necessarily a sign that those virtues and abilities do not exist. One of the oldest known oddities surrounding teaching is that a teacher can guide a student to achievements that the teacher cannot himself or herself achieve. Good musicians can train great musicians, decent artists can train masterful ones, and so on. Nevertheless, it feels uncomfortable that we commonly defend liberal arts learning as producing competencies and capacities that we do not ourselves exhibit or even in some cases seem to value. The decent musician who is training a virtuoso performer nevertheless would like to play as well as their pupil if they only could, and tries to do so when possible.

Let me give four examples of capacities or skills that I have seen many faculty at many institutions extol as good outcomes of a liberal arts education.

First, perhaps most commonly, we often claim that a liberal arts graduate will be intellectually adaptable, will be ready to face new challenges and new situations by learning new subjects, approaches and methods on an as-needed or wanted basis.

Second, many of us would argue that a well-trained writer, speaker and thinker should be able to proficiently and persuasively argue multiple sides of the same issue.

Third, faculty often claim that a liberal arts graduate will be able to put their own expertise and interests in wider perspective, to see context, to step outside of the immediate situation.

Fourth, many liberal-arts curricula require that students be systematically engaged in pursuing breadth of knowledge as well as depth, via distribution requirements or other general-education structures.

So, do most faculty in most colleges and universities model those four capacities in their own work and lives? My impressionistic answer would be, “Not nearly enough”.

Are we adaptable? Do we regularly tackle new subjects or approaches, and respond well to changing circumstances? Within narrowly circumscribed disciplinary environments, yes. Most active scientific researchers have to deal with a constantly changing field, and most scholars will tackle a very new kind of problem or a new setting at some point in their intellectual lives. However, many of us insist that learning new subjects, approaches and methods is an unforgiving, major endeavor that requires extensive time and financial support in order to work outside of the ordinary processes of our professional lives. That’s not the kind of adaptability we promise our graduates. We’re telling them that they’ll be better prepared to cope with wrenching changes in the world, with old lines of work disappearing and new ones appearing, with seeing fundamentally new opportunities and accepting new ways of being in community with others. And I really believe that this is a fair promise, but perhaps only because the major alternative so far has been narrowly vocational, narrowly pre-professional training, which very clearly doesn’t prepare students for change at all. We win out by default. If students and parents increasingly doubt our promise, it might be in some measure because we ourselves exemplify it so poorly. Tenured faculty at research universities keep training graduate students the same way for professorial work even as the market for academic labor is gutted, for example, and largely leave those students to find out for themselves what the situation is really like.

Most of us show little aptitude or zest for arguing multiple sides of an issue in our own advocacy within our communities, and only a bit more in our work as scholars. Arguendo is a dirty word in most of the social media streams I read: I find it rarer and rarer to see academics experimenting with multiple branches of the same foundational line of thought, or exploring multiple foundations, whether for the sheer pleasure of it or to strengthen their own most heartfelt case. Indeed, I see, especially among some humanists, a kind of anti-intellectual exasperation with such activity, as something one does reluctantly to manage social networks and maintain affective ties rather than as a demonstration of a deeply important capacity. The same goes for putting ourselves in some kind of larger perspective, for understanding our concerns as neither transcendently important nor woefully trivial. We promise to show our students how to make connections, to see their place in the world, to choose meaningfully, and then do little to strengthen our own capacities for the same.

Do we have our own “distribution requirements”? At the vast majority of academic institutions, not at all. Is there any reward at all for learning about other fields, for learning to understand the virtues and uses of disciplines other than one’s own, for generalism? Any imperative to do so? No, and in fact many faculty will tell you that this isn’t possible given the intensive demands on their time and attention within their own fields of study and their own teaching labor. But if it’s not possible for us, how is it possible for our students? Most liberal-arts faculty teach in institutions that maintain as one of their central structural principles that it is readily possible for a student to move from advanced mathematics to advanced history to studio art to the sociology of elementary education in a single week and to do well in all of those subjects. If we think that is only possible for one brief pupating moment until a final, irreversible choice is made, we ought to say so, and thus indemnify ourselves against the demands we make of our students. That, however, would sit uncomfortably alongside all the grand claims we make about learning how to think, about the idea that a major isn’t a final choice, that you can do lots of things with a liberal arts education.

———

Liberal arts faculty have got to demonstrate far more effusively and systematically, in our own lives and practices, what we say are the virtues of a liberal arts education. Or we have to offer a trickier narrative about those virtues, one that explains how it is that we can teach what we cannot ourselves do. Which might also raise another question: are we actually the best people to be doing that teaching?

Posted in Academia, Defining "Liberal Arts", Swarthmore | 6 Comments

The People Perish

The trouble with Hillary Clinton’s email is not Hillary Clinton’s email.

The trouble is that the Democratic Party is apparently committed beyond recall to nominating an individual to be President whose entire strategic vision is:

a) I’m owed. It’s my turn.
b) Remember how good it felt to break a barrier to aspiration in 2008? You can feel that way again.
c) Something something demographics.

Particularly c). As long as we’re remembering 2008, remember all that absolute horseshit that progressives were unloading about how the demographics were against the Republican Party, how it was just a bunch of old white people, about the ascendancy of a new American majority? You don’t even need to have a platform, or a vision, or an ideology! It’s destiny!

You can look long and hard for any other sign of a Democratic idea or vision and not find one. At best, what you’ll see is the same bland technocratic defense of competency that the party has offered since Mondale’s defeat in 1984. We’re not crazy, our guys went to good schools, we make good policy, look at this nice range of legislation we drafted. But even on technocratic grounds, the Obama Administration is at best a hodgepodge of good and bad. Eric Holder’s Justice Department lays out the facts on Ferguson? Great, if reactive, but I’ll see that and raise you Arne Duncan’s destructive Education Department, which could just as easily have been Bush’s Education Department.

On vision, though? It’s nowhere. Competency without conviction is not enough. The Republican Party base has a ton of conviction, and that conviction is sufficient to produce the outcomes they want whether or not they are actually in power, because they can speak clearly and consistently about what they’re looking for on every single issue they encounter, indeed, on issues they have yet to encounter. Put that up against competency without vision, and it will push the technocrat towards accommodating the only strong, coherent, aligned voices speaking on a particular issue.

The idea that Clinton is inevitable is possibly the most depressing prospect in mainstream electoral politics that I’ve seen in my lifetime. The best I could hope for at this point is that she’s the Millard Fillmore of her party, the last of a kind and a confirmation of the necessity to break up the Democrats as they are and build something new in their place.

Posted in Politics | 4 Comments