Hearts and Minds

Much as I disliked Jonathan Haidt’s recent book The Righteous Mind overall, I’m quite interested in many of the basic propositions that this strain of cognitive science and social psychology is proposing about mind, consciousness, agency, responsibility and will. What frustrates me most often is not how unsettling the scholars writing in this vein are but how much they domesticate their arguments or avoid thinking through the implications of their findings.

When we read The Righteous Mind together at Swarthmore, for example, one of my chief objections to Haidt’s own analysis was that he simply asserts that what he and others have called WEIRD psychosocial dispositions (Western, Educated, Industrialized, Rich and Democratic) at some point emerged in recent human history (as the acronym suggests) and have never been common or universal at any point since, including now. Haidt essentially leverages that claim into an argument that “conservative” dispositions are the real universal, which I don’t think he even remotely proves, and then gets even further into the weeds by suggesting that people with WEIRD-inflected moral dispositions would accomplish more of their social and political objectives if only they acted somewhat less WEIRD. The argument achieves maximum convolution when Haidt seems to suggest that he prefers WEIRD outcomes, because he’s largely stripped away the ground on which he or anyone else could argue for that preference as something other than the byproduct of a cognitive disposition. Why are those outcomes preferable? If they are preferable in terms of some kind of fitness, meaning that they produce better individual or species-level outcomes in terms of reproduction and survival, presumably that will take care of itself over time. If they are preferable because of some other normative rationale, then where are we getting the capacity for reason that allows us to recognize that? Is it WEIRD to think of WEIRD, in fact? Is The Righteous Mind itself just a product of WEIRD cognitive dispositions? (E.g., the proposition that one should write a book which is based on research which argues that the writing of books based on research should persuade us to sometimes make moral arguments that do not derive their force from the writing of books based on research.)

————

Many newer cognitivist, evolutionary-psychological and memetics-themed arguments get themselves into the same swamp. Is memetics itself just a meme? What kind of meme reproduces itself more readily by revealing its own character? Is “science” or “rationality” just a fitness landscape for memes? Daniel Kahneman at least leaves room for “thinking slow”, which is potentially the space inhabited by science, but the general thrust of scholarly work in these domains makes it harder and harder to account for “thinking slow”, for a self-aware, self-reflective form of consciousness that is capable of accurately or truthfully understanding some of its own conditions of being.

But it isn’t just cognitive science that is making that space harder and harder to inhabit. Various forms of postmodern and poststructuralist thought have arrived at similar rebukes to Cartesian thinking via different routes. So here we are: the autonomous self driven by a rational mind with its own distinctive individual character and drives is at the very least a post-1600 invention. This to my mind need not mean that the full package of legal, institutional and psychological structures bound up in that invention is either a fake imposition on top of some other “real” kind of consciousness or sociality, nor that this invention is always to be understood as and limited to a Eurocentric imposition. “Invention” is a useful concept here: technologies do not drift free of the circumstances of their creation and dissemination, but they can be powerfully reworked and reinterpreted as they spread to other places and other circumstances.

Still, if you believe the new findings of the cognitivists, we may be at the real end of that way of thinking about the nature of personhood and identity, and thus maybe at the cusp of experiencing our sense of selfhood differently as well. I think this is where I really find the new cognitivists lacking in imagination, to the point that I end up thinking that they don’t really believe what their own research supposedly shows. If they’re right (and this might apply to some flavors of poststructuralist conceptions of subjectivity and personhood, too), then most of our social structures are profoundly misaligned with how our minds, bodies and socialities actually work. What makes me most queasy about a lot of contemporary political and social discourse in the US in this respect is how unevenly we invoke psychologically or cognitively inflected understandings of responsibility, morality, and capacity. Often we seem to invoke them when they suit our existing political and social commitments or prejudices and forget them when they don’t. About which Haidt, Kahneman and others would doubtless say, “Of course, that’s our point”–except that if you believe that’s true, it applies equally to their own research and to the arguments they make about its implications: cognitivism is itself just evidence of “moral intuitions”.

————-

Think for example about the strange mix of foundational assertions that now often govern the way we talk about the guilt or innocence of individuals who are accused of crimes or of acting immorally. There’s always been some room for debating both nature and nurture in public disputes over criminality and immorality in the US in the 19th and 20th centuries, but the mix now is strikingly different. If you take much of the new work in cognitive science seriously, its implications for criminal justice systems ought to be breathtakingly broad and comprehensive. It’s not clear that anyone is ever guilty in the sense that our current systems assume that we can be, e.g., that as rational individuals, we have chosen to do something wrong and should be held accountable. It’s equally unclear whether we can ever be expected to accurately witness a crime, or whether we are ever capable of accurately judging the guilt or innocence of individuals accused of crimes without being subject both to cognitive bias and to large-scale structures of power.

But even among the true believers in the new cognitive science, claims this sweeping are made only fitfully, and many of us in other contexts deploy cognitive views of guilt, responsibility and evidence only when they reinforce political or social ideologies that we support. Many of us (including myself) argue for the diminished (or even absent) responsibility of at least some individuals for behaving criminally or unethically when we believe that they are otherwise the victims of structural oppression or that they are suffering from the aftermath of traumatic experience. But some of us (including myself) then argue for the undiminished personal-individual-rational responsibility of individuals who possess structural power, regardless of whether they have cognitive conditions that might seem to diminish responsibility or have suffered from some form of social or experiential trauma.

Our existing maps of power don’t overlay very well in some cases onto what the evidence of the new cognitive science might try to tell us, or even sometimes onto other vocabularies that try to escape a Cartesian vision of the rational, self-ruling individual. A lot of cultural anthropology describes bounded, local forms of reason or subjectivity and argues against expecting human beings operating within those bounds to work within some other form of reason. We try to localize or provincialize every form of reason and all modes of subjectivity, but then we often don’t treat the social worlds of the powerful as yet another locality. We don’t try for an emic understanding of how particular social worlds of power see and imagine the world, but instead treat many social actors in those worlds as if they are the Cartesian, universal subjects that they claim to be, and thus hold them responsible for what they do as if they could have seen and done better from some point of near-universal scrutiny of the rational and moral landscape of human possibility.

———–

From whatever perspective–cognitive science, poststructuralism, cultural anthropology, and more–we keep reanimating the Cartesian subject and the social and political structures that were made in its name even when we otherwise believe that minds, selves, consciousness and subjectivity don’t work that way and ought not to work that way. I think at least to some extent this is because we either cannot really imagine the social and political structures that our alternative understandings imply (and thus resort to metaphors: rhizomes, etc.) or because we can imagine them quite well and are terrified by them.

The new cognitivism or evolutionary psychology, if we took it wholly seriously, would either have to tolerate a much broader range of behaviors now commonly defined as crimes and ethical violations as being natural (because where could norms that argue against nature possibly come from, save perhaps from some countervailing cognitive or evolutionary operation) or alternatively would have to approach crime and ethical misbehavior through diagnosis rather than democracy.

The degree to which poststructuralism of various kinds averts its anticipatory gaze when actually confronted by institutionalizations of fragmented, partial or intersectional subjectivity (as opposed to pastward re-readings of subjects and systems now safely dead or antiquated) is well-established. We hover perpetually on the edge of provincializing Europe or seeing the particularity of whiteness because to actually do it is to establish the boundedness, partiality and fragility of subjects that we otherwise rely upon to be totalizing and masterful, even in our imagination of how that center might eventually be dispersed or dissolved.

I’m convinced that the sovereign liberal individual with a capacity (however limited) for a sort of Cartesian rationalism was and remains an invention of a very particular time and place and thus was and remains something of a fiction. What I’m not convinced of is whether any of the very different projects that either know or believe in alternative ways of imagining personhood and mind really want what they say they want.

Posted in Academia, Oh Not Again He's Going to Tell Us It's a Complex System, Politics | 7 Comments

“The Child Repents and Is Forgiven”

Like a few other academics, I occasionally out myself here at this blog, on Facebook or at Swarthmore as having a fairly encyclopedic knowledge of mainstream superhero comics, but I’ve been much less inclined to make even a limited foray into either comics scholarship or comics blogging than I have with some of the other domains of popular culture that I know fairly well from my own habits of fan curation and cultural consumption.

Nevertheless, I’ve followed many comics blogs since the mid-2000s, most of which have traversed the same arc as academic blogs or any other kind of weblogs: from a small subculture dominated by strong personalities who were drawn to online writing for idiosyncratic reasons to a more professionalized, standardized and commercialized mode of online publication. Two days ago, a well-known male comics blogger named Chris Sims, who had moved from maintaining his own early personal blog to paid writing on a shared-platform blog called Comics Alliance, wrote an apology for having bullied and harassed a female blogger, Valerie D’Orazio, back in that earlier era of online writing.

The timing of the apology, as it turns out, was at least partly a result of Sims breaking through from comics blogging to actually writing a major mainstream title for Marvel, an X-Men comic intended to be a nostalgic revisitation of those characters as they were in the early 1990s. News of his hiring led to D’Orazio writing about how hard that was for her to stomach, especially given that his bullying was particularly aimed at her after she was given a similar opportunity to write a mainstream Marvel Comics title.

There’s more to it all (there always is), including an assertion by some that “Gamergaters” are somehow involved in stirring this up, but I want to take note of two separate and interesting aspects of this moment.

The first is Heidi MacDonald’s excellent reprise of the full discursive history involved in this controversy. Not only does MacDonald add a lot of nuance to the controversy while remaining very clear on the moral landscape involved, she ends up providing a history of blogging and social media that might be of considerable interest to digital humanists who otherwise have no interest in comics as a genre. In particular, I think MacDonald accurately identifies how blogging used to be a highly individualized practice within which particular writers had surprising amounts of influence over the domains that drew their attention but also had largely undiscussed and unacknowledged impact on the psychological and personal lives of other bloggers, for good and ill. In a sense, the early blogosphere was a more direct facsimile of the post-1945 “republic of letters” than we’ve often realized: bloggers behaved in many ways just as print critics and pundits behaved, with rivalries and injuries inflicted upon one another but also with relational support and mutuality. Where they were interested in a cultural domain that had almost no tradition of mainstream print criticism attached to it (or where that domain had been especially confined or limited in scope), the new blogosphere often had a surprisingly intense impact on mainstream cultural producers. I’m recalling, for example, how very briefly before I started a formal weblog I published some restaurant reviews alongside some academic materials on a static webpage, and immediately got attention from some area restaurants and from some local journalists, which I hadn’t really meant to do at all.

MacDonald underscores the difference between this early environment and now, especially in terms of identity politics. It really is not just a story of going from individual curation of a subculture to a more mainstream and commercial platform, but also of how much attention and discourse in contemporary social media no longer really reproduces or enacts that older “republic of letters”. Attention in the early blogosphere was as individually curated as the blogs themselves, and commentariats tended to be much more fragmented and particular to a site. Now commentariats are much larger in scale, much less invested in the particular culture of a particular location for content, and are directed in their attention by much more palpably algorithmic infrastructures. This is sometimes good, sometimes bad, but is at the least very different.

The second aspect of the Sims controversy that interests me is the very active debate in various comments sections about whether Sims should be forgiven (by D’Orazio or anyone else). This has become a common discursive structure in the wake of controversies of this kind: not just a debate over what the proper rhetorical and substantive composition of contrition should be, but over whether the granting of forgiveness is a good incentive for producing similar changes in the consciousness of past and present offenders or an attempt to renormalize and cover up harassment by placing it perpetually pastward of the person making a pro forma apology.

One of the key issues in that ongoing debate is whether the presence of self-interest so contaminates an apology as to make it worthless. E.g., if Sims has to go public in order to keep his job offer from Marvel intact, then is that a sign that he doesn’t really mean it, and thus that his apology is worthless?

I think the discussion about the dangers of renormalization, of quickly kicking over the traces, is valid. But here I’d suggest this much: if male (or white, etc.) cultural producers, professionals, politicians, etc., come to feel that their ability to succeed professionally depends upon acknowledging bad behavior in the past and committing to a different kind of public conduct in the present, then that’s a sign of successful social transformation. The presence of self-interest doesn’t invalidate a public apology, but instead documents a new connection between professionalism, audiences and success. That might turn out to be a bigger driver of change than waiting for a total and irrefutable transformation of innermost subjectivity.

Posted in Blogging, Politics | 1 Comment

Raise the Barn/Autopsy the Corpse

A more detailed thinking-through of the case of Sweet Briar, and a proposal.

Five places to start a dissection of Sweet Briar College and the decision of its Board to close the school:

Laura McKenna, “The Unfortunate Fate of Sweet Briar’s Professors”.

Jack Marshall, “The Sweet Briar Betrayal”.

Roanoke Times Editorial Board, “Our View: Sweet Briar Board Should Resign”.

Brian C. Mitchell, “The Crack in the Faberge Egg”.

Deborah Durham, “Suddenly Liminal: Reflections on Sweet Briar College Closing”.

—————

The thinking through. The more the details come out, the odder the decision to close appears. Sweet Briar had more liabilities and debts than its endowment size might suggest, and it clearly lacked a strategic plan that could provide answers to its shrinking enrollments. But to close so suddenly, while under the leadership of an interim President, and with no leadership in its Admissions office, makes little sense. The faculty and staff had spent a year considering plans. Why not hire a “crisis President” and take a shot at some of those plans? Surely there’s someone talented out there who would relish the chance to turn around a college in crisis. And surely the current students would appreciate their loyalty to the institution being rewarded by such an effort, rather than being pushed out the door allegedly for their own best interests. I think it’s reasonable to wonder whether there’s a plan that isn’t being disclosed–perhaps the only way to fully void Indiana Fletcher Williams’ will is to go completely out of business?

The proposal. If the current faculty and staff and students of Sweet Briar would welcome it, why not gather some current provosts, presidents, senior staff and faculty of liberal arts colleges together at Sweet Briar or nearby for a weekend-long summit that reviews the plans composed over the last year and suggests other possible solutions? A sequel, perhaps, to the meeting that the former President of Swarthmore Rebecca Chopp and the outgoing President of Haverford Dan Weiss organized at Lafayette College in 2012.

If there’s little interest among current faculty, staff and students at Sweet Briar, then there’s no point to trying to have such a meeting in a time-sensitive, hastily-organized way. But even if they aren’t interested, I think there should be such a meeting in the next two years, as a post-mortem. I do not accept the thought that some (including McKenna) offer that Sweet Briar is a sign of the imminent death of the small liberal-arts college, in no small measure because I don’t even think Sweet Briar was doomed to die.

————

Reading about the discussions that have been going on at Sweet Briar itself for the last year, I think it’s clear that folks there understood some of what they’d have to do to be viable, and that some of what they’d have to do would be hard to achieve, especially for faculty. Even in a situation of existential threat, it’s very difficult for faculty to dramatically reimagine the structure of a curriculum and the nature of their professional practices, and to find a way to systematically reduce the size of a faculty. You can’t have over one hundred faculty positions and only 500 students. You can’t have more than two hundred non-faculty employees and only 500 students, either.

This would be job #1 of a potential “emergency summit”: redesign a small college curriculum so that it has 75 or fewer faculty positions and yet retains intellectual and philosophical coherence. Typically, when senior administrators are brought in to cut positions at (or “detenure”) an institution, they do it by finding out which departments have the lowest enrollments or which departments are the most politically hapless or exposed. That’s the wrong way to do it no matter what the crisis is, but it’s especially wrong in a situation where the institution itself has an identity problem.

Brian Mitchell’s “Faberge” essay points out that the small liberal-arts colleges that have scrambled to build highly distinctive, imaginative or innovative programs, or have restructured their overall institutional emphasis, are doing OK, precisely because they have something to offer prospective students beyond “small and liberal-arts”. St. John’s College is the classic established example of such a program, but there are many others: Berea College, College of the Atlantic, Quest University, Colorado College, Hampshire College. At the Lafayette meeting I mentioned, I was really struck by how many other small colleges with more limited resources were doing really creative things–and like Mitchell, I was also struck that the wealthiest and best-known liberal arts colleges were dramatically more risk-averse and mainstream.

I’m certain that there are ways to organize a faculty of fifty or seventy-five intellectuals and scholars that channel their teaching and engagement to great effect without having to offer forty-six majors, minors and certificates. I often despair of getting my colleagues at Swarthmore to grasp this same point, that a small college, even a rich one, has a choice between being a great small college or a shitty little university. The more programs a small college tries to have, the more fields it feels it must represent, the more specializations it feels it requires, the more it’s choosing to be a shitty little university. Faculty are usually the ones driving that kind of choice: this is one thing we can’t blame the administrators for. So unless a summit to #SaveSweetBriar were willing to dramatically reimagine what studying at Sweet Briar could entail, and to accept that not every job can be saved, the meeting I’m proposing has to be a post-mortem that will warn the living rather than save the patient.

Job #2 is also clearly something that the faculty and senior staff at Sweet Briar are painfully conscious of, which is to break some of the restrictions surrounding the gifts that founded and sustained the college. But it’s been done before: Sweet Briar found a way to get loose of the initial requirement that its students be white. Even if Sweet Briar were to remain a college for women, it could have a dynamic admissions strategy that sought out students from outside the United States, and non-traditional students inside the U.S. (which might then influence the curricular redesign in #1).

Job #3 is to look at the financial picture after #1 and #2 and see what else the institution can do more cheaply or not at all. People who imagine that there’s a lot of waste in a budget, any budget, are almost always wrong. But there might be administrative operations that a small college with a newly envisioned mission doesn’t need to pursue. And stop hiring consultants: that would be another purpose for this summit, to build a “pro bono” network of peer experts who can pitch in until the college is stabilized. The summit could look with fresh eyes at the day-to-day operations of the college and see what makes sense and what doesn’t make sense going forward.

Job #4 is a capital campaign that follows straight off of #SaveSweetBriar. Use the redesigned, reimagined curriculum as a selling point to bring in new supporters, as well as to tap the obviously considerable goodwill of Sweet Briar’s established donor base. I think a summit could at least help lay the groundwork for such a campaign.

This is obviously ambitious for a weekend, especially if it’s a meeting convened on short notice. But I don’t think it’s completely implausible.

If this ends up being a post-mortem instead, then the review of the issues involved could be broader, but I still think it might follow the same rough contours: curricular design, admissions practices, donor practices, fiscal restraint (that avoids being austerity). All of it aimed at asking: how can liberal-arts colleges avoid making the same mistakes? What do we have to do in order to secure our collective future?

Posted in Academia, Swarthmore | 7 Comments

#SaveSweetBriar

The more I read about the decision to shut down Sweet Briar College, the less sense it makes to me.

Essentially, when I look at Sweet Briar, I see the following:

1) A physical plant, a faculty and a staff that are formidable assets.
2) A sizeable endowment.
3) Complex liabilities in terms of conditions of gifts, etc., that might be negotiable with the right legal strategy.

I see also the following things that need to be done:

1) Dramatic change in the curriculum. Sweet Briar has a huge, sprawling curriculum in relationship to the size and character of its student body. Yes, this means shedding staff and faculty, but more importantly, it means coming up with a distinctive idea about what the education at Sweet Briar is about.
2) Wider recruitment. Of international students, maybe of men if the legal strategy can be found, of non-traditional students.
3) Novel strategies for setting tuition. Maybe Sweet Briar could be the first SLAC to be brutally honest about “discounting” in relationship to means-testing.
4) Rapid commitment of new energies behind pedagogical innovation. Suppose you straight-up say, “We’ll take students whose parents have a lot of money to subsidize education and we’ll give them a completely new form of individual attention, something they can’t get from a MOOC or a large impersonal university or even a traditional selective small college. We’ll build singular programs around singular individuals, every single one.” Maybe, for example, every student admitted to Sweet Briar gets a “budget” to spend on commissioning particular courses or instructors. Anything that makes it seem like a place that is not like anywhere else in terms of its pedagogy.

I see assets, I see possibilities, and I see a Board of Trustees and an interim President who gave in precipitously rather than explore those possibilities and assets.

I’d love to see a pro bono project of small liberal-arts college presidents, provosts and faculty who would agree to descend upon Sweet Briar for a weekend of creative thinking, to help their Board and President see the futures they haven’t seen. I’ll pledge my time right now if there’s sufficient interest in such a thing.

Posted in Academia | 8 Comments

Where There’s Smoke

My main problem with Laura Kipnis’ much-discussed essay “Sexual Paranoia” is the excluded middle it outlines. Practitioners of dialectic modes of argument often claim that this approach is necessary in order to locate and recommend that middle. It’s the “Untouchables” theory of rhetorical struggle: they put one of yours in the hospital, you put one of theirs in the morgue! Until it’s all over and everybody gets to live in peace and drink because Prohibition was repealed, or something like that.

I think Kipnis is right that building rules and formalisms that encode a particular kind of person who depends upon institutions and governments to protect them from harm is a mistake in a great many ways. I think she’s wrong in imagining that the alternative is an empowered human subject who makes decisions about sex, erotics and love within the alternative formalism we’ve chosen to call “consent”, a sort of contractual relation between autonomous self-owning individuals. In the new rules, we forbid relationships that we definitionally hold to be non-consensual because of how we describe power as a function of formal institutional roles. In the old rules that Kipnis extols, we sort every erotic and sexual relationship into consent and non-consent and apply an if-then assessment. If non-consent, criminal; if consent, allowable.

The excluded middle here is the messiness of being human, which Kipnis says she prizes (and her powerful, important scholarship throughout her career backs that statement up). But that messiness has to include the possibility that acts, feelings and relations which satisfy even the new rules as being “affirmatively consensual” could nevertheless be profoundly objectionable in those same messy, human terms. And some of them are sufficiently objectionable that they would not just be a “you say tomayto, I say tomahto” kind of matter for individuals to sort out on their own, but something that institutions might in totally human and subjective terms decide to act upon. Kipnis is against the new rules, but in many ways implicitly is defending the old rules (which are just as much rules): that you might suffer the contempt of friends and colleagues, but you should never fear the discipline of institutions. I think the most human thing would be for institutions to act as humanely as we dream of individuals acting: as judicious, wise, complex, sensitive but also strong, decisive and resolute where need be. To act not just because they must (the lawyers say!) or to not act because they mustn’t (the lawyers say again!).

Kipnis doesn’t name him, but the case of Peter Ludlow at Northwestern is clearly on her mind. In the excluded middle, why not just say what clearly should be said? That he should not have done what he himself admits that he did, and that the wrongness of what he did doesn’t depend on the particulars of consent? That an ideology that maintains that we own ourselves, that we can give consent or refuse it as autonomous individuals, is also an ideology that should allow that we can and should own ourselves sufficiently to keep our zippers zipped in many circumstances? If we’re to hold on to liberal autonomy, let’s hold on to most of it. The worst of all worlds would be to hold on to consent as a liberal form of contract but to dispense with its associated aspiration for self-control and self-mastery. The specter of a self that can consent but cannot be expected to act differently across different social and professional worlds, a self whose desire spills over the walls because it is a dark romantic kernel inside the rational contracting shell, is a familiar ghost, but we shouldn’t welcome its recurrent haunting.

The case that makes this point most clearly for me is that of the Yale moral philosopher described by a graduate student who had an affair with him. The details are depressingly familiar, as the author herself recognizes as the essay wears on: an older man who lies proficiently about his marital status, about his sex life, about his intentions. Who turns out to tell the same lies to many women. If that were all there was to it, then that alone is worth writing about, worth sharing, worth accusing. Why not? Why should serial deceit be rigorously private and protected? Surely real individual freedom, especially in matters of sex, love and desire, should include the freedom to share our stories–and our warnings. But also in this case, and in all cases of relationships between people, power matters. Because it turns out that the Yale moral philosopher isn’t just a serial liar and intellectual hypocrite, but very possibly is also in breach of the old rules of consent that Kipnis agrees are still vitally important to maintain and enforce. She says of those rules that the real harassers should suffer all that is coming to them, but we should hardly wait to see a fire break out every time there’s smoke in the air. In all our institutions in modern life, the air is thick with smoke. The lies that old men tell, the advice that fraternity brothers give about drunk women at parties, and so on: our lives are often like the former mining town of Centralia, Pennsylvania, where coal seams burn underground unchecked, the fire of harassment and assault always underneath. Kipnis invokes Andrea Dworkin as if to laugh at where we’ve arrived, making mainstream institutional systems of discipline and punishment that affirm her view of all heterosexuality as contaminated by power. Kipnis is right to reject the essential gloominess of Dworkin’s view of so many human relationships as fundamentally contaminated and irredeemable, but Dworkin’s description of power being everywhere in sexuality (and otherwise) is fairly on the mark.

So why not a Yale University which in human and humane terms says to that moral philosopher: we don’t approve of what you’re doing with your reputation as a scholar and teacher, of what you’re doing as a human being, even if you’ve been careful enough to follow some writ, to discipline your desire just enough so as not to hurt and lie to a person who is at this moment your student, to follow the rules just enough. We don’t approve in general of how you use your influence and your power, we don’t think very much of a moral philosophy that applies so very little to your own conduct. And so: go somewhere else? When did a few books full of moral philosophy and a bunch of lectures become so valuable that they earned someone a lifelong place no matter whom they’ve hurt or how they act? Why not imagine institutions that could be just wise enough, just knowing enough, that they might act in human terms, just as we expect from our wise and knowing friends and acquaintances? (Even, perhaps, from our wise enemies.) Why not imagine institutions less as stern sovereigns, or as machines that protect us from both messy desire and weary wisdom? Why not imagine communities–including communities of work–as legitimately collapsing public and private together, as being just as messy as individuals are in how they reward and forbid, act and fail to act? If we want the notion of individuals consenting–and individuals being responsible for their consent–then perhaps we should add to that another shopworn idea, that with great (or even modest) power comes great (or even modest) responsibility.

A defense of the necessary, even desirable, messiness of human life is not about painting a huge unknown “grey area” and saying that everything within it is nobody’s business but the people in the grey. It’s not saying that what happens in Vegas stays in Vegas. It ought to be the opposite: a brutally honest commitment to humanistic empiricism, to the vivisection of the human heart, to the unflinching witnessing of what we do, what we are, what we feel. And if we see, when we see, lies and pain and suffering, we shouldn’t rush to call it desire and pleasure and freedom.

Posted in Academia, Politics | 5 Comments

Practice What We Preach?

I’ve been reworking an essay on the concept of “liberal arts” this week. One of the major issues I’m trying to think about is the relatively weak match between what many liberal arts faculty say about the lifelong advantages of the liberal arts and our own ability to model those advantages. In quite a few ways, it seems to me that many academics do not demonstrate in their own practices and behavior the virtues and abilities that we claim follow on a well-constructed liberal arts education. That is not necessarily a sign that those virtues and abilities do not exist. One of the oldest known oddities surrounding teaching is that a teacher can guide a student to achievements that the teacher cannot himself or herself achieve. Good musicians can train great musicians, decent artists can train masterful ones, and so on. Nevertheless, it feels uncomfortable that we commonly defend liberal arts learning as producing competencies and capacities that we do not ourselves exhibit or even, in some cases, seem to value. The decent musician who is training a virtuoso performer would nevertheless like to play as well as their pupil if they only could, and tries to do so when possible.

Let me give four examples of capacities or skills that I have seen many faculty at many institutions extol as good outcomes of a liberal arts education.

First, perhaps most commonly, we often claim that a liberal arts graduate will be intellectually adaptable, will be ready to face new challenges and new situations by learning new subjects, approaches and methods on an as-needed or as-wanted basis.

Second, many of us would argue that a well-trained writer, speaker and thinker should be able to proficiently and persuasively argue multiple sides of the same issue.

Third, faculty often claim that a liberal arts graduate will be able to put their own expertise and interests in wider perspective, to see context, to step outside of the immediate situation.

Fourth, many liberal-arts curricula require that students be systematically engaged in pursuing breadth of knowledge as well as depth, via distribution requirements or other general-education structures.

So, do most faculty in most colleges and universities model those four capacities in their own work and lives? My impressionistic answer would be, “Not nearly enough”.

Are we adaptable? Do we regularly tackle new subjects or approaches, respond well to changing circumstances? Within narrowly circumscribed disciplinary environments, yes. Most active scientific researchers have to deal with a constantly changing field, and most scholars will tackle a very new kind of problem or a new setting at some point in their intellectual lives. However, many of us insist that learning new subjects, approaches and methods is an unforgiving, major endeavor that requires extensive time and financial support to pursue outside of the ordinary processes of our professional lives. That’s not the kind of adaptability we promise our graduates. We’re telling them that they’ll be better prepared to cope with wrenching changes in the world, with old lines of work disappearing and new ones appearing, with seeing fundamentally new opportunities and accepting new ways of being in community with others. And I really believe that this is a fair promise, but perhaps only because the major alternative so far has been narrowly vocational, narrowly pre-professional training, which very clearly doesn’t prepare students for change at all. We win out by default. If students and parents increasingly doubt our promise, it might be in some measure because we ourselves exemplify it so poorly. Tenured faculty at research universities keep training graduate students the same way for professorial work even as the market for academic labor is gutted, for example, and largely leave those students to find out for themselves what the situation is really like.

Most of us show little aptitude or zest for arguing multiple sides of an issue in our own advocacy within our communities, and only a bit more in our work as scholars. Ad arguendo is a dirty phrase in most of the social media streams I read: I find it rarer and rarer to see academics experimenting with multiple branches of the same foundational line of thought, or exploring multiple foundations, either for the sheer pleasure of it or for the strengthening of their own most heartfelt case. Indeed, I see especially among some humanists a kind of anti-intellectual exasperation with such activity, as something one does reluctantly to manage social networks and maintain affective ties rather than as a demonstration of a deeply important capacity. The same goes for putting ourselves in some kind of larger perspective, for understanding our concerns as neither transcendently important nor as woefully trivial. We promise to show our students how to make connections, see their place in the world, choose meaningfully, and then do little to strengthen our own capacities for the same.

Do we have our own “distribution requirements”? At the vast majority of academic institutions, not at all. Is there any reward at all for learning about other fields, for learning to understand the virtues and uses of disciplines other than one’s own, for generalism? Any imperative to do so? No, and in fact, many faculty will tell you that this isn’t possible given the intensive demands on their time and attention within their own fields of study and their own teaching labor. But if it’s not possible for us, how is it possible for our students? Most liberal-arts faculty teach in institutions that maintain as one of their central structural principles that it is readily possible for a student to move from advanced mathematics to advanced history to studio art to the sociology of elementary education in a single week and to do well in all of those subjects. If we think that is only possible for one brief pupating moment until a final irreversible choice is made, we ought to say so, and thus indemnify ourselves against the demands we make of our students. That would sit uncomfortably, however, alongside all the grand claims we make about learning how to think, about the idea that a major isn’t a final choice, about all the things you can do with a liberal arts education.

———

Liberal arts faculty have got to demonstrate, much more effusively and systematically, in our own lives and practices what we say are the virtues of a liberal arts education. Or we have to offer a trickier narrative about those virtues, one that explains how it is that we can teach what we cannot ourselves do. Which might also raise another question: are we actually the best people to be doing that teaching?

Posted in Academia, Defining "Liberal Arts", Swarthmore | 6 Comments

The People Perish

The trouble with Hillary Clinton’s email is not Hillary Clinton’s email.

The trouble is that the Democratic Party is apparently committed beyond recall to nominating an individual to be President whose entire strategic vision is:

a) I’m owed. It’s my turn.
b) Remember how good it felt to break a barrier to aspiration in 2008? You can feel that way again.
c) Something something demographics.

Particularly c). As long as we’re remembering 2008, remember all that absolute horseshit that progressives were unloading about how the demographics were against the Republican Party, how it was just a bunch of old white people, about the ascendancy of a new American majority? You don’t even need to have a platform, or a vision, or an ideology! It’s destiny!

You can look long and hard for other signs of a Democratic idea or vision and not find them. At best, what you’ll see is the same bland technocratic defense of competency that the party has offered since Mondale’s defeat in 1984. We’re not crazy, our guys went to good schools, we make good policy, look at this nice range of legislation we drafted. But at best the Obama Administration is a hodgepodge of good and bad even on technocratic grounds. Eric Holder’s Justice Department lays out the facts on Ferguson? Great, if reactive, but I’ll see that and raise you Arne Duncan’s destructive Education Department, which could just as easily have been Bush’s Education Department.

On vision, though? It’s nowhere. Competency without conviction is not enough. The Republican Party base has a ton of conviction and it is sufficient to produce the outcomes they want whether or not they are actually in power, because they can speak clearly and consistently about what they’re looking for in every single issue they encounter, indeed, on issues they have yet to encounter. Put that up against competency without vision, and it will push the technocrat towards accommodating the only strong, coherent, aligned voices speaking on a particular issue.

The idea that Clinton is inevitable is possibly the most depressing prospect in mainstream electoral politics that I’ve seen in my lifetime. The best I could hope for at this point is that she’s the Millard Fillmore of her party, the last of a kind and a confirmation of the necessity to break up the Democrats as they are and build something new in their place.

Posted in Politics | 4 Comments

The Trouble With Sustainability II: A Dynamic Steady-State?

Have human beings ever built organizations that can sustain projects over very long time spans?

Yes. Cathedral-building is a classic example. The joint-stock company, at least in its earliest iterations, is another example that many would cite. These are organizations designed either to complete work that can’t be finished in a lifetime or to focus an institution on the longer term and protect it from short-term calculations. Arguably, Westphalian state sovereignty was constructed to protect governments and rulers from destabilizing forms of short-term calculation and contingency, to standardize claims to territory and authority.

I think it would be fair to say that if there have been such organizational structures, there are very few of them surviving in the present. Modernity in this moment is massively short-term in almost every respect. Hence organizations like the Long Now Foundation that are trying to think about what such structures in the 21st Century might require.

Sustainability is surely a project that requires a custodial approach that extends over centuries rather than single election cycles. It simply won’t happen if it’s left as a matter of discrete decisions, particular policies, or even adoptions of new habits by individuals and institutions.

What we will eventually need is organizations–and maybe societies–that do not require growth. What that idea meant in the 20th Century, more or less, was the creation of some form of authority that would manage economic and social systems so as to prohibit growth, e.g., some form of state socialism or at the least state management. I’m not going to belabor the point too much, but I think it’s plain that this is not going to cut it. Not just because states are themselves just as potentially dangerous a concentration of power as capital and just as prone to chase accumulation on behalf of their elites, but because controlling regimes out to prohibit growth will always confuse growth and change.

Getting to human systems that exhibit internal dynamism while showing little to no net change in their intake of resources, without austerity, impoverishment or stasis, requires new structures for which there are very few meaningful analogies. It takes understanding systems design, in particular how or whether you can design for emergent complexity.

Many sustainability advocates embrace biophilic designs in thinking about production, consumption and waste. So biophilia is a good place to start: are there natural systems that achieve dynamic equilibria, or that are self-maintaining steady states in some other fashion? (Some economists would insist that this is exactly what capitalist growth is, but they make that claim work by placing finite material resources, population growth, etc., outside of that system, which is precisely the problem that humanity now faces.)

There are some good examples of homeostasis in biological organisms and in ecological systems, arguably scaling up to the entire planet. Homeostatic systems aren’t necessarily good or desirable in and of themselves–in biological and ecological contexts, they operate within some larger fitness landscape that is not stable over time, rather than with complete autonomy.

But homeostatic systems–or, more generally, systems with negative feedback loops–are a fairly good place to start thinking in design terms, because most such systems do not require a control apparatus or central authority: they can maintain equilibria without a command hierarchy. Moreover, they can do so while maintaining internal diversity and heterogeneity, e.g., homeostatic systems can have many different parts or agents that operate simultaneously and independently of one another.

The dystopic fear about no-growth, steady-state futures for humanity generally involves the proposition that they would necessarily entail both command hierarchies and enforced homogeneity. So there are at least some natural systems that demonstrate that this isn’t necessary: you can get a system that maintains itself without an ever-expanding use of inputs, that doesn’t require command structures, and that doesn’t eliminate internal heterogeneity.
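To make that concrete for myself, here’s a toy sketch in code (purely illustrative, with every name and number invented for the purpose) of ten heterogeneous units, each running its own local negative feedback against a shared aggregate signal. No central allocator ever issues a command, the units stay different from one another, and yet the total settles near a steady state.

```python
import random

# Toy model (illustrative only): each unit applies its own negative feedback
# against the aggregate signal it can observe. There is no central controller.

SETPOINT = 100.0  # aggregate resource level the system settles near
GAIN = 0.1        # strength of each unit's corrective response

class Unit:
    def __init__(self, share):
        self.share = share  # units start at different sizes: heterogeneity

    def step(self, total):
        # The unit sees only the aggregate, not the other units, and
        # corrects its own consumption in proportion to its current size.
        error = total - SETPOINT
        self.share -= GAIN * error * (self.share / total)
        self.share += random.uniform(-0.5, 0.5)  # internal dynamism (noise)

units = [Unit(random.uniform(5.0, 20.0)) for _ in range(10)]
for _ in range(200):
    total = sum(u.share for u in units)
    for u in units:
        u.step(total)

print(f"total after 200 steps: {sum(u.share for u in units):.1f} "
      f"(setpoint {SETPOINT})")
print("shares stay heterogeneous:", [round(u.share, 1) for u in units])
```

The point of the toy isn’t the numbers, it’s the structure: the stability emerges from the feedback loop itself, with no supervising part, while the units remain distinct and keep changing.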

But what does that look like in human terms, either for individuals or institutions? Right now most human institutions, including Swarthmore, maintain systems within which virtually every individual and unit assumes that growth in their domains of primary interest is their normal expectation, that dynamism is only possible with the addition of new resources: more funding, more people, more dedicated infrastructure. Pressure against growth is usually exerted from above: responsibility for the total budget, for the overall institutional use of resources, is vested in a command hierarchy, and that command hierarchy is also charged with considering the “fitness landscape” within which the institution operates.

That structure is what produces cycles of growth followed by austerity rather than some form of steady homeostatic dynamism. Individuals and units work the internal landscape of the organization to capture a greater share of resources in order to demonstrate their own dynamism and earn rewards for it. The command structure of the organization desperately seeks more resources so that this internal process doesn’t turn into a zero-sum game. If they hit a firm resource limit, the process of internal competition doesn’t stop, but instead sharpens. If the available resources actually shrink, the competition gets even more intense as the command structure increasingly impoverishes parts of its own structure in order to feed the internal winners.

The parts or units of a homeostatic system in most natural examples are not competing with each other: there is an “inside” that works together to enable the whole to operate on a larger fitness landscape.

What would organizations, whether universities or hospitals or bureaucracies or corporations, look like if they were at a relatively steady-state but internally dynamic? E.g., where allocations of resources shifted somewhat as needed but also where there was change and innovation that didn’t necessarily require resources, that was in some sense energetically neutral? That’s possible, after all. It’s almost the ideal embodied in Marx’s famous “hunt in the morning, criticize after dinner” quotation: a fixed allocation to the individual, but individuals freed to generalize their use of that allocation according to their desires and needs. Or if you like, it’s the Valve employee manual: once you’re inside the organization, you’re freed from competing with others to secure resources from a central command. The individual worker is the resource, and they allocate it to the projects and concepts to which they wish to contribute.

Those encouraging analogies aside, it’s still hard to see how to get there from here. This is not cathedral-building, even if it operates at the same temporal scale or longer. It’s extremely hard even for people deeply committed to sustainability to give up the notion that innovation, creativity and reform require the allocation of new resources, in substantial measure because it often seems very hard to imagine not doing something else that’s already being done. It’s as if we believe that we must always be hunting, fishing, rearing cattle and criticizing simultaneously morning, noon and night, and any new activity on top of that requires more people or more time or more energy, that we mustn’t ever get to the moment where we agree that perhaps right now, not so much rearing of cattle is needed.

I really can’t see the immediate next steps in the lives of people within institutions. Do we have to think differently first, do we need new structures to work and live within, do we need both at once, or is there just some kind of magical better mousetrap that could step forward as a total alternative, sufficiently different from its very first moment?

Posted in Oh Not Again He's Going to Tell Us It's a Complex System, Politics | 10 Comments

The Trouble With Sustainability I: The Clock of the Short Now

More than a week later, I’m still thinking about our recent “sustainability charrette” at Swarthmore College, led by folks like David Orr, Hunter Lovins, John Fullerton and Nikki Silvestri.

At least one of the things I keep thinking about, however, is an issue that could not possibly be discussed in any short meeting intended to focus attention on the concrete, specific things that a single institution might choose to do in order to pursue sustainability.

Even the speakers agreed that it’s not entirely clear what “sustainability” is, and David Orr soberingly pointed out that you could potentially achieve sustainability and yet fail to build a humane, just society (what he called “solar fascism”). I would go a step further, however, and point out that most existing attempts to move towards sustainability radically underestimate just how unprecedented that move will be for human subjectivity and personhood if we manage to achieve it.

I think that’s important, because if you underestimate how different any sustainable future is (fascist or free), you likely will not really understand how to make meaningful steps in that direction right now.

There are almost no examples in human history of a generation of people voluntarily giving up what they already have, or forgoing what they could plausibly have, in deference to what people yet to be born will need.

Yes, individuals sacrifice for their children or grandchildren. At least some of the time, they’re not giving up what they could have, however: they’re just in a situation where the only possibility of social mobility is multigenerational. At least some of what people give up in their lives for family is self-interested in some fashion if you look at it closely, given in expectation of reciprocal care later on, or as part of a kin-based social structure that delivers general benefits to all contributing members.

Yes, individuals sacrifice their lives or well-being for the greater good. But even democratic societies have often made such sacrifices at least partially or wholly compulsory at some level. Where they are not compulsory, such sacrifices are usually compensated, even if the compensation is hardly commensurate with the health or life of the person giving up one or both. The referent of the sacrifice is often contemporary and concrete rather than abstractly futureward. Soldiers in WWII might have fought for democracy or country, but their sense of what those entailed was largely rooted in their own experience.

Yes, individuals sacrifice much of what they have, either power or resources, in favor of transforming their own societies into more humane, just or sustainable societies. Either out of altruism or self-interest, or perhaps both.

Sustainability requires equally concrete sacrifices from wealthy people and wealthy nations that can only barely be related to contemporary losses or circumstances–which is one reason that sustainability advocates have to rely as much as they do on endangered polar bears, hurricanes and visible droughts, even though the emergent consequences of climate change at and beyond +2 C are likely to be systemically new forms of material and biological life, and the sufferings of that future humanity are therefore almost as radically difficult for us to conceive as it would be for twelfth-century peasants growing flax to imagine doing something differently in their lives in order to ease the circumstances of suicidal Foxconn workers in 21st Century eastern China.

The problem is not just one of imagination, since in fact human beings are reasonably good at envisioning things which don’t yet exist and even at letting those visions motivate them to act in wholly new ways. It also requires a fundamental moral logic that sustainability advocates usually have to simply assume rather than argue.

If I’m in a room full of religious people who believe in life after death, and that the life to come will be dictated by decisions we made in this life, I don’t have to convince them that they ought to act righteously in order to secure the afterlife they desire. (Ought to isn’t the same as actually acting, but that’s a different problem.)

But in a room full of otherwise secular people? What’s my reward for foregoing something now in order to benefit people who are not even born yet, people I will never know? Why shouldn’t I live for my own satisfactions right now? I am going to be dead a long time. If my great-great-great grandchildren are gathering algae from the soupy, fungus-infested marsh that used to be the foothills of the Appalachians and telling tall tales about how there used to be animals besides rats and cockroaches hereabouts, what’s that to me?

Please don’t give me the “pay it forward, people in the past were looking out for you” line. That’s not going to persuade anyone at a deep level; it’s a sentimental logic fit largely for Hallmark cards. My parents were looking out for me. My grandparents were looking out for me. The teachers in my life were looking out for me. My great-great-great grandparents? They never imagined me, nor did they do anything in their lives in anticipation of me. How could they have, even if they were fine people? (I frankly don’t know anything about them as individuals, so the veil of ignorance runs in both directions.) My circumstances today are as unimaginable to them as the future after climate change (or even after successfully averting its worst scenarios) is to me.

Even when I wish all of those who came before me had done something radically better than what they did–never allowing the Atlantic slave trade to flourish, for example–I can scarcely imagine as a historian what the circumstances of that collective counterfactual plausibly could have been. Any change like that would have required not just forgoing self-interest but also a radically different understanding, on a much bigger scale of time and space, of what the iterative consequences of small, simultaneous actions could be. My paternal great-great-grandfather, for example, would have had to think differently, before leaving Ireland, about a concept of whiteness that he had yet to experience, and would have had to do something on arrival other than just head to Iowa and try to farm, but all the “somethings” are things that he likely couldn’t even have imagined until well after the point at which they could have been done.

Almost every analogy we make to argue for the urgency of the cause of sustainability is to campaigns for moral and social transformation that arrived in the disastrous aftermath of oppressive, destructive systems, not in anticipation of them.

The people who made enduring things which I rely upon in my daily existence today did not have me in mind, did not make those things for me or give up something so that I could have them. They made those things for their own benefit and purposes. Or they were forced to make those things for the benefit of others. That those enduring things are still here for me to use is almost an epiphenomenal side effect of the benefits they bestowed upon their makers, or the suffering they caused. Build a building for your own purposes? It’ll be around for someone else to buy and use. Create a Constitution to govern your society? It will go on doing that work long after you’re gone, but you’re not giving up something that deeply benefits you so that everyone will be far better off in a distant future. You’re solving problems you have right now, reducing risks and liabilities in your own situation.

You could argue that the Quakers who founded Swarthmore College were looking out for me. Only they weren’t: the college they founded absolutely did not have me or its current students or its current society meaningfully in mind. The only thing they gave us was an institutional framework that could be redesigned and repurposed going forward. In that sense, when you build something for yourself, with your own resources, there is no reason to be deliberately spiteful and make it fall apart or break the moment you’re done with it, to “take it with you”. But that’s a far cry from consciously giving up something you have or could have in favor of people who don’t even exist.

That might be where an argument in favor of doing just that could begin, however. Somewhere along the way to sustainability, 21st-century human beings are going to have to accept a radically new kind of material culture, with new prospects and processes. That doesn’t have to mean impoverishment (or authoritarianism), but we will need to accept a moral view of the future that simply hasn’t existed in the past, one with no meaningful analogues or precedents. However, if we demand that everyone feel that way now, all at once–that the necessary prerequisite of sustainability is a boundless kind of altruism combined with a very different temporal imagination–it’s not going to happen.

A good analogy might be rights-based individualism. It didn’t exist at some point in the past, but it eventually became a very deep and fundamental part of how most of us experience being human; it became integral to our subjectivity and consciousness. It wasn’t a straightforwardly instrumental change, and many of the moments and movements and arguments that moved human beings towards feeling as if they were individuals with their own bodies and distinctive minds, individuals with rights, were contradictory, fragmentary, and incomplete.

At a time when even the few human institutions that did have longer time horizons are crumbling under the pressure of short-term calculation, expecting a fundamental epistemic transformation of selfhood, agency and perspective to happen like an epiphany on the road to Damascus may feel as if it is a requirement. But that is in some sense as materially impossible as demanding that we invent a technology to sequester all industrial emissions from the atmosphere in five years, or refreeze the Arctic tomorrow.

Even if you steer clear of the new paradigms of cognitive science, you have to recognize that consciousness has its own long horizons. Embracing, or at least accepting, a different material existence now on behalf of a humanity we will never meet or know is something that we can only learn to do in small and halting ways, at least to start.

Posted in Oh Not Again He's Going to Tell Us It's a Complex System, Politics | 9 Comments

Wary About Wisdom

Cathy Davidson has been steadily working away at the problem of inequality within higher education and at how higher education contributes to inequality.

I admire the intensity of her focus and her willingness to consider radical rethinking of institutions of higher learning. However, I think she’s up against a much harder problem than even she credits in her latest arguments for the liberal arts as “a start-up curriculum for resilient, responsible, ethical, committed global citizens.”

Davidson has argued for a long time, in concert with many other reformers in education, for abandoning the industrial infrastructure of modern educational institutions–the idea of taking standard inputs (matriculating students) and producing standard outputs (graduates) through a series of industrially organized allocations of time and labor. Put students in a room at a set time, have them do a standardized type of work or dump a standard unit of information on them, send them away at a set time, test and measure, do quality assessment (aka grading), throw away the substandard. Repeat.

Instead, she often counters, we should be contributing to human flourishing. Education should happen for every student seeking it at that student’s own time and pace. For one person, competency and mastery might bloom in an hour, for another in a week, for another in a month: let the institution match its pace to that. Don’t chop up knowledge into manageable reductions, skills into atomized pieces. Don’t suppress what students are really thinking through because there isn’t time to listen, because the assembly line must continue to move along. Don’t turn degrees into Skinner boxes. And so on.

It’s a familiar critique, and I endorse much of it, in part because I can imagine the classrooms and institutions that would follow from these critiques. To me, much of what Davidson asks for can be done, and if done will show a greater and more effective fidelity to what many educators (and the wider society) already regard as the purposes of education, whether that’s the cultivation of humanity or teaching how to add. I have no trouble, in other words, arguing for the wholly conventional value of a substantially reimagined academy in these terms.

However, in any educational project that emphasizes the cultivation of humanity, at least, there is a difficult moment lying in wait. It’s fairly easy to demonstrate that specialized knowledge or skills are not present in people who have not received relevant training or education. When we talk about wisdom or ethics, however, I think it’s equally easy to demonstrate that people who have had no educational experiences at all, or education that did not emphasize wisdom and ethics, nevertheless possess great wisdom or ethical insight.

Arguably, our current educational systems at the very least are neutral in their production of wisdom, ethical insight, emotional intelligence and common sense. (Unless you mean that last in the Gramscian sense.) Davidson might well say at this point, “Exactly! Which is why we need a change.”

I can see what a learner-driven classroom looks like, or how we might rethink failure and assessment. I don’t know that I can see what an education that produces ethics and wisdom looks like, at least not in a way that would make me confident it would produce people consistently wiser and more ethical than anyone without that education.

What I unfortunately can see is that setting out to make someone ethical or wise through directed learning might actually be counterproductive, because doing so requires a prior notion of what an ethical, wise outcome looks like, and thus creates an almost unavoidable temptation to demand performative loyalty to that outcome rather than an inner, intersubjective incorporation of it.

If we thought instead about ethics and wisdom as rising out of experience and time, that might attractively lead back towards the general reform of education around projects, around making and doing. However, if that’s yet another argument for some form of constructivist learning, then beware fixed goals. A classroom built around processes and experiences is a classroom that has to accept dramatically contingent outcomes. If we embrace Davidson’s new definition of the liberal arts, paradoxically, we have to embrace that one of its outcomes might be citizens whose ethics and wisdom are nothing like what we imagined those words contained before we began our teaching. We might also find it’s one thing to live up to an expectation of knowledgeability and another altogether to live up to an expectation of wisdom.

Posted in Academia, Defining "Liberal Arts" | 6 Comments