Nobody Expects the Black Swans?

I liked The Black Swan better than John Holbo and much of the Crooked Timber commentariat did. But I completely agree with a lot of the criticisms. The book itself is not a good read because Taleb gets so caught up in invective against a world that has scorned him: at times it reads like Victor von Doom’s critique of Reed Richards and Tony Stark rather than an exploration of prediction and probability. It’s also perfectly right to say that the economic events of this last year were neither unpredictable nor unpredicted: there were quite a few analysts out there who called some aspect of the current crisis well in advance of its unfolding. If no one cared to listen, that’s an institutional and political problem, not a problem with knowledge itself.

If I find the book useful, it’s probably because I just discard all of that filler and distraction and use the book to affirm some working inclinations that I have already developed. I do think that Taleb is right that there are sudden or significant shifts in history which are in some sense both unpredictable and, in retrospect, perfectly comprehensible.

My current simple take on complex-systems theory, emergence and related ideas is that this body of thought is applicable to some but not all historical transformations. I think there are some moments in human history where the simultaneous actions of many independent agents have created a relatively sudden and unexpected “phase change” which has altered the social, economic and political environment in fundamental ways which none of those agents planned or anticipated, where one system or structure of human life gives way to a significantly new system.

This is what I end up thinking of as “black swans”: moments where some novel structure of feeling or being crystallizes from a great many independent forces and actions and surprises everyone involved. I think it’s very easy to overstate how much historical change is best described in this way. What is important when this description applies, however, is to resist the usual impulse of social science to isolate determining causes, to find the real or underlying reason why change happens. This kind of change is not incomprehensible or inexplicable, but it equally can’t be picked apart into discrete causes which can be given differing weights. If we’re talking about a “black swan” in the past, we have to find a way to appreciate the total process of change, to hold as much as we can in view at one time, because it is the simultaneity of interactions between many agents and actions that produces these unexpected changes.

Change the perspective to predictive rather than retrospective analysis. What I take from my understanding of “black swans” is that this kind of transformation limits the potential horizons of predictive social science and of policy built on that kind of social science. Inasmuch as there are ever black swans of the kind that Taleb describes, you can’t build a better mousetrap and plan for them. We might learn to adapt to them, to live with them, but not master them. This is fine with me, but I think this proposition sits pretty ill with a lot of scholars: being told that you can only go so far with knowledge, that there isn’t any tool or method or research that will get you past this problem. This is where I stop to let a kind of Romantic, Counter-Enlightenment attitude get on board my personal train, and it’s in that spirit that I read Taleb’s book.

—————

Holbo takes on another aspect of Taleb’s argument, which concerns the evolutionary psychology of prediction and probability. Again, I completely agree that the argument as it is made in the book is pretty crappy.

Moreover, I’m no fan of evolutionary psychology in general. But there is a kind of salvage job of Taleb’s points that would work pretty well for me. What I understand him to be saying is that in our current world, black swans happen more often than they once did, due to the density and size of the modern global system. Human beings, in his view, are cognitively adapted to an environment where black swans happen far less often.

On one level, I think that’s more or less on the money. As a species, we’re inclined toward identifying patterns and then making meaning from those patterns. We see coincidences as a hidden map of secret intentions, and try to make probability fall into regular beats and rhythms. When we predict, we usually do it by extrapolation, by guessing that the future will be like the changes we’ve recently lived through, only more so. So in this sense, we’re never going to be good at anticipating highly improbable or unexpected transformations. There’s always going to be a disconnect between our emotional inclinations about probability and our scientific understanding of it.

On the other hand, Holbo and CT commenters are completely right that in the past, black swans happened all the time at the microhistorical level of individual human lives. Maybe premodern societies were less prone to black swan events at the macrolevel, but in any past society, ordinary human life was constantly shadowed by unpredictable change, by moments where many forces and actions converged to produce a new way of being or living or thinking for one person or for a small group of people. In this sense, black swans aren’t a unique affliction of modernity, nor can it possibly be true that we’re seriously maladapted to them. Maybe we’re not inclined to expect such transformations, but at the same time, we’re often able to roll with the punches when they happen. In the aftermath of both microhistorical and macrohistorical black swans, the really amazing thing is the extent to which an abruptly strange and alien past is quickly forgotten or reimagined to conform to the newly normal.

Posted in Books, Miscellany, Production of History | 4 Comments

Why Can’t You?

I had a fun conversation with a student this week who had a number of challenging questions to pose to me. The question I’m still knocking around: if academic cultural critics understand expressive culture so expertly, why can’t they create it? Wouldn’t it be better to always have experience in creating the cultural forms that you study?

I noted that this is an old and familiar (if legitimate) challenge. It popped up recently in Ratatouille, for example, but this is an old battle littered with bon mots and bitter denunciations. Thinking about it during the conversation, I tried to map out the range of existing answers that scholars and critics have offered at various times. Here’s a quick, off-the-cuff sketch of the variant kinds of responses to this issue.

1) Yes, it is better to have creative experience if you aim to critique or teach about expressive culture. So there are some who would say that a cultural critic should have at least tried to create the form that they’re primarily interested in. Some branches of academic criticism have a greater number of practitioners with that kind of experience in the form.

2) Scholarly criticism is just a refinement or deepening of the critical response of audiences in general. Since audiences form stable, long-running views and understandings of expressive culture (which most creators acknowledge are important, since they depend on audiences or actively want to affect or please them), scholarly criticism derives legitimacy from the distinction between audience and creator, representing the response of the audience.

3) Consumption of culture and creation of culture are interdependent activities, but they are also strongly distinct from one another in both their form and their function. Criticism has its own norms, integrity, theory, character and aesthetics. I’d say this is the most common, orthodox opinion among scholarly literary critics over the long haul.

4) Cultural creators do not have a transparent or expert knowledge of even their own creations, let alone the creative or expressive work of others; experience by itself does not create communicable knowledge that can be shared with others. That takes scholarly study, which can create knowledge which the producer of expressive culture may not have, even about their own work. Variant form: just because you’re experienced at making something, you may not be capable of teaching others how to do it or explaining what other creators are doing. Taken to its extreme, this view suggests that criticism is the higher-order activity, more intellectually demanding. The “intentional fallacy” and “the death of the author” live somewhere within this precinct as well.

5) Academic or expert critics study less the content or form of expressive culture and more its sociology, the economics of its publication or performance, its history and psychology. I.e., this proposes a straightforward division of labor (the creator knows about the work itself; the critic knows about all the external conditions which govern the work).

6) Creating culture is an independent activity of high worth; criticism a dependent one of less worth. Criticism is separable from creation, but it’s lesser and ought to be humble about its dependency on the first-order activity of creators. Possibly this line of argument also includes a charge to critics that they be respectful about the difficulties of creating culture. Unsurprisingly popular with some authors and cultural producers…

7) It’s important to have cultural criticism, and a critic should know something about doing work in a given medium, but some media do not allow for individual acts of creation due to technological or financial barriers.

More?

Posted in Academia | 17 Comments

The Road to Utopia

Mark Taylor’s op-ed has not exactly lit a fire under America’s academics. A lot of the criticism I’ve seen has been pretty legitimate. I’m still enamored of the idea of trying to make departments more supple and flexible as administrative units (Michael Berube also highlighted the department/discipline distinction at Crooked Timber).

Let me pick up on one theme that came out in a number of critiques, because I think it’s a valid rebuke of a lot of laments about the academy, including some of my own. Namely: how do you get from here to there, wherever there might be? Can you model or demonstrate some prototype or trial version of the reforms you’re arguing for?

This is one thing that has driven me absolutely wild about a lot of the conservative Mark-Bauerlein-style attacks on academic groupthink: the critics who most incessantly harp on that argument are themselves anything but pluralistic or exploratory in the way they read the work of other scholars, and show little interest in or patience for trying to persuade colleagues in sympathetic terms to change their practices.

If you’ve got a very constrained or particular criticism of academic institutions (say, for example, the role of big-money athletics) it’s plausible that there are policy solutions that could be dictated from outside those institutions, and that the road to those solutions is relatively short and simple (if unlikely to be taken). But if you’ve got a comprehensive antipathy towards the contemporary academy, you either need to talk about building alternative institutions (and how that might be done) or talk concretely about the plausible scenarios for reform. Blue-sky doodling is fine, too, but it calls for a much gentler kind of rhetoric, a more tentative voice.

Getting real about reform should involve:

1) Asking whether any institution has tried some version of what the reformer wants, and an honest assessment of the success or failure of such efforts.

2) At least a modest attempt to talk about the constituencies and interests which might support some version of the reformer’s plans, and under what conditions their ambitions might actually move forward.

3) Some attempt to concretize the new organizational and cultural forms that the reformer prefers. What would life actually be like in the new university? What would the curriculum actually consist of? What would be some of the nitty-gritty operational details?

4) Some commitments about targets, outcomes, and tests of success or failure for the new institutional designs.

Posted in Academia | 8 Comments

Save the Giblets

Historic preservation societies and their campaigns to save particular buildings or landmarks make me a little uncomfortable at times.

There are four basic rationales for historic preservation. Some advocacy groups work with all four, others have a very exclusive preference for only one of these approaches.

1. Preserve buildings or landmarks which are aesthetically distinctive.
2. Preserve buildings or landmarks which are the best representative examples of a type or kind of structure from the era of their construction or main usage.
3. Preserve buildings or landmarks which are important symbols of national or cultural heritage in some distinctive manner.
4. Preserve buildings or landmarks which are peculiarly associated with an important discrete historical event or events.

There is an unannounced fifth rationale: to use preservation to block some other kind of development that advocates don’t want or don’t like.

Yesterday, the National Trust for Historic Preservation announced their annual list of the most endangered historical sites. I was a little needled just by the hyperbole attending the announcement and a little more needled by the trendy invocation of preventing climate change as a rationale for preservation by the Trust’s president Richard Moe. (e.g., he argued that destroying the Century Plaza Hotel in Los Angeles wastes the environmental resources that went into constructing it in the first place, and that it would take too long for the new green buildings intended to go on its site to return savings.) The Trust used to glom onto similarly weak arguments about economic revitalization, so this is an old habit.

Break down the list, and you see different combinations of the preservationist arguments in play. There’s an attempt to argue that the Century Plaza Hotel needs preservation for its aesthetics, but honestly, there are a lot of buildings that have something of the same tweaking of modernist brutalism (e.g., the addition of a curve). So the Trust also argues that a lot of important things have happened at the building. They make a similar argument about the hangar where the Enola Gay was kept, that the association with a historic event is sufficient reason to want to preserve the building. Other targets are more conventionally about aesthetic history, like Frank Lloyd Wright’s Unity Temple and Miami Marine Stadium.

Once I get past being vaguely annoyed by the way the Trust frames its goals, what’s the problem? A lot of preservationists are savvy to the deeper problems that attend the whole idea, but many brush past those issues in their zeal. If we preserve everything, where does novelty come from? (Quite a few of the buildings which we now preserve would never have come into being if preservationists had been as active in the past as they are now.) If we preserve only exemplary buildings and landmarks, don’t we end up misunderstanding the past? On the other hand, why should we preserve typical buildings of the past at the expense of serving typical forms of utility in the present? If we adapt older buildings and facilities for reuse and change their interiors, what have we preserved? Such a revitalized building is no longer anything like what it once was. We also make consistently bad guesses from the vantage of the present about what the future will want to be able to tangibly view and see from the past. What seems banal now becomes valuable only when it’s the last of its kind, but the last building standing is often not the best example of its kind, the thing we would have saved if only we understood what we were about to lose. What ends up on the preservationist agenda is often there because of serendipity, not because of a consistent working-out of an idea about value.

Preservation often makes a fetish of the real, the same kind of fetish that operates in the collection of historical artifacts. The actual autograph, the genuine item, the notion of an unmediated connection to the past as it was. When you look down that list of 11 from this year, some of the targets (like the Wendover Airfield hangar) come uncomfortably close to the rhapsodies of a collector who keeps a used pair of Elvis’ underwear in a lucite display case. What history does the hangar reveal by the fact of its continued existence? If mere contact with the Enola Gay is enough to sanctify a relation to history, then there are far more sites of that quotidian kind that we do not presently mark or set aside: factories, transportation networks, and so on. If the point of preserving a site is association with an event, shouldn’t we look instead to the sites which provide the most provocative, difficult and multi-sided lens for understanding that event? Hiroshima or Trinity strike me as well ahead of a dilapidated plane hangar in that line. But there is a sense in the Trust’s request that the tangible reality of the hangar’s connection to a real plane is enough, that it has self-evident value.

Historians know that documents in archives (or material artifacts in collections) never “speak for themselves”, as if mere contact with them were an epiphany. Preservationists don’t always seem to have the same awareness, perhaps because they can’t afford to have a critical, exploratory presentation of their interests in a public sphere that is relentlessly unkind to nuance and ambiguity.

Posted in Production of History | 6 Comments

A Penny Saved Is a Penny You’ll Have During the Rapture

The scene: the supermarket checkout line this afternoon. The woman ahead of me and the clerk are having an animated conversation.

Clerk: “I’ve read the Left Behind books, you know. It makes you think, it really does.”
Woman: “Yes, it’s just like Revelation now.”
Clerk: “Completely.”
Woman: “You know Our Lady of Guadalupe? Well, she’s from Mexico City too. So it makes sense that it would start there.”
Clerk: “Though I thought it wouldn’t be until 2012.”
Woman: “You have to be ready to meet Our Maker anytime. I think this is it, though.”
Clerk: “The Aztec calendar is more accurate than ours, isn’t that true?”

Woman finishes paying, walks away. As I leave the store, she’s looking over her receipt carefully and heading back inside, looking to question something on the bill. Out the door, I glance back: she’s energetically showing the receipt to the manager.

Posted in Miscellany | 3 Comments

Taylor on the University

I can tell my views about the current state of academia are no mystery by the number of people who’ve told me to take a look at Mark C. Taylor’s piece calling for the reorganization of universities in today’s New York Times.

I do indeed like quite a lot of what Taylor has to say. Let me start with the part I dislike the most. I think he way oversells the degree to which some kind of online instruction can let universities share specialists. Frankly, this works against his first two proposals, in that it speaks to preserving a largely conventional understanding of specialization rather than rethinking what “collaboration” might mean in ways that are more native to online communication and media. Taylor has a long record of enthusiasm for online and distance education in forms that I look on skeptically. He made a presentation at Swarthmore some years ago on behalf of a company called Global Education Network, a company that seemed to me to be long on dot-com hucksterism in its pitch and short on real grounded value. What online collaboration can do is erode some of the apparatus that shields conventional forms of academic expertise from wider forms of skeptical review and inhibits the circulation of knowledge. Online collaboration is less about teaching and more about publication and conversation.

What I like most in Taylor’s proposal is his desire to rethink the administrative and intellectual infrastructure of the department. At a recent meeting on budget issues here, I was trying to push this kind of argument, but I think I got misunderstood as making a more conventional call for the outright elimination of some departments. The useful functions of departments, especially at small institutions, are easily distributed to larger administrative units or completely decentralized to individual faculty. Mostly they’re just barriers, both to teaching and to generative conversation.

I also really like Taylor’s call for short-term programs studying connected problems: I think some large research universities have taken this approach by funding deliberately short-lived research institutes or thematic projects. To be fair, though, I see some big practical problems with “programs” as he describes them. In the real world of academia, when you start trying to put together something like this to capture resources from some kind of common pool, you either tend to have it dominated by a Napoleonic figure who has the monomania and charisma to capture resources, or you tend to end up with a vague, wishy-washy general umbrella concept which builds a big coalition but has no distinctive identity. What you’d really need is someone with oversight responsibilities who can recognize a real “working group” after it has formed naturally out of interests and projects which are already ongoing.

Posted in Academia | 18 Comments

She Did It Her Way

By now, you’ve probably seen Susan Boyle’s performance on Britain’s Got Talent. At least a few scholars and critics, bless their skeptical hearts, have argued against accepting the seeming spontaneity of the clip at face value.

I’ve argued in the past for the everyday intelligence of popular audiences, most especially children, and I would do the same in this case. Meaning, while people may react to the narrative framework of reality TV as if it were “real”, I believe they’re also aware at some level of its artifice. Holding a particular piece of reality TV up as authentic is a relative rather than absolute judgment. Viewers may be conscious on some level that the editors of The Amazing Race have chosen to highlight, underscore or compress the evolving story of this season into a clash over the terms and themes of identity politics and the limits of competitiveness (a popular hook with reality shows). But if they respond to that story and to the ways that the players act within it as real or vivid, that’s both because the narrative itself is real to their own social experience (however compressed and edited it may be in the show) and because they appreciate the artfulness of the editorial compression, the craft of the staging.

So Jason Mittell is undoubtedly right that the producers at Britain’s Got Talent had some inkling of what was coming when Boyle stepped to the mike, and possibly the judges as well. Possibly even the audience, who knows. Certainly anyone who came to the clip on YouTube knew from the first moment, given the set-up, that they were not about to see another William Hung “She Bangs” clip.

Mittell asks why the clip is seen as “another triumph of the human spirit”. Here is where I think debating the clip’s authenticity or spontaneity is beside the point. With reality TV, the question is the same as it is with drama, even if the genre framing of narrative is different, even if audiences do imaginative work with what they see in some slightly different ways. The point is not to be surprised that a story is being told, but to ask what story, and why.

The story of Susan Boyle in her clip is, as Mittell notes, something of a revisitation of the earlier performance of Paul Potts on the same program. In one sense, it is a story about performance, audition and the audience themselves (with the judges including themselves expansively in the “we” of the audience) in which the audience are asked to cast themselves as Snidely Whiplash, as the villain. Like all actors playing villains, we like to chew a bit of scenery (hence the eye-rolling and derision captured on the faces of the audience). We know of ourselves that we’ve watched other competitions and laughed at or mocked the ineptitude of early auditions, accepting the ways in which appearance and stereotype are sometimes used by such programs to cue us that a laughable or pathetic spectacle is going to unfold. The Susan Boyle and Paul Potts clips are offered as moral reversal and thus as a rebuke of us as audience and, to a very limited extent, of the programs themselves, though mostly, if anyone’s held to blame, the framing holds us and our desires to blame. This is a kind of debate that always works around and within reality TV: who watches? who determines? where does the authorial responsibility come to rest?

At another level, the Susan Boyle clip is gripping because it is a powerful version of the primal moral fable of modern liberalism. One of the things I still like about Paul Berman’s book Terror and Liberalism, despite the disastrously consequential hubris of much of its argument, was Berman’s observation that one of the weaknesses of liberalism in the 21st century is that its appeal is cold, distant, disembedded from community and passion and everyday experience. Berman’s bad answer to that dilemma was that liberalism would have to become more militant, made “hot” through the violent deployment of power and struggle.

In a way, the Susan Boyle story is a reminder that liberalism actually has heartfelt, emotionally rich stories that are intimately familiar to many people in many societies. Chief among them is the insistence that individuals contain within them talents, character, particularities which are poorly described by stereotypes or collective identities and poorly managed or appreciated by social institutions and conventions. We hear that story constantly from childhood in various clichéd forms, to the point that we’re scarcely aware of how embedded it is in our common sense: never judge a book by its cover, ugly duckling, I am somebody, self-made man, I did it my way. Sometimes we tell it as a story of struggle: the heroic individual seizing or wresting their particular worth away from hostile forces. Sometimes we tell it as a story of epiphany (tragic or comic): how the world or the community comes to realize its failure to appreciate an individual and thus to appreciate individuality itself.

That’s as “warm” a narrative as you could ask for, and it’s certainly one which motivates and underwrites social and political action in the world as strongly as communitarian, collective or religious visions of belonging. Of course it’s also trite and sentimental and in its stupider or uglier forms just as prone to underwriting malicious or extreme action as any illiberal narrative. But I think the positive reaction to Susan Boyle’s performance has as much to do with the continued appreciation for the idea of the heroically unique individual liberated from stereotype and social convention as anything else.

Posted in Popular Culture | 1 Comment

Theatricality

I’ve not had a lot to say on the blog about stories that I have a lot to say about, partly because of an upswing in my periodic feelings of discomfort with the echo-chamber aspect of blogging as well as my irritation with the debasement of public discourse by hacks, pundits and talking heads. It seems completely worthless to say anything about US policy on torture when Karl Rove can go on TV and complain that the release of the memos makes torture ineffective by disclosing the details. As Jon Stewart said, “What, is torture some kind of magic trick that’s ruined when you know the secret?” It almost seems beside the point to be outraged about both the policy and the apologists for it. It’s like complaining that people at a nudist camp don’t have any clothes on.

However, there is one sense in which the legal sanctioning of torture under the Bush Administration, with all of its obscene bureaucratic precision and full-on banality-of-evil legalese, was in fact a magic trick.

The anthropologist Adam Ashforth argued that commissions of inquiry in southern Africa (and by inference, elsewhere) were largely a form of theater designed to affirm the state’s authority over information and knowledge rather than open-ended processes of investigation, that they were elaborate performances. My only caveat to this observation has been to argue that while commissions, blue-ribbon panels, and so on may be theatrical, they are sometimes improvisational rather than scripted, that officials do not always control or anticipate what takes place within such a process.

Ashforth extended this argument in an article by observing that torture is frequently a very similar phenomenon, that it has rarely been about obtaining actual information which the state requires, whether that’s about ticking time bombs or the names of dissidents. Instead, he argued, modern states torture in order to prove that they can torture, as a performance of power over the bodies and lives of people within their territorial sovereignty. For this to work, torture both has to be secret (which amplifies its drama) and yet also a spectacle hazily retold and represented within popular discourse.

Which fits the policies of the last eight years pretty well, and makes you wonder whether some of the people leaking information, memos and photos were doing so with the intent of enhancing the usefulness of torture as performance rather than contesting it. It’s pretty clear that if you’re waterboarding the same person over a hundred times in a month, you’re not looking for urgent information. You’re doing it because you can, to perform vengeance and toughness and resolve through operatic sadism. It’s a very different way of saying, “Yes we can!”. It’s Romper Room 101, not quite yet up to the full rats-on-face horror but well on its way.

My quibble with Ashforth’s argument still holds: this was a performance with many improvisations. Moreover, as with many performances, what the players and directors imagined audiences would think and do and what audiences actually felt about the staging may have been and will continue to be rather different.

The only new thing with the current disclosures is really that every pretense and excuse and hypothetical, every fig-leaf, is gone. No bad apples, just an orchard the size of an entire political class. No “it was just enhanced interrogation, not torture”. No Mark-Bowden-style “These are professionals who know what they’re doing and do it only when they must”: the people who signed the memos and drafted the policies were totally clueless about the actual precedents and roots of the methods they endorsed. No time-bombs to defuse. Just the need to be seen as capable of the same authoritarian brutality as many other states around the world, just keeping up with the Joneses.

Posted in Politics | 6 Comments

Show Me the Money

I’ve been following conversations here and at many other higher education institutions about financial issues. As I said somewhat sheepishly to colleagues yesterday, a lot of what I believed in when times were flush is still what I believe in when times are dire. The same principles about money and priorities apply for me. Two things are especially important to me:

1) I completely accept that academics are going to use heterodox principles for determining spending priorities. Meaning, we’re going to argue that some curricular and institutional commitments are so important to our ethos that they cannot be subjected to simple cost/benefit rubrics such as putting the most support into the most popular or most heavily enrolled subjects. Not just because of ethos, but also because academic institutions have a different commitment to the long-term than a business might: we’re trying to make administrative decisions that hold water for twenty or thirty years, and when it comes to knowledge production, arguably we’re trying to operate at a far longer time scale than that. At the same time, we’re also going to want to make some decisions based on cost/benefit analysis. I think it’s wrong to try and reduce every financial decision down to only one underlying principle, to make consistency the only value. However, this heterodoxy carries a price: it means that everyone at all times has to be willing to expose a financial decision to all these perspectives. Nobody gets to withhold financial decisions from cost/benefit analysis, and nobody gets to assert that high demand for their teaching or research services is sufficient justification for high allocation of resources.

2) Some decentralization of financial decision-making is consistent with the productive institutional norms of academia. But again, disciplinary or departmental influence over decisions about budgets and resources does not entitle disciplines or departments to intellectual or administrative independence. We all owe each other a constantly renewed explanation of our institutional importance and we all owe each other skeptical examination of those explanations. I think that’s true when times are good and there’s a desire to add some new faculty or staff capacity, and it’s even more true when times are bad and some kind of current capacity is necessarily going to be reduced or degraded.

I think this is true in a liberal arts institution even when it’s not a question of resources: we should always be interested in what other departments do, and part of being interested as an intellectual is having some critical awareness about the intellectual and programmatic commitments of other disciplines. I cannot competently advise or teach a student in the history department if I’m not prepared to tell them when their interests or arguments within historical study have started to shade towards another epistemological, methodological or disciplinary tradition. I need to be knowledgeable enough to tell a student when their preferred approach is better served in another social science or another humanistic discipline, or when the questions they’re asking are better worked through one of the natural sciences. If I’m knowledgeable enough to do that, I’m knowledgeable enough to make some rough judgments about the legitimacy of the budgetary or financial preferences of other departments.

——

I think most academics are less prepared or willing to talk about the cost/benefit side of these decisions. That’s partly because we often have restricted access to all the information needed to assess particular budgetary decisions, and so we avoid having a strong opinion. It’s partly because we’re anxious (properly) that the language of cost/benefit can easily be misused to vulgar or destructive ends, that it proposes a false equivalence between businesses and universities.

That said, in a small undergraduate college where teaching is the priority, here are a couple of basic cost/benefit principles about faculty work that seem to me very important.

1) Specialization is expensive.

This is the hobbyhorse I ride most frequently at this blog, so long-time readers, bear with me. I think there are principled arguments for a more generalist approach to liberal arts education and publication, but the financial rationale also matters. The more highly specialized a faculty becomes, and the more that they feel that they can only teach to their specializations, the more intolerable the absence of other specializations becomes and the more pressure there is to address those absences through the addition of other specialists. When you argue that the value of your teaching lies in the importance of your specialization to your discipline and to the curriculum and build the curriculum accordingly, you make the institution far less flexible over the very long term. It can only address changes in knowledge or in social priorities with new personnel and resources. The institution can only understand the lack of other specializations as an absence or failure. No matter how big a university is, there are areas of study it does not address, but a 40-person history department can get a lot closer to coverage of the map of available specializations than a 9-person department can. The more driven you are by specialization, the less a 9-person department can ever be excellent in comparison to a 40-person one, regardless of the talents of the faculty members. Because at that point, you're hiring the specialization, not the teacher. There are real financial pressures that mount steadily when courses stop being roughly equivalent to each other in how they provide intellectual value, critical thought, skills and competencies to students.

2) Highly sequential curricular designs are expensive, especially when they’re combined with generous leave policies.

As an extension of the first point, when a curriculum is built around a lengthy sequence of required courses which trend towards more and more specialized inquiry, it carries a big price tag for two primary reasons. The first is that for students to graduate, all rungs of the curricular ladder must be available with sufficient frequency for every student to move through the progression in a timely fashion, especially if there is no formal system of rationing the numbers of students allowed to climb any given ladder. This means a lot of leave replacements or it means fewer leaves. The second is that the longer the sequence, the more it is likely to require many small classes at the end (see the next point) unless there is a strict restriction of possible lines of sequential study, which tends to be less and less plausible the more an institution caters to specialization. E.g., if a history department is built around trying to have faculty who are highly competent in narrowly-defined specializations, they are more likely to argue that a sequential curricular design requires a possible culmination in each and every one of those specializations.

3) Small classes are expensive. Persistently small classes are really expensive.

A course with smaller-than-average enrollments, even in a small institution, is a more expensive course. A faculty member whose courses are always small is a more expensive faculty member. A department or discipline whose courses are always small is a more expensive department. This may be absolutely ok, either because pedagogical objectives that matter for principled reasons cannot be served above a certain class size, or because a particular subject or discipline is deemed so important to other disciplines or to the overall mission of the institution that it is worth having at any price. A 1:1 allotment of financial resources to enrollments is a really bad idea for any college or university, of any size: that entails giving up any conviction about what an educated person ought to know.

The more you budget based on matching enrollments, the more you should drop any and all requirements or curricular structure. That being said, courses that are persistently smaller than the average enrollments at a college need to be justified in terms of principle. This tends to be a really uncomfortable subject for discussion in any college that I’m familiar with, and you can see why: it can quickly become a swamp of personal complaint, full of insults intended and unintended. So maybe it’s wise to stay away from trying to micromanage, but if money’s an issue, it’s also important not to overlook the budgetary consequences of enrollments.

4) Tenure is not that expensive, but it’s not cheap, either.

The hostility to tenure outside of academia comes from a lot of different places, and I’m not interested in reviewing all of them right now, as I’ve talked endlessly on this subject in the past. But one of the common perceptions is that tenure itself is a major financial liability. That’s not true to the extent commonly supposed. If all other things are equal in a curriculum, a tenured and an untenured faculty member with a long-term renewable contract cost about the same in terms of salary and benefits. If you’re looking to radically change the average class size upward, eliminate multiple departments in a very short-term timeframe, in short, to do a more-or-less corporate kind of downsizing, then tenure prevents an administration from taking those actions and is in that sense expensive.

But making a Mercedes-Benz factory into a Lada factory is only partially about the wages and skill levels of the workers: it's more about changing entirely the consumers you're selling to. If you want to get rid of tenure so that you can completely restructure a college or university to have far fewer faculty, a completely different mix of subjects taught or far larger classes, that's not really about your costs, it's about making a new kind of educational product intended for a different market. The expense of tenure is pretty much the same thing as the expense of specialization and sequentialization, and as a result, the expense of tenure rises in proportion to specialization and sequentialization relative to the size of the institution. It simply means that over a thirty-year period, you're locked into a single teacher once you've given them tenure. If you wish you had another subject represented in the curriculum fifteen years down the line and you have no hope that an existing specialist will address that absence, you're stuck unless you spend more money. If you have a highly generalist faculty and they become aware of an exciting new area of knowledge, they may be able to address it within the curriculum without having to create a new position. The cost of tenure is the loss of flexibility. The less flexible an institution is in other respects, the more costly tenure becomes.

——

There’s a flip side to all these issues as well. Too much generalism and students will be very poorly prepared to excel in highly specialized careers or in graduate study. Many students know this, and so you lose them to other institutions. Too few areas of study represented or accommodated, and you look pretty crappy compared to peer institutions. (Though caveat emptor: at all sizes, universities and colleges tend to have a certain number of Potemkin-village storefronts in their course catalogs, programs which are more notional than actual.) No sequences at all and students end up with formless and non-comparable experiences. Big classes have their own costs, not just to learning but in terms of facilities and infrastructural support, as well as tending to reduce the diversity of the curriculum.

It’s just that a lot of these kinds of decisions are often seen as cost-free or entirely driven by principle (or even just by the inertia of generalized academic values). Cost maybe should not be the exclusive driver in thinking about how to structure higher education, but if you’re not aware of how much it matters by now, you’re really not paying attention.

Posted in Academia, Swarthmore | 5 Comments

Mistakes Were Made

There have been two legal cases in the news lately involving officials and teenagers, and you’ve probably read about both of them. The first involves a young woman who was strip-searched because administrators suspected she had double-strength ibuprofen concealed in her clothing; the other involves a threat to prosecute a teenager who sent a picture of herself in a bra to another teenager’s cellphone.

Allow me to recommend to you my colleague Barry Schwartz’ TED talk, “The Real Crisis? We Stopped Being Wise”. I don’t always apply Barry’s arguments in this talk the way that he does, but I think the thrust of what he’s saying is spot-on. This is why I have such a strong interest in the perverse or unexpected effects of bureaucratic initiatives or fixed policies, whoever is promoting those approaches, to whatever ends: so often we turn to policy to save us from having to work through the human world around us one step at a time, with our hearts and minds fully engaged.

In a wise society, it’s possible that someone would have asked whether the fair enforcement of an anti-drug policy required being tough on ibuprofen. It’s possible that concerns about “sexting” and exploitation of young people would lead to concern about any pictures of that kind. But equally, I feel that in a wise society someone would say, “No, let’s calm down here and not do something dumb.” And if no one did, then in the face of a colossal mistake like threatening a child-pornography prosecution or carrying out a humiliating strip-search, the person responsible in a wise society would own up to the mistake, apologize, and try to find a way to make up for the error in judgment.

Just as the common thought about Watergate is that the cover-up was worse than the crime, so too is the stonewalled refusal to admit error in these cases far more infuriating than the original mistakes. The observers who think the enforcement of the rules is more important than the mistakes end up siding with the rules even when they concede that maybe the particular actions of authorities were a bit imprudent. It just makes me feel that somehow we’ve really lost our way. Maybe we never were on the right track in the past, either, but these cases feel in some ways like such a simple matter of the misuse of authority, of letting the rules rather than ordinary human common sense drive the business of everyday life.

EDIT: I’ll add this case discussed today at Ta-Nehisi Coates’ blog, just for an example of an authority in charge doing the right thing. A Dallas cop stopped a family who ran a red light as they rushed to the hospital to try and see a dying family member before the end. The cop paid no attention to what he was being told and proceeded to berate the driver. But the police chief is a straight-shooter: not only does he say bluntly that his officer was in the wrong, he makes it clear that your authority and training aren’t expected to substitute for your ordinary human empathy and understanding, that there’s no excuse for failing to show common sense. That’s what wisdom first, rules second looks like.

Posted in Politics | 5 Comments