Calling All Librarians, Info Scientists, Digital Humanists

I’m still struggling with how to begin a project that I would like to commit to for the rest of my career. The issues are both technical and conceptual.

What I want to do is begin publishing and archiving my notations on scholarly (and maybe some non-scholarly) readings, to document my workflow as a reader and thinker. I see this as having several important uses.

First, scholars studying book history and print culture have now thoroughly made me over into a believer in the value of marginalia. The beauty of digitization in this sense is that it allows us to create marginalia without defacing a singular physical copy of a text. On the other hand, the ephemeral character of a lot of digital reading and notation means that much of the marginalia that we might otherwise have will never come into being or will be lost.

Second, I think one of the major reputational problems that academia has, particularly humanities scholars, is that much of our workflow is invisible or poorly understood even by sympathetic publics. I’ve been cleaning out a closet here in my office this week and even I’m a bit flabbergasted by the transcript of my working life since 1988: piles and piles of notes and commentary on readings, compilations of library records intended to drive both research and my awareness of my fields of specialization, and a lot of other tracings of a working life. At least some of this might be more useful to me and to others as a tool for inquiry and study if it were searchable and visible, but such records might also help me and other scholars to document and explain what it is that we do.

I’m clear about what it is that I want to do and why I want to do it. What I’m not clear on is how and where.

Here are the specifications that I want to meet:

1. I want to publish and archive these digital marginalia as data in a platform-agnostic form that could be pushed into multiple locations. Say, for example, that I could have a page or location at this blog where they would appear, but I might also publish them through my catalog at LibraryThing and as shared Zotero notes, for example.

2. I would like the marginalia to have a fixed link tying the notes to the specific bibliographic record of the material that they are based upon, and to have that link embedded in the data.

3. I want the baseline marginalia to be machine-readable and available to anyone else who would like to take the data and associate it with relevant catalogs or other archives, under a Creative Commons license that only requires attribution. I’d actually like to embed the attribution into each record so that the data-sharing can be automated. Other users would be free to add their own metadata or cataloging information.

4. I want to create some kind of simple tagging scheme that can support my own folksonomy to help me understand and search the eventual archive of my notes while having the total archive also be searchable in a more open-ended way.

5. I want the archive to be visible to external search engines.

6. I want the archive to have no special or particular support needs or costs beyond the storage space and Internet connectivity required. E.g., nothing that would require serious customization and maintenance by staff besides myself. This is another reason for platform-agnosticism: to minimize the hassles involved in migrating the data onto whatever information infrastructure comes next.

I can see how to do some of these things with my very crude knowledge, but most of what I can think of fails at least one, maybe several, of these conditions, and is in many ways quick and dirty. If I’m going to invest the effort in shifting this workflow from written notes and fragmented data that I have in both analog and digital form, I want to do it in as stable and useful a form as I can manage.
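
Just to make the target concrete, here is a rough sketch (in Python, only because it is easy to read) of what a single machine-readable record might look like once serialized to JSON. None of the field names below belong to any existing standard; they are placeholders for the kind of data I have in mind, with the bibliographic link, the attribution and the license embedded directly in each record, and the identifier value is itself a placeholder rather than a real ISBN.

import json
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class MarginaliaRecord:
    # A stable pointer to the bibliographic record the note hangs off of;
    # an ISBN URN, an OCLC number or a catalog permalink would all do.
    source_id: str
    source_citation: str
    # The note itself, plus an optional page or location anchor.
    note: str
    locator: str = ""
    # Free-form folksonomic tags for my own searching.
    tags: List[str] = field(default_factory=list)
    # Attribution and license travel inside every record so that
    # sharing and reuse can be automated.
    creator: str = "author-name-here"
    license: str = "CC-BY (attribution only)"

record = MarginaliaRecord(
    source_id="urn:isbn:XXXXXXXXXXXXX",  # placeholder identifier
    source_citation="Chandra Talpade Mohanty",
    note="Does 'Western feminism' really have the 'coherence of effects' claimed here?",
    locator="p. 52",
    tags=["feminism", "colonialism", "discourse"],
)

# JSON is platform-agnostic, visible to search engines when published as
# plain text, and easy to migrate onto whatever infrastructure comes next.
print(json.dumps(asdict(record), indent=2))

Something in this spirit could be pushed more or less as-is to a page on this blog, to LibraryThing or into shared Zotero notes, and anyone who wanted the data could remap the fields onto whatever cataloging scheme they prefer.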

Suggestions, ideas, criticisms all welcome.

Posted in Academia, Digital Humanities, Information Technology and Information Literacy, Swarthmore | 16 Comments

Hell Is Other Gamers (And Some Games)

Game developers talking about “culture” are often deeply frustrating. Either they are overly credulous about how design can directly and symmetrically create a particular set of cultural practices and outlooks within a game, as my friend Thomas Malaby has observed about Second Life, or they see gamer culture as a hard-wired or predetermined result of cognitive structures and/or the wider culture of the “real world”. Only rarely do they hold both views in a somewhat more nuanced but contradictory way: Raph Koster, for example, has at times argued that particular design features in games (say, the implementation of dancing and music in Star Wars: Galaxies) can create or transform cultural predispositions among players, but has also argued in his Theory of Fun that gameplay and “fun” are driven by fixed cognitive structures and tendencies.

Developers tend to favor one of these two viewpoints because each either makes the culture of play in a particular game something that they can design towards or makes it a fixed property that they have no power over, something they can imagine either completely controlling or being completely helpless to control, and in any event, something easy to summarize in a reductive, mechanical way. They’d rather have either of those than what the culture of play in a particular game really is: an emergent and contingent result of interactions between particular design features, the general cultural history of digital games and their genres, the particular sociological habitus of the players, and the interpretation of visual and textual elements within the game by different players (individually and in groups).

When Aris Bakhtanians said that sexual harassment was “part of the fighting-game community” he was, in a way, perfectly correct in an empirical sense. This is not to say that all or even most players of fighting games, even in competitive gaming, practice harassment of the kind Bakhtanians infamously displayed, but that sexual harassment and harassing attitudes are commonly witnessed or overheard in a great deal of online gaming, as are the harsh and infantile abusive responses flung at people who complain about such behavior or expression. The one truth sometimes spoken in such responses is that outsiders don’t really understand how such things get said or what they mean. Outside critics and designers alike would often prefer for “culture” of this kind to be easily traced to the nature of the game itself, either its semantic content or the structure of play, or for the culture of the game to be nothing more than a microcosm of some larger, generalized culture or cognitive orientation, an eyedrop of sexism or racism or masculine misbehavior in an ocean of the same. If that’s the case, either there’s something quite simple to do (ban, suppress or avoid the offending game or game genre) or the game is only one more evidentiary exhibit in a vastly larger sociopolitical struggle and not an issue in its own right.

Understanding any given game or even a singular instance of a game as “culture” in the same sense that we understand any other bounded instance of practice and meaning-making by a particular group of people, with all the unpredictable, slippery and indeterminate questions that approach entails, means that if you care about the game as an issue, you have to spend time reading and understanding the history and action of play around a particular game. The stakes are very much not just academic (are they ever?): certainly the viability of a particular game as a product in the marketplace hangs in the balance, sometimes an entire genre of game or an entire domain of convergent culture is at financial risk. But also at stake are the real human feelings and subjectivities of the players themselves, both within the game culture and in the ways that those identities and attitudes unpack or express in everyday life as a whole. If we’re going to argue that game cultures teach all sorts of interesting and useful social lessons, or lessons about systems and procedures (as we should) then we have to accept that some of the social lessons can be destructive or corrosive. Not in the simple-minded, witless way that the typical public complaint about violent or sexist media insists on arguing, sure, but we still have to ask what the consequences might be.

I sometimes identify myself as a “game culture native” who happens to express his views about games within scholarly discourse rather than as a scholar drawn from outside to look at games. So in native parlance, one of the things that strikes me again and again when I play multiplayer games is that I find it extraordinarily painful to recognize that what I romantically imagine as a refuge for geeks is in fact horribly infested with the kinds of bullies that we were all trying to get away from back in the 1970s. When I first started playing computer and console games in the early 1980s, they enraptured me more than stand-up arcade games in part because you could play them privately in the home or in quiet computer labs on a device that you controlled, and communicate with others in-game largely at your own discretion or preference. They also tended to be more complex and slower than coin-op games and to derive much more of their themes and narratives from existing science-fiction and fantasy. The games themselves were a refuge, and their enabling technology was a refuge. Much of the same was true, at least for me, with pen-and-paper role-playing games. They were so derided and marginalized in the mainstream culture of my peers that I never felt any particular risk that some popular kid or hulking bully was going to show up in the middle of a gaming session and take my lunch money.

By the time that game culture spread more widely in the 1990s and 2000s, neither of these feelings held particularly well, and nowhere did I feel that more acutely than in commercial virtual-world games from Ultima Online onward. Suddenly here I was, exploring a dungeon and fighting monsters with a group of strangers, at least some of whom seemed pretty much like the kids who had shoved me into fences or kicked me in elementary and junior-high school. It wasn’t as personally threatening to me as a confident, secure adult, but it was at the least depressing and repellent. The general Hobbesian malaise that these players brought to gameplay was seasoned by extraordinary forms of malevolent play that came to be called “griefing” and by an accelerating willingness to give uninhibited voice to crude sexual boasting, misogyny, racial hatred and gay-bashing. Sometimes, I ended up feeling that there wasn’t any real sentiment or deliberate feeling behind the braggadocio–at a certain cultural moment, calling something “gay” in gamer parlance really did feel to me as if it was a non-referential way to simply say something was dumb or annoying–but a lot of the time there was in fact real force and venom behind the words.

Over time, many of us learned to ignore much of this behavior as background noise or to use the increasingly responsive tools provided by developers to control exposure to obnoxious or harassing individuals. We played only with friends or trusted networks of people, we used /ignore tags in general chat to make it impossible to ‘hear’ offensive players, we didn’t play in games known to have particularly ugly or unpleasant internal cultures. We realized that some of the most offensive behavior and attitudes are basically adolescent transgressions against mainstream consensus. A griefer or troll doesn’t care what the semantic content of their griefing is, only that it bothers or angers someone, so the easiest way to deflate them is to ignore them. We learned that sometimes being offensive is also a competitive tactic, as it is in many sports or other games: being deliberately obnoxious can unbalance or obsess a competitor.

But it still gets to me sometimes personally. It’s just that doing anything about this cultural history is no easier than it is to do something about anything else “cultural”.

To give an example of the complexity, let me turn to World of Warcraft. I hadn’t played World of Warcraft in months: I’m bored by the game itself and I feel as if I’ve learned everything in a scholarly or intellectual sense that I can from its player culture. In the last week, I played a bit at my daughter’s urging. It was interesting up to the point that I went off to do some “daily quests” in an area called Tol Barad where players fight each other every two hours or so. The quests are standard WoW design: boring, repetitive, Zynga-like exercises whose completion gives the player a bit of money and a small gain in reputation with an in-game faction. At a certain point, the player will have enough reputation with that faction to purchase improved gear that will make the character more powerful. The repetition is somewhat soothing, a kind of gentle mindlessness, but to really progress through doing the quests, players have to do them every day for a substantial period of time. In this particular area, the daily quests are leavened by a battle between the players themselves. If your side wins, it gains access to another set of daily quests within the zone and to several areas of content for larger groups to complete together. If your side loses, you have no access to these quests until the next battle several hours later.

The battles are at least potentially fun and interesting, and a relief from collecting crocodile hides. So I hung around Tol Barad until the battle. World of Warcraft has over the years refined its formula for these kinds of battles. It now caps the total participants (to keep one side from being ridiculously dominant in numerical terms), it forces everyone to join a single large “raid group” (to make it easier for everyone to communicate and monitor their own side), and it offers mechanics that try to balance strategic choices, short-term tactical coordination and a reasonably even chance for both sides to win. My side in this case lost, partly because it was less coordinated. Ok, fine, it was still sort of fun. But as the loss became imminent, a torrent of abuse began to spill out through the raid group. A small number of players started shrieking about how bad everyone else was, what failures we all were, how we should be embarrassed to play the game, how we were a bunch of useless faggots and so on. Over a basically trivial part of the game that will be repeated again and again all day long. That’s pretty typical in WoW: the more you play and the more that your play associates you with strangers, the more you will see both extraordinarily poor behavior by individuals (that is often condemned by the consensus of a group) and generically poor behavior that is ignored or accepted as inevitable even though most people do not themselves participate in that behavior.

This surely limits both the numbers of people who might play WoW or any game like it and the comfort level of players within the game to participate in all the activities it offers. But consider how complicated both the genesis and the consequences of this aspect of the game’s culture really are.

First, consider the evolution of “chat” as an expressive practice within virtual-world games. A game like WoW is shaped by a very long design history that goes back to non-commercial MUDs and MUSHes, in which chat channels were the major way the game supported a sense of community or sociality, and which thus established the expectation that such a game should be social. The sociality of WoW and other games like it is still a defining attribute, and is notoriously credited with keeping players as participants long after they’ve grown bored with the content. So you have to have chat. Whenever the designers of WoW have attempted to curtail “global” or large-scale chat that tends to expose the totality of the game’s culture to the worst expressive practices of its ugliest margins, players have typically managed to subvert their intentions and recreate a global or large-scale chat channel. Early commercial virtual worlds spent much more time and money trying to police the semantic content of player expression, or tried to use filters to prevent offensive expression. Both efforts were easy to defeat, the first simply through volume and persistence, the second through linguistic and typographic invention. Attempts by players themselves to discourage or sanction offensive expression have only had force inside small social groups. A competitive guild can often impose restrictions on what its members do, booting a griefer or harasser. But such a player is simply expelled into the “general population”, and there’s always another guild around the corner that needs a member, or in WoW’s later evolution, a random pick-up group that will endure such a player for the short time that it must bear his or her company.

It’s not just the mechanics by which you say things, but what you’re doing that matters. Almost all of WoW’s gameplay involves the incremental accumulation of resources that will help players in the incremental accumulation of better resources. This is competitive in two ways: first, that a resource you gain is often a resource denied to someone else. Second, that your total accumulation of resources is read off into the game’s public culture as a status effect, sorting players into hazy hierarchies. These hierarchies are temporally unstable: no matter how powerful you are, each expansion of the game will render your previous power over the environment and your previous superiority to other players null and void. They are structurally unstable: Blizzard frequently tinkers with the game mechanics and may at some point put a given type of character at a substantial in-built disadvantage or advantage to others, regardless of how much they have accumulated or how skilled the player is in controlling a character’s actions. These hierarchies do not have an even symbolic meaning across the whole of the game’s culture. Some players never engage in competitive accumulation: a dedicated “casual” who plays with a small group of friends and a serious “hardcore” who plays with a large group of equally dedicated and intense players rarely intersect, rivalrously or otherwise. But the large “middle class” of the game are often competitive with both poles: needing casuals in order to carry out competitive acquisition, wanting parity with the hardcores. When a game is built around the rivalrous but incremental accumulation of resources, its very structure encourages certain forms of aggression, status-laden disdain, and attempts to suppress rivalrous action by any means necessary.

If you want a contrast, look at something like the sharing of creature designs in Spore. Spore wasn’t a terribly successful game, but it did create a fantastically successful player ecosystem in terms of people being highly motivated to create interesting designs and share them with as many people as possible. The fundamental structure of a game’s design influences the kind of sociality that appears within its culture, and it invites or fosters imagined alignments between a game culture and the wider culture. Incremental accumulation, social hierarchy and the strong desire of people at the “top” to have permanent structural separations between themselves and the plebeians who have to collect boar livers or file TPS reports? That’s a bridge for a lot of ugly sentiment and frustration to cross regularly between WoW and the world.

But then consider also the history of gamer sociology, or the movement between games, neither of which Blizzard is particularly responsible for or able to control. Even within virtual worlds, there are really bad neighborhoods and relatively anodyne ones. Sometimes by design. I actually accept and admire the ugliness of the internal culture of EVE Online: it has the same authorial intentionality (by both designers and players) that any other work of art set in an ugly or unpleasant aesthetic might. Toontown is light-hearted because of content, because of mechanics, and because it disables the sociality of players on purpose. Sometimes as an emergent, accidental evolution. I don’t think there’s any simple reason exactly why multiplayer game culture on X-Box Live should be as baroquely unpleasant and misanthropic as it is, but I simply won’t do anything multiplayer on that platform unless I absolutely have to for research. The worst I’ve experienced on WoW is nothing like what you’d hear in a really ugly session of a bunch of random strangers in a multiplayer shooter on XBLA. Gamer culture is and has been for a very long time leavened by young men who at their worst spew a lethal cocktail of nerdrage, bullying and slacker entitlement into conversational spaces, forcing other players to retreat, ignore or leave.

There is no simple instrumental pathway into that kind of “culture”: any attempt to change it by command is going to be useless at best, actively backfire at worst. Here game designers sometimes have good ideas: giving players tools to shape their socialities helps a lot. If being an “anonymous fuckwad” leads to increasing exclusion or marginality within a game culture, enforced by mechanics that players themselves control, then it takes much more deliberate agency to be a fuckwad. But if developers are going to consider giving players more agency over their own social practices and institutions, they also have to think about where their designs have become the equivalent of chutes herding cattle towards slaughter. The kind of operant conditioning that Blizzard has made the defining feature of MMO design, and which has been Zynga’s stock in trade, doesn’t encourage the growth of rich social worlds that can evolve and complicate. If you’re a farmer growing a monoculture, you don’t expect a forest–and you’re far more vulnerable to parasites and disease wiping out your crops.

Posted in Digital Humanities, Games and Gaming, Oh Not Again He's Going to Tell Us It's a Complex System | 1 Comment

A Sample Class Prep

I’m still thinking about ways to put some of my note-taking in my ordinary workflow into a disseminated or published form. More on that soon, as I’m seeking advice. But one other summer project has been to dig into my old files, figure out exactly what I’ve got there, and what if anything I want to keep.

For some reason, I kept a big pile of graded papers from my first semester working at Swarthmore and some class prep notes from a course I taught early on. One of the biggest misconceptions about teaching at any level, but often particularly in higher education, is that most faculty just teach “cold”, from what they know, or from old or static preparations. Maybe there’s someone who does it that way, but for me, every class session takes specific preparation that typically includes a review of the material and a sort of “game plan” outline of the issues and material I want to be sure to discuss or cover at some point.

Reading my old notes, I can see that I was a bit more detailed and extensive in my preparation then, but my practices have remained substantially the same between 1998 and today. I was a little bit more anxious then to make sure that the entire content of the readings was fully summarized and understood before moving on to more open discussions.

—–
Here are my notes for a class session in my Gender and Colonialism course, in a week when the students had read Chandra Talpade Mohanty and Edward Said.

Orientalism (Said)
critique of “discourse”

    what is “discourse”?
    what is “discourse” as described by Mohanty?
    “Western feminism”: where is it located, how has it been produced? Is Zed Books really an example of “Western feminism”?

how does discourse have power? what kind of power? what’s the theory of power behind the idea of hegemony?

    does “Western feminism” really have hegemony? (a “coherence of effects” p. 52)
    doesn’t this end up rendering “Western feminism” as a monolith?
    Woman with a capital-W vs. “women”

‘Third World woman’ as analytic category

    what is ‘powerlessness’ (relate to theories of ‘power’)?
    alterity and pathology (ref. Mohanty’s critique of the ‘Third World woman’ construction)
    universalism and rights-talk
    victimization, victims and colonialism
    role of religion, family, economic development in ‘Third World woman’ construction
    ‘sisterhood is global’ vs. ‘patriarchy is global’

Problems?

    how local do we go? Aren’t “African woman”, “South African woman”, “Zulu woman”, “educated Zulu woman in the 1930s” all constructions that typify or generalize too? Is there a point where we stop having the problems of “Third World woman” as construct? (Or vice-versa, do these constructions suggest that “Third World woman” isn’t a problem?) How do you know what the right level of generalization is? Are nations or regions or communities the unit of comparison, or is comparison itself the problem? Equally, aren’t “patriarchy”, “worker”, “domesticity” etc. terms with the same kinds of problems? Is this a ‘know it when I see it’ thing? Maybe it’s not the construction per se but particular cases of its use…

    maybe Western feminism doesn’t have ethnocentric goals? Maybe feminism or other ‘isms’ should be universal? (Humanism?) Is there a problem with just using ‘Western’ as epithet?

    maybe non-Western women don’t care about ‘Western feminism’ until they’re operating in cosmopolitan or ‘Western’ discursive or institutional contexts? Maybe non-Western women do “represent themselves”, just not where scholarly or cosmopolitan cultures can easily see or record them doing so?

    maybe colonialism and modernity actually created a ‘shared identity’ of “Third World women” that is now real for all that it’s also a construct? Crying over spilled milk?

——–

From a subsequent class on silence, speech and representation, working from readings by Spivak, Susan Gal and Luise White:

Summary of Gal’s argument

    distinction between speech and speech acts

    What does Spivak mean by “subaltern”? by “subaltern speech”?

1. Speaking to themselves: is it possible to describe or reproduce what colonial subjects said to each other?
2. Speaking to colonial rulers: did colonialism listen to its subject? did it leave transcripts of what it ‘heard’?
3. Can 1st world scholars/audiences ‘hear’ what colonial and postcolonial subjects say ‘to’ us? Do they speak from within scholarship or description?
4. if not to all three, is this an accidental or instrumental ‘silence’? what does it mean?

Discuss ‘silence’ as idea

    silence as withholding/refusal
    silence as accidental or incidental absence of talk in a setting
    silence as forgetting or repression (in psychological sense)
    silence as the absence of power in speech (people speak but it doesn’t matter; they aren’t heard)
    silence as oppression, as the suppression of speech

Luise White on silence as an opportunity or invitation to the work of interpretation

Talk about pedagogy, classroom discussion, etc.
If something is a “male space” is that necessarily a criticism or is it just a description? Explore issue, see where students are at on this, try to get folks involved in discussion here

——–

From the next class session on Dangarembga’s Nervous Conditions

Using novels as sources/documentation: dangers and possibilities

“Voice” as a problematic concept (and “audience”)

gender and aspiration (class): “I was not sorry when my brother died”; Maiguru and other women, p. 138

gender and power (what would be ‘colonial’ here if anything?)

    missionaries and education
    division of labor
    rural/urban
    memory and the past (pp. 18-19)

what if moving closer to colonial power has an emancipatory possibility for women? p. 18, Mr. Matimba’s intervention, p. 74 and “disrespect”. Is this an accident or is it a deliberate product of colonial authority?

    education and inequality
    bridewealth and commodification of women
    “another step in the direction of my freedom” p. 138

what is “freedom” in general? In terms of gender? In terms of gender in colonial and postcolonial situations? is freedom necessarily Western or universalizing? (“the things I could have done”, p. 102, p. 174)

“tradition” as problem category/construction

    washing pp. 40-41
    dancing p. 42
    “Nyamarira that I loved” p. 39
    other examples (food preparation, etc.)

masculinity and hierarchy
“Babamukuru was God”, p. 70

———–

I also was keeping notes on scenes from films that I thought depicted interesting examples of colonial masculinity for a possible “mix tape” (which back then, I did by bringing in a bunch of VHS tapes cued to particular scenes…) presentation. I remember that my main film was “The Man Who Would Be King”. I think I continued this list elsewhere, on another notepad. Perhaps that will turn up soon as I keep going through this material.

1. Harem scene in “Spy Who Loved Me”
2. Uncle Tom’s Cabin sequence in “King & I”
3. General Dyer court-martial in “Gandhi”
4. “Mountains of the Moon”, scenes that play up Burton/Speke homoeroticism but also enduring violence
5. Pissing scene in “Shogun”
6. Going native/fire dance in “Dances With Wolves”
7. Opening scene in “Robinson Crusoe of Clipper Island”
8. “Shaka Zulu”: effeminate Englishmen v. masculine Shaka

Posted in Academia, Swarthmore | 1 Comment

What Has It Got In Its Pocketses?

Three movies for The Hobbit. Like a lot of other geeks, I’m wary of this. (Non-geeks who know geeks are probably feeling despair instead, knowing they’re going to be dragged to all three.)

Joking about the second film being a documentary on the genealogy of the Stewards of Gondor aside, I’ve been doing a sort of mental inventory of what content Jackson could use and how that might add up to six to eight hours of narrative without the grotesque bloating of his King Kong. I actually think this kind of “cultural workshopping” is a great potential meeting ground for traditional humanistic critical analysis and practical cultural production. (My Swarthmore colleague Craig Williamson has taught some terrific classes along precisely these lines.)

So here’s what I come up with out of the LOTR appendices and The Hobbit (TH): either narratively important bits that happen entirely “off-screen” in the main book or bits that could receive a fuller visual and cinematic treatment in the film than they do in the book.

The War of the Dwarves and Orcs. (Over Moria, before TH starts.)

Gandalf’s espionage mission into Dol Guldur in which he obtains the map of the Lonely Mountain from Thorin’s father Thrain II.

Beorn wiping out a bunch of goblins with an army of bears and/or bear-men while Bilbo and the dwarves sleep in his house.

The White Council’s assault on Dol Guldur along with expository set-ups earlier in the narrative (e.g., consultations between Gandalf and Elrond while the group is hanging out in Rivendell.)

Elvish doings in Mirkwood that intersect with the dwarves.

More Elvish doings while Bilbo is doing his sneaking around. (Have to be careful here because the elves have to feel a bit more comical and less capable than LOTR elves.)

Smaug vs. Laketown/Bard

Gandalf’s return from Dol Guldur to the Battle of the Five Armies. (He gets intelligence about what’s going on from somewhere, after all.)

Saruman seeing Sauron in the palantir and being ensnared by him. Could happen at various points in this narrative. (This very question is debated by characters in LOTR.)

Battle of the Five Armies

Bilbo’s return trip and his new life in the Shire.

Gollum leaving the Misty Mountains, going to Mordor, being interrogated by Sauron, and later being captured by Gandalf and Aragorn.

Balin’s attempt to recapture Moria.

——–

Ok, is there a narrative line here that is neither bloated nor confusingly digressive? I can kind of see one, actually.

You cannot start the first film with the War of Dwarves and Orcs or Gandalf’s spy mission to Dol Guldur. That would be a tonal and narrative disaster. You have to start where the book starts.

But how do you make those two bits of story into something more fleshed out, something less of a footnote?

Well, the War of Dwarves and Orcs is important for injecting a much more personal arc into the Battle of Five Armies and giving the story one of three major antagonists. Smaug is the Big Bad of the film, and if Dol Guldur is going to have a major role, Sauron in his guise as the Necromancer is the other. But the third is right there in plain sight: Bolg, the orcish ruler of Moria, son of Azog, who killed Thorin’s grandfather and started the War of Dwarves and Orcs. All of this even rates a footnote when Gandalf starts yelling about what’s coming at the beginning of the Battle of Five Armies. So you use the backstory to give Bolg a way more personal presence in the overall narrative. What Jackson might even do is put Bolg into the Misty Mountain confrontations–maybe have him be visiting the Great Goblin, have him be chasing Thorin once they get out, have him survive the battle with Beorn and see him go off to raise up his armies. So a flashback to the war over Moria (which also lets you introduce Dain early on) somewhere around the time the dwarves first encounter the goblins might work very well.

The Dol Guldur spy mission can be expanded really well whenever Gandalf first gives over the map to Thorin. That might be a great thing to do in Rivendell–Gandalf is getting them ready to go through Mirkwood, he’s making plans with Elrond, so he can flash back to Dol Guldur. The other place that could happen is just before Gandalf rides off for the White Council’s assault on DG. So film #1, with these two flashbacks, gets you to the edge of Mirkwood–the big set-pieces that can close it out would be Beorn’s battle against the goblins and wargs and Gandalf flashing back to his spy mission and the terrible danger of an all-out assault on DG.

Film #2 is where I think the bloat problem is going to loom largest, because the obvious closers for a second segment, if there have to be three, are arriving in Laketown and the Assault on Dol Guldur. The dwarf and Bilbo segment of that film is almost weightless–it has the tension and character development of the long journey in Mirkwood, the spider battle and then the more light-hearted tension of being stuck in the elvish fortress and the barrel escape. The DG thing can have a certain amount of build-up through Gandalf meeting up with Radagast, preparations in Lorien or Isengard (maybe with some tension about whether Saruman has already been turned or not), and then the Big Battle. That whole battle is entirely open in narrative terms: we have no idea whether it’s just the three wizards, Galadriel and Elrond personally storming the place or whether there are armies clashing as well. But I think the best end for that is for them to think they honestly kicked Sauron’s ass, and for Gandalf to feel safe returning to help the dwarves with their Smaug problem.

I don’t think a second film can go any deeper into the main Hobbit story and that’s where I would see the worst danger of bloat. But since he’s going to go for three, that’s the only logical stopping place I can see, short of the Lonely Mountain (the film could plausibly extend into the maneuverings at Laketown and end with the dwarves and Bilbo starting towards the mountain).

The third film would then be the standard story through to Smaug’s attack on Laketown. Then maybe we see that Gandalf learns that Bolg’s armies are heading for the Lonely Mountain and it becomes a race–probably there can be a few set-pieces along the way of Gandalf fighting with some part of Bolg’s forces. Then we go back to Thorin being an asshole and Bard and the Elf-King blockading the Gate, Dain’s arrival, and then Gandalf getting all shouty about the Battle of the Five Armies. Big climactic battle, doubtless we get a dwarfo-a-orco between Thorin and Bolg at some point (I seem to remember that we hear a bit about that confrontation when Gandalf infodumps on Bilbo after he wakes up).

Then get Bilbo back to the Shire and DO NOT stretch that out, play it straight to the books, with one exception–have Aragorn meet Gandalf and Bilbo on the way home, have Gandalf confide to Aragorn that he’s a bit worried about this Ring of Bilbo’s but what the hey, Sauron has been crushed again so no biggie. Conclude the film with Gollum leaving the Misty Mountains and do NOT do it quite the way it happens in the books. Have him meet up with Aragorn and maybe Legolas, have Aragorn realize that this guy is talking about Bilbo and that his chatter about the Precious and the Master is quite alarming. Aragorn interrogates him, gets the whole Smeagol thing down. Gollum escapes, takes off for Mordor, because he figures that’s the only way to be safe from Aragorn and the elves. Aragorn goes in pursuit all the way to the Black Gate, sees Mount Doom erupting and realizes Sauron’s back in Barad-dur, figures he has to get back to Gandalf and Elrond right away with the bad news.

End the trilogy with Bilbo hanging in Bag End, smoking his pipeweed, saying “Well, all’s well that ends well”, and a furtive glance at the place where he’s got the Ring stored.

It can work, I think, but that second movie really worries me.

——

Also, Jackson’s been quoted saying that they’re going to expand or create a role for a female protagonist of some kind. Other than Galadriel there is literally no one in the main narrative or the appendices who they can use, so this will have to be a reimagined or wholly invented character. I suppose you could make one set of dwarf siblings into females but that messes with Tolkien canon about dwarves and besides, the one bit of footage everyone has seen doesn’t have an obvious female dwarf in sight.

Posted in Sheer Raw Geekery | 7 Comments

Hacker Job

Before I get to worrying about algebra, Andrew Hacker’s essay in the Sunday NYT made me worry about writing and research. As in, “This is poorly written” and “I don’t think he did much research”.

If I were marking the column up like a first-year student’s paper, I’d immediately be all over the meandering, confused structure of the essay and its tone-deaf alternation of tremulousness and tendentiousness (a combination that a lot of Hacker’s writing in recent years has demonstrated). And then I’d mark it up for the weakness of the research behind it. All of the questions he’s asking have been asked before and debated at length in the history of American education in general and mathematics education in particular, but much of the affect of Hacker’s own essay is that of discovering some long-ignored or never-asked question. Most crucially, he never really asks (or looks into) the basic question: “So why do most mathematics educators believe so strongly that algebra is an important objective in K-12 education?”

The way that Hacker frames the issue is consistent with the corrosive form of populism he’s been peddling lately: that he is uncovering a sort of “educators’ conspiracy” which has no real explanation other than the self-interest of the educators. If he were to frame it as, “This is an interesting on-going debate where the various sides have coherent or well-developed arguments that have both technical and philosophical underpinnings, and here’s the side that I’m on”, he’d be doing a public service. As it is, he’s just yanking some chains, either calculatedly or out of feeble cluelessness.

If you were going to reassemble the column so that it built up to a genuine argument, I think it might look something like this:

1. Algebra is a common part of the mathematical education of most Americans as well as in other school systems around the globe. By way of introduction, here’s what algebra is. (Baseline definition.)

2. Algebra is a common stumbling block for American students, far more than any other subject they study in K-12 education. (Evidence thereof, which Hacker cites fairly well.)

3. Why do we believe algebra is an important educational objective? What do mathematicians, educators and others say about this? How did it get into the common K-12 sequence?
3a. Because algebra is believed to be an important conceptual precursor to every other form of advanced mathematical thought and inquiry.
3b. Because algebra is believed to be an important practical precursor to mathematical skills used in many professions. E.g., because “you will need it later in life”.
3c. Because algebra is believed to be “good to think”, a way to get high school students to regard mathematics as a form of critical and imaginative thought rather than an area of rote calculation.

Here’s where Hacker really falls down: these questions aren’t evaluated or explored in a remotely systematic or coherent way.

4. Are these assertions true?
4a. Could you learn other fields of advanced or practical mathematics without any knowledge of algebra? Or is there a simple knowledge of algebra that is sufficient for certain kinds of progression?
4b. Do many careers really use algebra, or have knowledge of algebra assumed in their use of quantitative skills and data? Are there everyday uses of algebra that are important to an educated citizenry?
4c. Is algebra really useful for quantitative forms of critical or imaginative thought?

5a. If 4a and 4b are in fact true, is there a different or better way to teach algebra that would allow more students to progress successfully through it, or at least to mitigate or excuse their inability to do so? If 4a and 4b are not true, why do we believe them to be true?
5b. 4c presupposes that the goal of high school is progression towards critical and imaginative thought. Are we sure that should be the case? If it’s not the case for math, shouldn’t that be true for everything? Maybe this is an argument against high school in toto, at least as it commonly exists?

5a is where Hacker’s essay seemed to me to just crash and burn. He fumbles around in the dark when he concedes that yes, it’s important for people to be quantitatively literate both as citizens and for their employment prospects but that no, you don’t need algebra for either. I’m not particularly quantitatively literate myself, but I think trying to read and work with statistical data with no knowledge whatsoever of algebra would be very difficult. I don’t know how you’d do anything with algorithms without having some conceptual grasp of algebra. (Just to mention two of the things that Hacker agrees citizens and employable people ought to be able to do.)
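
To pick one homely, hedged illustration (the margin-of-error formula here is just the standard textbook approximation, not anything drawn from Hacker’s essay): even the most basic act of reading a poll responsibly involves rearranging an expression to solve for an unknown, which is to say, doing algebra.

import math

# A poll's margin of error is roughly z * sqrt(p * (1 - p) / n).
# Asking "how many people do I need to poll to get within 3 points?"
# means solving that expression for n.
def required_sample_size(margin, p=0.5, z=1.96):
    return math.ceil((z ** 2) * p * (1 - p) / (margin ** 2))

print(required_sample_size(0.03))  # about 1,068 respondents for +/- 3 points

The calculation is trivial, but there is no way even to set it up without the algebraic habit of treating n as a quantity you can solve for.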

It might be that there is a different way to approach algebra that helps high school students glean some of its conceptual value, that there is a problem with how it is commonly taught or imagined. I’m sympathetic to that general question about most high school education. For example, while I think the study of literature or history and the craft of analytic writing should have a progression throughout high school, there’s plenty of room to question what kinds of literature students should read, or what ways they ought to study and know history. But this sort of terrain is way too sophisticated and subtle for Hacker, who is really doing a lot to degrade the brand value of expertise lately.

Posted in Academia, Politics | 16 Comments

Don’t Bring Policy to a Culture Fight

E.J. Dionne suggests that gun control advocates have given up and are “rationalizing gutlessness”.

I’ve moved in my own life from being intensely certain that comprehensive restrictions on gun ownership were an important political objective to being indifferent to the issue. The reason is not that I think it would be a bad thing to change national and state policy on guns. Moderate licensing and regulation on par with what we ask of the owners and operators of cars, and restrictions on certain types of armaments and ammunition, still seem reasonable and useful to me. Over time, however, I have become more sensitive to two things. First, there are many responsible, careful gun owners, motivated in some cases by hunting and in other cases by the pursuit of security, who have felt underappreciated and stereotyped by gun control advocates. Second, and more importantly, the entire issue has become a fiercely powerful synecdoche for much vaster complexes of sociocultural identity, conflict and anxiety, and it got that way through organic, complex currents flowing up out of American history, some of them deep and some of them relatively recent. That makes it a disastrous target for top-down changes in policy or law.

This is a wall that Americans as a whole seem increasingly determined to smash their heads into. I’m going to be grossly simplistic about the history of the last 75 years for a moment. Starting in the late 1940s and accelerating in the 1960s, US liberals and progressives managed to demolish much, if not all, of a massive web of legal and governmental structures that actively enforced discrimination, racism, sexism, and inequality. The problem that arose in the wake of that change was that discrimination and inequality did not disappear and in some cases, seemed frustratingly likely to persist. Crudely speaking, the only people who were content to stop at that point were extremely strong believers in negative liberties, e.g., those who felt that once you removed strong governmental or other structural impediments to the social freedom of all citizens, it was up to the citizens to enact their own liberty. For anyone else, there were only two broad options for further action, in the direction of positive liberties: statutes and policies designed to move society towards equality and true freedom, or programmatic attempts to instrumentally change the culture and consciousness of people and institutions that were reproducing inequality and discrimination.

I know this is familiar ground for me at this blog, but initiatives in both of those directions in the 1970s and 1980s were an important force in the long-term coalescence and political empowerment of American cultural conservatism. There’s a Newtonianism at work here: each attempt to use policy and law to create or empower social transformation, or to consciously alter culture in a predetermined direction, has created an equal and opposite attempt to do the same thing in the other direction, to use policy and law to compel others to follow conservative moral and cultural practice, or to use civic and educational institutions to secure the content of culture and consciousness.

You can disagree with my implied view of the initial causality here and argue that the interventions of the 1970s were just one more waltz in a long dance of hegemony and counter-hegemony. Or get irritated with the compulsive “balancing” going on here. I’m particularly sympathetic to the latter objection in the sense that I’m not implying moral equivalency between the two “sides” in this push-and-pull.

The point here is that this is a political fact: that when a particular practice gets deeply, powerfully written into culture, identity, consciousness, you generally cannot force it back out again through government or civic dictate. The harder you try, the more you provide thermodynamic fuel that makes your target stronger and more resilient. This should not be news to my colleagues and friends who are social and cultural specialists in history, anthropology or sociology. We can see this kind of dialectic at work very powerfully in the societies that we study in the past, or in some other part of the world. Frequently, our sympathies are with the people and communities that governments and civic institutions are trying to change. Even when we’re uncomfortable with some of the moral or practical consequences of their beliefs and practices–say, with witchcraft discourses; gendered differences dictated by spiritual or religious belief; hierarchies in the domestic sphere and family life; mutilations and alterations of the body–we tend to acknowledge both that these practices acquire new vigor and meaning when they are the focus of strong “top-down” efforts to change or eliminate them and that such efforts at forcible change typically overlook the subtlety and richness of the cultural worlds they are striving to transform.

There are practices you can change by fiddling with tax incentives or promoting public education or partial bannings. They tend to be practices that are either highly marginalized already at the time that the state or civil society takes an interest, or practices that have a relatively shallow historical rooting. Gun ownership in America is neither of those things. It doesn’t matter that there are places in the world with few guns and little gun crime: the histories of gun ownership and of state-society relations in such places are different. You can’t simply transpose one onto the other by policy and law.

If you protest that this condemns us simply to more serial-killer shootings in public places, more uses of guns in urban violence with innocent bystanders falling right and left, you’re right. We are condemned, at least for now. There is nothing that can change that: no law or police-force or government agency big enough for it. Any more than there has been a law or police-force or government agency big enough to win the “drug war”, make people stop having racist thoughts, stop people from wanting to view pornography, or make people stop eating Chick-Fil-A.

When lots of people are doing something and valuing it as a part of their lives, it cannot be changed by fiat, no matter how good the arguments for changing it look on paper.

What I think we lack sometimes is confidence that in the long run of things, a certain kind of homespun wisdom wins out in culture. When you look at the social transformations of the last two centuries in many societies, there are some you can credit to the forcible intervention of the state or dominant social classes, but a lot that just sort of incrementally and complicatedly happened. Sometimes because things that used to make sense just stopped making sense, or the cost of a certain kind of practice became higher for pervasive and unplanned reasons. Sure, you can keep talking about why guns are a bad idea, or why the fantasies of certain gun-owners are just actively dangerous or wrong. (Say, that it would have helped anything for there to be three or four guys carrying handguns in Aurora. Anybody with police or military experience, any responsible gun owner, knows that’s stupid bravado.) The conversation can continue. I think the more curious, the more exploratory, the more interested in the range of actually-lived practices people are (on all sides), the more possible it becomes for real change to occur, for the great knotted muscle at the heart of contemporary American life to relax, unwind and open up.

Posted in Oh Not Again He's Going to Tell Us It's a Complex System, Politics | 6 Comments

Listen Up You Primitive Screwheads

(Army of Darkness reference for the uninitiated.)

I hereby volunteer: the next pundit who talks about how MOOCs are going to save higher education some big bucks needs to meet me for drinks at the establishment of his or her choosing, I’ll foot the bill, and in return I just ask for the chance to politely and rationally CHEW THEIR FUCKING EARS OFF. And then if they really want they can write an op-ed the next week and pretend they thought of everything I said by themselves and I’ll never let on otherwise.

Do you really WANT TO SAVE SOME MONEY using INFORMATION TECHNOLOGY? Ok, try this one on for size. Why weren’t you blathering on asking why the heck we all bought Blackboard (or, if you really want to go into the dark ages, WebCT) for years and then kept buying it when we had a less expensive (though not free, if you look at support and management costs) open-source alternative? Especially asking why institutions that didn’t even necessarily need a course management system bought them, got stuck with them and came to see them as indispensable when at least some of the time they were really just exotic devices for password-walling-off fair-use excerpts of material used in classes?

No, no, even better. All the institutions that can create consortia and companies to offer MOOCs seemingly on a wild impulse: try asking why they have been incapable of creating far bigger and more ambitious consortia for open-access publishing of scholarly work, something that’s been technically and institutionally plausible for a decade. I’ve always heard that the first problem is the stubborn desire of individual institutions to go it alone, to maintain their independent identity. But suddenly hey presto! MOOC-collaborations galore. Maybe it’s because the for-profit publishers whose monopoly pricing has punched hundreds of universities in their unmentionables didn’t want an open-access world to come into being, and whispered in the right ears. If the idea of big savings and ethical transformation in higher education bundled together makes you so hot you want to call your publisher right now and pitch “The World Is Open” or some such thing, this is your meal ticket, not MOOCs. MOOCs are the freak-show tent off to the side by comparison.

If you want to talk about savings, those are the two big areas: platforms and products that could be hacked out cheaply, if only faculty and staff user communities were as flexible, adaptable and mildly literate about information technology as everyone else in the world, and open-access publishing created and maintained by truly massive consortia of higher education institutions, which those same communities should be universally pressuring for.

But that’s not what the mainstream media pundits are blabbing about everywhere because none of them know shit about higher education budgets and none of them know shit about information technology and none of them lift a finger to know anything more than whatever it is they heard from some guy whose brother’s friend knows a guy who knows a guy. They just open their columns to the most top-level stream of today’s information buzzery and let it dump into their column inches like an overflow sewer in a hurricane.

Again, pundits, let’s talk. MOOCs are damn interesting, you betcha, but seriously, if you think they’re about to solve the labor-intensiveness of higher education tomorrow with no losses or costs in quality, you have a lot of learning to do. Not just about the costs and budgets of higher education today, but about the history of distance learning. Right now you guys sound like the same packs of enthusiastic dunderheads who thought that public-access television, national radio networks, or correspondence courses were going to make conventional universities obsolete via technological magic. And hey, if you’re that keen on the digital, skip the drinks, I’m happy to educate you via email.

Posted in Academia, Information Technology and Information Literacy, Intellectual Property | 15 Comments

Tales of the Burning World

One of the hardest things for academic historians to accept is that their characteristic engagement with the past is deeply, arguably inextricably, interwoven with the very particular ways that nations and modernity use history as a tool. E.g., both nations and modern societies as a whole have a very active stake in the maintenance of selected old buildings, historical landmarks, and archives, for slightly divergent reasons. Nations use history as a kind of grout to connect their fragments, cover their gaps, prevent leaks, make a (seeming) whole. Modernity uses history to gloat and reflect about its transcendence over the past: it points to carefully preserved and bracketed-off relics and monuments in ways both melancholy and triumphant.

I was really struck by this familiar point anew while travelling in Japan earlier this month. Again and again, when you read the fine print at various sites, you note that you are seeing a reconstruction of a building that burned. Burned because that’s what urban buildings made of wood regularly did in all premodern societies, including Japan, no matter how exalted or remarkable they were. Burned on purpose, as part of political, religious or military struggles, right into the recent past with World War II bombings. The reconstructions in Japan and elsewhere are often multilayered: Kenrokuen Garden and Kanazawa Castle were built, rebuilt and burned at different moments, by different kinds of regimes and rulers.

It’s a sign of modern and national consciousness’ stake in preservation that when we are made aware of such reconstruction, many of us go looking for the most authentically ancient or “real” artifact or site within a place marked off for its historicity, and discount the most recent reconstruction. Some of that reaction is born from our more intuitive ability to “read” the emendations and sanitizations that recent rebuilders enact when they remake a burnt or destroyed site. But even there we are often only sensitive to the most obvious political or philosophical elisions. Many sites now valued for historical authenticity have invisible infrastructural amendments that make them safe for visitors, which forbid or block access to risky practices commonly associated with the site in the past, or which completely transform the ambient and sensory environment which would have existed in the past.

The hunt for authenticity and the scorn for reconstruction, with all their prickly fetish for materialist accuracy, are part of the same bundle as the drive to preserve and mark off the historical. Reconstruction (and destruction) are the true “authentic” of history, particularly urban history. This isn’t just Japan, but everywhere. Human societies have mostly been like the king of Swamp Castle in Monty Python and the Holy Grail: rebuilders after fire and failure and vandalism.

Somewhat awkwardly at my talk in Japan on digitization, I tried to make a neophyte’s use of the Japanese literary and philosophical concept of mono no aware to suggest that archivists, librarians and scholars should be less avid and obsessive about the need to collect, preserve and migrate the entire flow of information, communication and knowledge through digital spaces. What I think is sometimes misunderstood is that the technical and organizational problem of digital preservation isn’t an automatic result of a vast “information explosion”; it’s a result of the hubristic drive to collect everything, to totalize the archive, the collection, to preemptively and continuously ossify the present for the sake of some future’s ability to know history.

But historical interpretation gains its imaginative force and emotional power from archives, records and ruins that are fragmented by accident, loss and distortion. I’m not saying that I don’t find it frustrating when I can’t find a document that ought to exist, that did exist at some point, but it is my very frustration that drives my work, reminds me that to make sense of people in the past is as much an art as it is a technical sifting and distillation of fact. Much, I think, as it is in the present.

When preservation loses sight of the value of impermanence, ephemerality, and replacement, when it takes too seriously the grandiosity and overreach of both nation-making and modernity, it becomes both a danger to a richly human understanding of our actually lived past and a piecemeal assassin of the living and changing present, trying to make the material and informational world we inhabit into a stately mortuary. A measure of preservation, unafraid of necessary or pleasing reconstructions and annotations, is a very good thing, but it ought to be guided as much by whimsy and opportunity as by some comprehensive protocol.

Posted in Academia, Digital Humanities, Information Technology and Information Literacy, Production of History | 1 Comment

Showing the Money

So I sounded a note of confidence in my Chronicle of Higher Education piece that faculty are perfectly capable of constructive participation in hard fiscal choices and being responsible custodians of the liberal arts ideal within resource constraints. How sure am I that this is the case?

Well, first I’m sure that most of the people who complain that faculty are consistently irresponsible aren’t basing that on any specific or concrete reference point. A lot of the time this is just a kind of folkloric talking point or free-floating anti-intellectualism, an immaculate ressentiment. Or they’re exasperated administrators blowing off steam, administrators who have had to deal with the neediest, most insecure or most bullying faculty and have let those experiences inform a general portrait.

More complicatedly, there are some institutions where faculty have never really been given the chance to participate meaningfully in budgetary planning, so either way (optimistic or pessimistic) the hypothesis of their likely participation isn’t fully tested. And often when faculty do get to look at budgets, they only see their own, not the bigger picture, and therefore often suspect (sometimes correctly) that the real budgetary challenges lie somewhere else within the institution. That can apply within faculty budgets, also. I’ve always found it a bit frustrating when we (and many other institutions) ask individual departments to assess their needs for journals or other library materials when the costs of subscriptions in different disciplines are so radically different. You can’t really judge a willingness to participate when you aren’t allowed to address the big picture.

That said, I don’t buy the Marc Bousquet-ish argument that it’s all about the administrators, about conservative attacks on the public sector, that it’s possible to hire far more faculty across the higher education sector with good wages, benefits and job security, to get back to a golden age. I grant that most versions of austerity are a rigged game, but I think progressives have a duty to think about limits to resources. So do faculty think that way? Can they think that way?

“Can they” is easy. Yes. Do they? Sometimes. Late last semester, I was surprised when within three days, three different colleagues of mine all expressed the view that nothing has really changed about the revenues available to an elite private institution like Swarthmore and that they believed that the institution had considerable reserves of untapped wealth that could and should be used to hire more faculty and expand the curriculum. I’ve had far more colleagues acknowledge the opposite, that we have to operate within limits. But it’s still surprising to run into the former belief (which is even more common among our students): it’s a form of magical thinking. Wealthy institutions tend to have spent up to the limits of their wealth, just like struggling colleges and universities are generally straining against the limits of what’s possible. There may be other things to do with that wealth than what’s being done with it, but generally there isn’t a secret pile of cash just sitting in the basement.

So has anything changed about revenues for selective private higher education, whether small colleges or larger universities? Yes. Several fundamentals have changed.

First, for a variety of reasons, most potently stagnant wage levels and widening income inequality in the United States, tuition and board cannot continue to be increased well above inflation. There’s a political constraint and an economic constraint, particularly in need-blind institutions. At some point, increasing the sticker price at a need-blind institution, if you don’t quietly fudge your formula for financial aid eligibility, might well cost you more in financial aid than it brings in. Increasing the number of students might also have a higher net cost than gain depending on how it’s done, not to mention that it might affect the quality of the education provided.
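To make that sticker-price arithmetic concrete, here is a toy back-of-the-envelope sketch. Every number in it (the enrollment, the share of students paying full price, how much of an increase flows back out as aid) is invented for illustration and drawn from no institution’s actual budget, mine included.

```python
# Toy model of a sticker-price increase at a need-blind institution
# that meets demonstrated need. All numbers below are hypothetical.

students = 1500
full_pay_share = 0.45        # hypothetical fraction paying the full sticker price
aided_share = 1 - full_pay_share
aid_passthrough = 0.90       # hypothetical share of a price hike that aided families
                             # cannot absorb, so it returns to them as added aid

price_increase = 2000        # proposed sticker-price increase per student, in dollars

new_revenue = students * full_pay_share * price_increase
new_aid_cost = students * aided_share * price_increase * aid_passthrough

print(f"Gross new tuition revenue: ${new_revenue:,.0f}")
print(f"Added financial-aid cost:  ${new_aid_cost:,.0f}")
print(f"Net gain (or loss):        ${new_revenue - new_aid_cost:,.0f}")
```

With these made-up numbers the increase actually loses money; tilt the full-pay share or the aid passthrough the other way and it gains. The only point of the sketch is that the answer turns on the mix of full-pay and aided students and on how much of any increase returns to families as institutional aid.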

Second, public sector support for a variety of expenses in higher education has declined and is likely to continue to decline. This has a really direct and painful impact on public universities and community colleges, but measurable consequences for private ones as well, however wealthy or struggling they are in terms of their own resources. You can argue against this shift, but for the moment, I don’t see it going in the other direction.

Third, this is a much tougher environment for philanthropic contributions. Many alumni don’t have the money to give, and some others have decided that whatever they have to give ought to go to more genuinely needy causes, a view that I think is hard to argue too strongly against.

Fourth, endowment income is at the least a much scarier thing to rely upon. Institutions that didn’t build big endowments in the 1990s or that lost a disproportionately higher amount in 2007-2008 are going to find it very hard to catch up, and those that did are much warier than they were a decade ago about using this income to incur obligations that they can’t shed if there’s another big contraction.

I think there’s a more subtle fifth issue, which is that if academics are going to say that universities should not be run like a business, that we should resist commodification of education, that we should not be corporatized, then it’s up to us to imagine how institutions can exist in the world without growth. There’s something that sticks in my craw when faculty denounce the logic of capitalism and then turn around and essentially see their own institutions as always growing, always getting bigger, always entrepreneurially snapping up new areas of research, new opportunities. The alternative isn’t just being static. I think we need to figure out how to embrace change without growth, dynamism without expansion. There may be ideal scales of institutional size that justify growth at particular times in particular ways (I’m finally convinced, for example, that Swarthmore maybe needs to grow by a few hundred students over a decade or so), but even if we were not up against limits in revenue, I think one of the challenges of the 21st century is to figure out how to achieve transformation and innovation within limits.

Are there disproportionate cost increases in higher education? Yes. If you believe, as I do, that high-quality undergraduate education is necessarily labor-intensive, then you are going to be exposed to health care cost increases to a very great degree. Energy costs hit residential higher education hard. Various kinds of regulatory and statutory compliance have hit higher education to a disproportionate degree. And there are unique costs, most notably with libraries and instructional technology. (Though in my humble view, that’s one area where higher education could hit back very hard against external service providers and get a lot of budgetary relief.)

So the budget crunch at wealthy and struggling institutions is real. What are the things that responsible faculty could do as institutional stewards that might help? Yes, yes, I know, cut administrative jobs, increase public support and all that “the other guy goes first” stuff, but let’s leave that aside for now.

a) Teach more, research less. I will now be entering the Witness Protection Program.

To be clear, I don’t mean this to apply to Swarthmore or really any teaching-intensive undergraduate college. This is about large research universities where faculty have teaching loads of 1/0, 1/1 or even 1/2. I know, some research brings revenue to the university and having the principal investigator teach more costs money rather than saves it. And if I move to the next obvious thought, higher teaching loads for faculty who don’t bring in research revenue, a raft of concerns about commodification, equity and so on come roaring into view. But it’s one way to increase the number of courses available to undergraduates and graduates and preserve the quality of the curriculum, to demonstrate the value of tenured faculty to the educational mission. I’m going to be blunt here: we have way too much research being produced in most fields, if you consider it just as an intrinsic good in its own right. Yes, research is also a big part of being an active, engaged teacher, but the kind of research that makes people good “teacher-scholars” seems to me to be much more flexible in terms of the scale of effort and time that it requires, and allows people to teach more than they sometimes do.

b) Recognize that curricular design has costs.

Both at Swarthmore and elsewhere, I see a lot of faculty who don’t really appreciate this point. Many folks defend the design of their departmental major or of general education in purely intellectual or scholarly terms, that they set requirements as they must be set to ensure the quality of learning outcomes, that there are things that students simply must know or do if they are to study a particular topic. I’m sometimes convinced that this proposition is in fact true, but even when it is, I think there’s probably unexplored flexibility (three required classes instead of five, one introductory course instead of two, that kind of thing). Sometimes I think it’s flatly not true, that a department has talked itself into believing that there is only one possible way to teach the discipline, regardless of how hard it is to staff the resulting curriculum. When faculty have strong autonomy in matters of curricular governance, it’s up to them to always keep the costs of a particular design in mind. The longer the sequence of requirements, and the more specialized the instruction required in that sequence, the more expensive it is to sustain.

c) A liberal arts approach doesn’t fixedly require any particular subjects to be taught.

This doesn’t mean that you teach any damn thing: what you teach still has to be recognizable in terms of the history and uses of knowledge in the contemporary world. I’ve talked about this recently so I won’t go any further, save to say that this is an area where I think faculty can be frustratingly contradictory: arguing against canons and core knowledge when it would constrain their personal autonomy and interests, arguing for them when they’re trying to get resources for their own interests. But this is where words like “flexibility” and “nimbleness” can have some fiscal as well as intellectual meaning: not that this is an argument for contraction, but it is what allows resources to flow where needed within a university or college faculty. Taking this view seriously also imposes a non-budgetary obligation on all faculty, which is that everyone has to contribute to the overall institutional curriculum and institutional culture, not just to their particular specialized subject. In many ways, the point here is that very strong disciplinary specialization is expensive, whatever its other intellectual merits and demerits might be. If it’s expensive, the arguments for it need to be correspondingly strong.

d) Be preemptively generous about inequities of funding within faculty budgets.

Swarthmore, for example, has a fixed per-faculty allotment for travel and a fixed per-faculty fund to support research expenses that you can apply for. I’ve long thought that we might use that money more effectively if we had some kind of need-based flexibility. I know very well, for example, that a ticket to South Africa has a really different cost than a ticket to Chicago. It’s true that adjudicating different levels of need would impose a cost in time and hassle, and might actually end up being more expensive. But we make a big deal out of the autonomy and responsibility of faculty. On paper at least, it might be nice if we could self-police our actual needs for support of this kind and not require an elaborate deliberative process.

e) Streamline faculty participation in administrative work.

Speaking of elaborate deliberative processes, though faculty love to grouse about committees and service, at least some of the service work of faculty at many institutions is work they’ve imposed upon themselves. Where it’s possible to streamline, where there are decisions that don’t really have to be made laboriously from scratch year after year, that’s a potential savings of time and effort that could go into more productive or generative work. This is especially true when folks believe that the elaborate process in question is largely about monitoring or restricting the activities of some other department or division.

f) Beware of wandering or misinvested ego.

Individual faculty should have great pride in their work as individuals, in their teaching, their research, their service. That’s healthy. What’s not so healthy, and is sometimes costly, is when individual ego gets attached to departments, disciplines, divisions or other large-scale institutional structures. We should defend our actual colleagues and their practicing autonomy as professionals, but it really shouldn’t be so important whether a given department is at X or Y size compared to a similar or related department. That kind of internal competition often really does impose all sorts of subtle and occasionally obvious costs on the larger institution, often to no useful end.

I don’t know in the end whether any of these shifts in cultural outlook add up to much on the bottom line, but they strike me as the most potent ways that cost-consciousness can and should be a part of how faculty act as stewards of the larger institutional mission.

Posted in Academia, Swarthmore | 5 Comments

Etsy Education

Let me revive another point I’ve made before at this blog about online education of any kind, including blended learning and “flipped classrooms”. This thought may be a bit less comforting to folks at the University of Virginia, though it is of no comfort whatsoever to the members of Virginia’s Board of Visitors.

One of the red herrings in the debate about any form of instructional technology, online instruction, machine learning or distance education, whether it’s as modest as the use of PowerPoint or as dramatic as MOOCs, is that they intrinsically and inevitably lower the quality of instruction in comparison to face-to-face teaching and assessment.

I’m not the first nor will I be the last to observe that much of what is seen as negative or lower-quality about instructional technology and automation in higher education is simply building upon very long-term changes in teaching and assessment that predate digitization and the Internet. Marc Bousquet put it beautifully in the Chronicle of Higher Education recently: “Machines can reproduce human essay-grading so well because human essay-grading practices are already mechanical”. For three decades, state laws and institutional buyers have increasingly required the graders of large-scale tests of all kinds to follow algorithmic constraints in how they mark writing and other responses, for the sake of consistency.

That’s only one example. One thing that very well-designed MOOCs may accomplish is to demonstrate that there is not that much difference between a MOOC and a lecture-based course with 800 students in it, in which the only face-to-face contact that students have is with teaching assistants of highly variable quality. In some contexts and subjects, the MOOC might well be superior, particularly if its makers spent as much on assessment as a university might on hiring teaching assistants: an educational institution can work very hard to perfect the content presentation in a MOOC, make it richly multimedia and engaging, whereas a university can only be somewhat assured of the quality of all the lectures in all its courses.

I’ve argued with Margaret Soltan about this a number of times, but I think there is only a slight distinction between a bad PowerPoint presentation and a bad lecture, and that to see the technology as responsible for bad pedagogy is to mistake a symptom for a cause. As a graduate student, I had to work with a professor who delivered, verbatim, lectures he had written ten years earlier, with almost no feeling of a live or interactive performance. We could have played the tapes of the lectures (which he had in fact made) with a picture of him on screen and noticed almost no real difference. That’s before PowerPoint or the Internet, at the dawn of computers.

Instruction at large universities has in many cases been heading steadily in this direction for decades: large classes, remote lecturers, sage-on-the-stage pedagogies. If MOOCs and other instructional technologies step into that trend, they are at their worst doing little more than continuing it. At their best, in the case of sensitive or interesting versions of blended learning and flipped classrooms, they may well be actively reversing it.

What a MOOC might do in some cases is clarify for students (and their families) that they have been paying higher and higher prices for a product whose quality has been dropping in inverse proportion as faculties have been adjunctified, classrooms overstuffed, instruction commodified, and so on. This doesn’t mean that there isn’t education out there that’s worth the price, but it might clarify that that education is a handmade, artisanal product that can’t be scaled up for all or most of the students seeking an undergraduate degree. MOOCs might pose a valuable challenge that makes institutions that charge foie gras prices prove that they’re not just serving McNuggets.

For an institution like the University of Virginia, what that means is that you really need someone more like President Sullivan, who can figure out how to sharpen the sense of what a liberal arts education is and why it matters, to demonstrate the special value of a handcrafted education while working to make it financially viable. You don’t need strategic dynamists who are just dreaming up new ways to brand the McNuggets that they’ve already decided have to be served. But if you push back as an artisan, you have to figure out how to continue to make artisanal products in a mass-production world, which is no easy challenge.

Posted in Academia | 5 Comments