What Won’t Change (II)

Progressives, conservatives, independents, libertarians, the disgustedly disengaged: whatever your political affiliation, you need to stop waiting for a Presidential (or Congressional) election to bring you closure. The day after, everyone you hate and fear politically is right here, just waiting to vote again.

I’ve listened to liberals confidently predicting that the demographic base of American conservatism is right at the edge of crumbling. Next time or next time or next time, they say, it’ll be safe at last to believe in sane health care policy, the extension of rights to gays and lesbians, meaningful regulation of crony capitalism, strong environmental protections, and so on. I’ve read conservatives confidently predicting that liberals are only one major defeat away from moving to Europe or retreating into their walled communities–or worse yet, more extreme conservatives fantasizing that the next victory will give them the power to forcibly repress an opposition that they view as seditious and illegitimate. The disengaged dreamed last time that they’d see a new politics full of meaning and substance: they might believe it again from someone else.

The foundations of American political division at this point are deep and abiding enough that nothing like an election every four years is going to move them much. Nothing that any President does or could do, any Congress, will shift those foundations much, though some politicians have and will continue to make them even stronger and more immobile. No policy, no law, will change the math that will make most elections hard-fought, ill-felt and close.

It’s hardly surprising that there should be so many fantasies, some light and sardonic, some ugly and sincere, of hard overrule by one faction over the other. Because the only way out is through one of two gates: a bare, fragile majority (or a large plurality that locks in an advantage in the Rube Goldberg machine of American politics) forces itself on the communities and people who reject it, or we work out some kind of renewed social contract among a much larger center, to hold strongly together as a people and a nation. The latter doesn’t seem to interest much of anyone at the moment. The current incumbent, as I see it, made some pretty earnest attempts to move in that direction and got little but scorn and disregard for it–some of which I’ve joined in because there’s no point in sitting down to strike a bargain with someone who has no intention to strike one. Sometimes you only get to a peace because enough people come to see the costs of war without end–the hard way. In any event, it’s not up to Presidents or four-year election cycles to accomplish that work–either a new covenant or a slow-motion civil war. It’s up to us.

Posted in Politics | 5 Comments

What Won’t Change (I)

Whatever happens in the U.S. elections tomorrow, two fundamental things will not change.

First, the national security policy of the United States is unlikely to change in any systematic or meaningful ways, meaning both the approach of the government towards civil liberties and its military posture abroad.

The appointment of particular individuals to specific executive offices by either possible President may make for slight differences of emphasis in the professionalism, competence and interpretation of the consensus policies of the American government. That’s about it.

This is not a Naderite assertion that there is no difference between the two parties. This is not about parties any more. If by some bizarre fluke Jill Stein or Ron Paul were elected tomorrow night, they’d very likely be forced to continue almost all of the security policies of the Bush and Obama White Houses. They could give orders to the contrary, but they’d be overriden actively by Congress and possibly the federal judiciary as well. They’d be ignored or circumvented by the military, intelligence services, foreign policy professionals and rank-and-file law enforcement. They’d be shouted down by the numerous popular and local constituencies that actively depend for their livelihood on security spending–arguably even liberals should hesitate to massively redirect the flow of that money for the same reason that they argue against austerity in general. Any given President might succeed in ending one wasteful war–or might foolishly rush into one. But the basic tenor of American policy almost cannot be moved.

The last time it was about parties or political factions was in the aftermath of 9/11, when I do believe there was a choice that could have been made and might have been made differently by a Democratic administration. I say might have been not just because of the difference in the personalities of the important executive authorities of the Bush White House and the probable Gore White House, but because I think for various reasons the Democratic Party elite of the early 2000s would have kept the harder authoritarian kinds of neoliberal folly further from political influence (but not entirely cut them out of the loop).

It’s not about parties in part because of what was done after 9/11 and in part because of what was done in the two decades before 9/11, by both Democrats and Republicans, and by people who aren’t particularly interested in party politics. The current direction of the American hegemony was built not just by massive increases in spending but also by lobbing punitive cruise missiles at the Sudan or Afghanistan based on weak intelligence for the sake of security theater and then asserting a unilateral right to do so. The only real feint in the direction of some more pragmatic, multilateral kind of neoimperial hegemony was curiously enough from the first Bush Administration, whose Gulf War now seems less like a prelude and more like the road not taken.

Where the foundation hardened into a set form was with the high-level sanctioning and legalization of torture, rendition, indefinite detention, assassination, domestic surveillance and first-strike cyberwarfare. The national security systems of most 20th and 21st Century nation-states have indulged in most or all of these activities to varying degrees, no less the United States. There is a huge difference, however, between doing them against the law, out of the sight of top-level authority, and reining in or redirecting such efforts when directed to do so either by civilian leaders or in response to moral and political pressure. There is a big difference between an individual doing something they believe necessary but illegal, immoral and dangerous and having the President of the United States and his top officials actively affirm the systematic right to do any of these things at will and without any possibility of oversight or review.

That’s where we are now, whoever is elected tomorrow night. We are where some empires eventually arrive: enslaved to the perpetual threat of an ever-wider war with our frontiers and ourselves. As Frederick Cooper and Jane Burbank’s superb new history of empires in world history observes, not all empires crumble from their own contradictions: they are as varied in their stability and character as nations are (in part because there have been more of them in world history). But the U.S. is now tracing the contours of a familiar kind of structural crisis that tends to be resolved in one of two ways: the empire kills its own power by mindlessly racing far beyond what its resources allow, its savvier clients and rivals deliberately bleeding it dry by exploiting the helpless giant’s inability to exercise judicious restraint and pragmatic self-control. Or it turns ferociously inward into itself when its own core citizenry finally recognize that their wealth is being squandered at the frontier and their freedoms are being whittled away in the name of safety. I don’t see any political leadership capable of threading either of those needles before events provide a far more drastic and unpleasant resolution.

Posted in Politics | 1 Comment

The Whole World In Your Hands

My least favorite genre of online discourse, whether it’s on Facebook or email, is the hortatory appeal. Sometimes this comes before a petition or request for donation. More often, coming from the liberals and progressives who make up the majority of people in my networks, it is a sort of shaming exhortation, that there is some neglected object or story or problem to which we must now urgently call attention. The hortatory address to a network generally exempts the people reading the message from the accusation: because we are reading it, we are presumed to be paying attention to what our fellow citizens (in our towns, our states, our country, our culture, our world) are ignoring. With that presumption, we are also discouraged from disagreeing or questioning the appeal: it is offered as common sense, preceded by an invisible “of course”.

Sometimes I’m perfectly happy with the message or idea behind the appeal. If Glenn Greenwald pops up in my feed asking why progressives give the current administration a pass for behavior that they were stirred to fury over in the previous administration, I’ll look over any hortatory rhetoric involved because I think it’s a fair point. The blindspots that hortatory shout-outs address are often that way: strange absences in our collective thinking, distasteful contradictions. The energy spent worrying about the costs of mitigating natural disasters or maintaining infrastructure seems orders of magnitude greater than the energy spent worrying about the massive costs of military action and security aimed at countering terrorism.

But the shout-outs usually imply that these blindspots are the result of malicious action by the powers-that-be, that to call attention to them is to reveal a secret, to midwife an epiphany. Often there are far deeper foundations embedded in our culture, our hearts, our minds. Sometimes our blindspots make some kind of sense–and sometimes the exhortation is calling for a kind of imagination or selfhood that really doesn’t make much sense or have much plausibility once you stop to consider it.

I’ll give the example that was on my mind earlier this week, enough to trigger a small rant on Facebook. Friends passed on several messages pointing out that Hurricane Sandy caused devastation in the Caribbean too, that there were other tropical storms elsewhere in the world that had or were at that moment causing devastation, and that these events were largely absent from our national conversation and our collective expression of sympathy.

It’s always at least empirically right to point out the uneven attention of the American mass media on this and many other issues. Sometimes those asymmetries are profoundly consequential: some crimes are, in the imagination of the media, spectacular news when they happen to or are committed by people who aren’t “supposed to be” criminals or victims, and ordinary or banal otherwise, and that in turn shapes systematic outcomes in our criminal justice system. And at least some of the worst of this unevenness is peculiarly pronounced in US mass media. European mass media, for example, pay much more systematic attention to the news of the world as a whole than US media do, and that also surely has consequences for American policy and American politics.

But step back for a moment and consider the self that we are being exhorted to have in paying an even, distributed attention to the whole of Sandy, or to all disasters everywhere. This is the selfhood of liberalism: cool, rational, objective and striving to universality even in its tears, its pangs of conscience, its allowances of sympathy. What you feel for a neighbor, a countryman, a familiar place, it’s suggested, you ought to feel for the stranger undergoing a similar crisis, particularly if the stranger is more vulnerable to its devastation.

The late George McGovern starred once in a wonderful sketch on Saturday Night Live when he hosted the show. The premise of the sketch was that McGovern had been elected President and ushered in a utopia. At one moment, an aide bursts in with news of an emergency: a child had gone to bed hungry last night somewhere in the world. McGovern springs up in outrage and demands that his officials get out there and fix the situation this instant.

What that sketch (and McGovern himself, who was clearly having a ball during it) took as parody, some of the more earnest hortatory calls deliver as a sincere call to arms, with all the solemnity of a starched-collar missionary lecturing to the heathen. Here is where liberal cosmopolitanism, made into a passion of dispassion, becomes less the deeply-felt ethical project that Anthony Appiah has outlined and more a backdoor conferring of privilege upon those anointed as cosmopolitans. Even though other societies, other mass media, other local cultures may be less parochial than Americans often are, just about everybody cares first, second and last about the local, the known, the familial and familiar. There isn’t a town, neighborhood or country on Earth where folks would put aside worrying about the neighbor or friend or countryman’s house that fell in an earthquake simply so they could devote an exactly equal measure of feeling for a stranger’s house that fell in another earthquake a thousand miles away. Human hearts are big, with room for deep wells of compassion, and the common experience of suffering and crisis can create abiding bonds between strangers. But we get there just fine in our own way and time.

The urgent demand that we should always feel at all times that equality of sympathy is another way of saying: we should be better than anyone else. Better than everyone else. It’s the humanitarian equivalent of wanting a Ferrari while others drive Yugos.

Posted in Politics | 4 Comments

A Month of Blogging, Ten Years On

So as often happens with this blog, I get busy and other things occupy my attention. It’s all there in the title, folks.

What also happens is I store up a lot of things I want to talk about in this space, so I’m going to try and unload a lot of my mental warehouse over the course of this month.

It’s a good month for it: ten years ago, I began this blog, publishing it first as a handcrafted HTML page with no comments, then moving to WordPress in 2005.

The blog was a replacement for some static web pages I maintained at Swarthmore from about 1999 to 2002, which included a handful of essays, an early set of digital syllabi and an inexplicably popular set of restaurant reviews. It was also a replacement for my participation in several virtual communities, most notably Howard Rheingold‘s Brainstorms. I’d found those experiences to be life-changing in the way that they rewrote the structure of my intellectual networks and showed me how my writing and thinking could be enriched by exposure to unexpected voices and new circuits of conversation. At some point, I started to find the closed structure of those communities a bit stifling, however: I wanted to jump into a bigger digital public sphere and find even more “strange attractors”, as well as seek out other academics with similar interests in building a new transdepartmental space for reworking the substance and practice of scholarly life. I had Justin Hall’s example near at hand to inspire me (and occasionally warn me about what the physical and cultural challenges of a life lived online might entail.)

I came into blogging at a time when there were a few dominant voices like Instapundit, and the overall space was small enough to have a generalized sense of the whole cultural ecosystem (and your own place within it). I started around the same moment when many others were doing so. Some of those early participants are still plugging away at blogs, others have moved on to other kinds of digital (and analog) writing: blogs grew, platforms metamorphosed and absorbed or superseded some of their cousins and cognates, and then slowly receded or were domesticated within larger publishing spaces. But one of the bad things about the drive for the next new thing is the quickness with which we assume that a new form of digital expression always overthrows its parents, like Zeus tossing Cronos into Tartarus. AOL is still there, LiveJournal is still around–and blogging as a synonym for short-form writing published digitally still has a significant place in the world of scholarship and in wider public reading and conversations.

I resolved at the beginning to write about what interested me at the moment, to live up (or down) to my title. To write as long or as short as I wished (usually the former: John Holbo and I often seemed a decade ago to be in a contest for digital long-windedness). The voice I consciously crafted might be called “remotely personal” and “conversationally formal”. I rarely wanted to talk as directly about my feelings or life as some early academic bloggers, especially many female bloggers, did, but neither did I want to be as formal and third-person as the blogs most concerned with scholarly reputation were. I wanted to write as I thought, to compose on the fly, but not to sound too sloppy or immediate. I wanted to be judicious, fair, to build bridges, to be mindful that I was speaking in a public space, under my own name and from within my own institution.

I achieved those goals, I think. (And there is an example of this prose styling: qualifying modifiers designed to soften claims, express uncertainty, hedge.) At some cost. The voice I crafted here was less urgent, more stilted, and considerably less humorous than I think I am in everyday conversation. When I wanted to be glib, cruel, polemical, or simplistic, I found it harder to satisfy those impulses here. Only I probably remember just how many times I’ve repeated certain claims at Easily Distracted, but my readers do remember my tonal and intellectual commitments and have been quick to call me out when I’ve strayed from the path I laid out for myself. That’s a great gift from my readers, a respectful sign of attention that I don’t fully deserve. I’m often haunted by something that a very angry Brainstorms participant said to me just before he was banned from the community (he had parting words for many): that I constantly strive to position myself as smarter than other people, but that I somehow want them to like me for it anyway. It stung because there’s some truth to it, and because I know that attitude limits not just how well I can connect to a room, a community, my society but also my capacity for understanding everything I want to know. The cliche that the older you get, the more you know how much you do not know is repeated so often because it is true.

And yet, perhaps because that’s how I feel, I can’t give up speaking about all the things that my distracted attention falls upon. Over time, I’ve diffused my speech out into more spaces. Twitter has been good for me: I don’t know why I avoided it as long as I did. Facebook is not so good for me, I think: I like being in touch but I’m so driven to start discussions, debate or disagree, in ways that I think are inappropriate to its emergent culture. Flickr and now 500px have been really interesting communities for me to explore, though 500px’s lack of any space for words and its minimalist infrastructure are driving me nuts. I keep a very small number of lightly pseudonymous investments going–Yelp reviews (still talking about restaurants) and a virtual community here and there, though I’m very turned off by virtual worlds these days, enough that I’m going to write a book (I hope) about that reaction.

Blogging still works for me. In the next month, I’m hoping to demonstrate a couple of new features or interests at this blog. I hope, whoever you are and for however long you are reading this, that it will continue to work for you too. I don’t publish to maximize attention or make ad revenue, but it would seem far less satisfying if this were little more than a private notebook sent adrift into an indifferent future.

Posted in Academia, Blogging | 11 Comments

Leisure and the Liberal Arts

Johann Neem argues at Inside Higher Education that the liberal arts have no economic value, that they are intrinsically tied to the achievement of a free, affluent society that is relieved of the burdens of scarcity and open to the fulfillments of leisure. This argument frustrates me in several respects (beyond the degree to which it commits rhetorical self-immolation in the present political dispensation in the United States).

1. I hate the idea that marking something off as a ‘public good’ sprinkles magical affluent-society fairy dust on it and relieves us of the burden of arguing competitively in relation to other public goods against a limited base of resources. There are other scarcity-based considerations beyond neoclassical economics to keep in play–sustainability isn’t just for environmentalists. Neem acknowledges that we’re not in the same affluent circumstances today as some misty GI-Bill public-goods-laden moment in the recent past but I think he overlooks that we never were. This sort of framing of the liberal arts as a contemplative refuge from the world, including its material circumstances, has a deep history–and a deep history of being in denial about the material and economic predicates of its own existence and about the ways in which the cultural and social capital of a liberal education were just as much aimed at securing a material and economic existence in the world as the education of bricklayers and accountants.

2. I hate the binarism here–that it’s either all market/entrepreneur/econocentrism/jobs or none at all. Excluded middles, fuzzy states, etc. Leisure, play, happiness, a holistic personhood, are worth rehabilitating as objectives of a sane, satisfied, wealthy, successful society. We don’t have to go the whole hardcore Huizinga route of insisting that play is never never ever practical, worldly or useful. We don’t have to rob an entrepreneur of their humanness or citizenship by the fact that they are involved with filthy lucre–or for that matter, accept the further implication that anyone who does narrate their own encounter with the “liberal arts” as having an instrumental outcome has betrayed that education, that the person so educated must always describe their experience as having no ends, providing no concrete usefulness, always outside the world. Among other things, this exempts the liberal arts from encountering on its own ground one of its persistent challengers, the argument that a liberal education is best derived from practice, experience, usage, materiality, that a practical education is or can be the very best kind of liberal education. This debate has persistently recurred at every moment that proponents of the liberal arts have tried to describe it as outside the world, above material or economic concerns, improvident or useless, contemplative or monastic, and a liberal arts education even in Neem’s terms cannot possibly sidestep or exclude this challenge by ruling it out of bounds without betraying its own deepest convictions. Meaning, learn to live with the person who asks, “What good is this learning?”, “What is its economic (or other) value?”, “Does any of this actually work out in lived experience?” because they are asking questions that the liberal arts ought to ask of itself without even being prompted to do so.
This is another Cartesian deathtrap we have to get ourselves out of: there is no salvation to be found in retreating back into the refined world of the quadrivium and sending all the dreary tradespeople off to vocational schools.

3. Arguing for restoring the public sphere and ‘freedom’ in this sense via focusing narrowly on education and the liberal arts looks self-interested (monetarily and otherwise) coming from academics. Because it’s too specific, too institutional, and too incurious about the possibility of other practices, institutions and dispositions that might get to the same end objective. If this is what we want, we need to be relentlessly looking at the big picture and stop seeming to just want to keep our own paychecks and practices intact in their current form. Maybe the liberal arts and their freedoms will thrive in some other habitus or institution yet to be, if that’s the thing to care about. Maybe if what we’re concerned with is the erosion of the idea of the public, then that is what we should be caring about first, second and last, with liberal arts education only one small possible component of a renovation of that great idea.

Posted in Academia | 2 Comments

On Swarthmore’s Sorority

There’s been a running discussion at Swarthmore for a year now, mostly among the students and some between the students and the administration, about the effort to get a sorority established at the college. National media recently reported on the decision to allow the group to proceed and thereby overturn a ban on sororities that was enacted in response to a student-led campaign in 1933. (The campaign was substantially a response to the anti-Semitism of the sororities of that time.) One of the things that some outsiders haven’t picked up on is that fraternities were not banned in this decision, and that we have them today at the college, which often surprises visitors. Our fraternities are non-residential and their membership is a relatively small proportion of the male students, though they do host a lot of the weekend parties. That’s the backdrop of the administration’s decision to allow the sorority to go forward: if nothing else, there’s a Title IX issue that can’t be finessed–if there are fraternities and some women want a sorority, that’s pretty much the end of the matter.

In the world of higher education as a whole, I think it’s pretty clear that Greek life is at best a significant source of some dumb, destructive behavior and at worst an incubator of sexual violence and thuggery. “Community service” for some fraternities and sororities at large universities is a cynical fig leaf covering butt-chugging, hazing and bullying. But I’ve certainly seen the other side of Greek life at times–not so much the alleged community service, but more the way the organizations can build strong social ties, mutual support, sustained attention to life-long friendship. Some of them have very powerful approaches to making and modeling community and intimacy. Plus the bad side of Greek life is hard to wrestle to the ground without coming off like a 21st Century Dean Vernon Wormer: it feeds off of transgression, seeks out outrage and disdain.

At Swarthmore, I’ve known and taught many of the men who belong to the fraternities and they’ve largely seemed good folks to me. I’ve never been anywhere near their parties or events and it would neither be my job nor my preference to know anything about what goes on there or in almost any other similar aspect of weekend student recreation. (I did come down to watch and photograph a part of the famous Pterodactyl Hunt this year, which has nothing to do with fraternities and is in any event mind-blowingly awesome, but also because my 11-year old was dying to see what it looked like.) I suppose if I had my druthers we might not have fraternities or a sorority but mostly I don’t think it’s my business whether we have them or not, nor is it an issue I want to worry much about as a faculty member.

I am a bit worried by some of the opposition among students to the sorority. Not so much because of what it means for the prospects of the actual sorority but because of what it suggests about how we continue to fall short of some of our general institutional aspirations for achieving a community, some of which live inside the curriculum and weigh on our teaching.

The first thing I’d suggest to the opponents with strong feelings is that I’d put even money on the sorority not surviving past the graduation of the students who want to start it. Most of the groups and projects that Swarthmore students start are driven by the energy and commitment of their founders, and very few of those projects are built in a way that this energy is sustainable and transferable to the next group of students to enroll.

Behind that point is lurking a more important principle. When a group or a project does survive, that means something. It suggests either that out there in the wider world there are influential sponsors or examples that continue to give the project some long-term legs, or that there is a recurrent desire over time within the student body for the project. In both cases, that’s a significant social fact that requires serious appraisal.

I’ve been a bit dismayed that some of the student opponents of the sorority invoke distant objects and generalizations as sufficient reason for their opposition. Those are okay as impressionistic sketches of your intuitions or feelings (as mine are above) or snark, but if offered as a justification for action, for decision-making, you have to up your game a bit. Specifically, you have to stop looking to the generalized horizon (say, the “mainstream”) and get real about the specific, tangible human beings who are right in front of you.

Anything that real people do in the world is by definition interesting. By ‘interesting’, I mean worthy of the kind of investigation that puts curiosity and honesty well before judgment. Judgment may come, but only after you’ve done some work. Anything that real people do in your own community, neighborhood, or other immediate social world is interesting in that sense twice over. It’s easier (at least in basic material terms) to ask, with unfeigned curiosity, “So what’s up with that?” when you see someone every day. If you’re going to use tropes of community, it’s morally important to be interested in its entirety. And the practical political dangers of keeping your distance and relying on haughty generalizations within a community are very real: that’s the quickest way to rouse an opposition that will then be right on your doorstep rather than at a safe distance.

So if you don’t like the idea of a sorority (or the idea of existing fraternities) at Swarthmore, I’d say the first thing to do is ask, with openness, curiosity, and generosity, with no pressing need to get to some predetermined end or conclusion, “So why do the people who want them want them?” And by that I mean, ask them. And listen.

This is the space where college communities like Swarthmore can fall pretty flat in their aspirations to diversity. We have students who will go all around the world to meet, live among and humbly try to understand a community of people whose history, culture and material conditions are very different from their own previous experience. Those same students can balk at understanding or negotiating the immediate presence of anyone whose unfamiliarity is not so customarily different, who is not an expected or presupposed sort of “diverse” person.

Getting over that hesitation is all the more important for people who hope to push for progressive or radical social transformation. If you don’t understand, in fact appreciate, a person or group that you believe should change (particularly if you think they should change to be more like you), you’ll never persuade them. Which leaves you only one option: to compel them. I think that is precisely why some of our students persistently look to the college administration (essentially the local version of the state) to accomplish their political or social goals: because they despair of, or never begin, the work of understanding and persuasion. At that point the cupboard of options is bare. Nothing is left but the imposition of rules, strictures, mandatory trainings, bannings, prohibitions.

This impulse is a potentially disastrous cul-de-sac for a genuinely progressive politics. If you have to make that move, wield that big stick, you better be sure that you have an actual big stick in hand and that the need for such a move is overwhelmingly urgent, gut-wrenchingly important, viscerally documented.

I was listening to an NPR story yesterday about the continuing problem of neo-Nazism in Germany, and a phrase in the story really hit me. The reporter said that political leaders were frustrated and surprised at the persistence of neo-Nazism “despite sixty years of educational effort”. What I thought was, just substitute the words “because of” and you’d be close to an explanation. The German state, any state, is going to create its own margins and exclusions. Whatever that state chooses to ban–or prescribe–will be an irresistible hermeneutical beacon to those margins. Post-1950 secular postcolonial states in the Islamic world virtually recommended Islamic fundamentalism as the privileged voice of opposition to the corruption and fecklessness of their rule precisely by stressing their secular character. Robert Darnton, in The Forbidden Best-Sellers of Pre-Revolutionary France, argues that as the ancien regime of France responded to the spread of print culture with a more and more assertive regime of censorship, it effectively recommended the targets of its censorship to readers. The state’s attention was a sign that a work was interesting. Darnton also argues that the state helped to seal its own fate both by showing itself to be helpless to stop readers from reading and by revealing its prohibitions to be silly or self-interested.

Asking a college administration to maintain a ban on sororities through evading or finessing a statutory requirement that has been a powerful tool for enforcing gender equity might be a similarly corrosive or counterproductive move. Particularly if it’s largely to avoid a difficult conversation in your own immediate community about why people unlike yourself are unlike yourself and why they want what you do not want.

Posted in Academia, Swarthmore | 23 Comments

Commentary on Jonathan Haidt, The Righteous Mind

We had the first of four symposia on Jonathan Haidt’s new book The Righteous Mind last night at Swarthmore. The hope is that we can demonstrate the distinctive advantages that a “liberal arts” approach can yield when many different scholars with different perspectives focus on the same object and join in conversation with each other. I thought we made some good strides towards that goal. Based on last night I’d say it’s going to be a conversation about different models of human agency and subjectivity, culture and sociality, morality and religion, and politics and political outcomes, with Haidt as the stimulus.

Last night’s theme was “the limits of reason” in relationship to morality. Haidt’s argument here is largely consistent with the emerging consensus in much neuroscience, cognitive science, behavioral economics and evolutionary psychology that reason or consciousness is largely post-facto “storytelling” about our actions and practices, that most of what we do as people is “quick” and based on intuitive or subsurface thinking. The storytelling we do in our consciousness can complete a recursive feedback loop back into our intuitions, but in Haidt’s view, the storytelling or reasoning part of our minds does not govern most of what we think, do and say about morality and politics. (He uses the metaphor of an elephant and a rider: our intuitive minds are the elephant, our conscious reasoning is a rider who creates narratives about why the elephant went where it did, but might occasionally actually steer the elephant in one direction or another.)

By the way, there’s an interesting discussion of Haidt’s book going on at the New York Times Stone Blog at the moment that’s very germane to what we’re doing at Swarthmore.

I chose to talk about the limits of Haidt’s arguments about reason, intuition, morality and politics when they’re seen in terms of a “much bigger N” of human societies and human personhood–both non-Western societies today and the entire set of all premodern cultures. So here are my remarks, more or less as I gave them. (Later on in the discussion that followed, I got really tangled up at one point because I was trying to talk about both intuitions and institutions: I think next time I’m going to try and use a different word for one of those concepts to keep myself from saying one when I mean the other.)

—————————

1. What I’m NOT going to do: make the conventional complaint that Haidt doesn’t have non-Western or premodern examples. Because, especially in the first half of the book, he does—and he has taken the need to do research outside of the United States far more seriously than most social or evolutionary psychologists. A major point of the book, in fact, is understanding the geographic, temporal and socioeconomic limits to what he calls “WEIRD” moral preferences (Western, educated, industrialized, rich, democratic).

2. Three things I think he could learn from an even more extensive consideration of a “bigger N” of non-Western and premodern examples, however, in escalating terms of the degree to which they unsettle his argument.

a. In narrow terms, the argument of the second half of his book could be tested against a broader range of contemporary (and possibly past) examples. The argument is roughly as follows: that contemporary American conservatism is in his view more successful politically today because it appeals to the full range of intuitive moral “taste buds”, as he calls them, which are:

[care/harm; fairness/cheating; loyalty/betrayal; authority/subversion; sanctity/degradation; liberty/oppression]

whereas liberal politics, he argues, rests on a strongly exclusive appeal to a much smaller repertoire of moral intuitions.

The next faculty panel at Swarthmore is going to talk about the extent to which they find this an accurate representation of the current state of U.S. politics. But I certainly think Haidt would benefit from asking comparatively: do political factions or movements elsewhere in the world (or in the past) achieve a greater degree of success in some respect (mobilization, legitimacy, etc.) when they speak to the fullest possible range of the moral intuitions that Haidt describes? It’s a very testable hypothesis, if you can define “political success” and measure with some rigor whether the followers of a party or movement respond strongly to the particular intuitions that Haidt has identified.

It’s possible that a “full palate” does correlate with political success, but I think the opposite is equally possible, maybe even more probable as I think about present and past political movements: that there are many successful political or social movements or parties that specialize in intense appeals to one or two moral “taste buds”.

Either because some of the moral intuitions that Haidt describes might mobilize smaller populations but mobilize them far more ardently or intensely (which is sometimes all you need to control political outcomes) or because most people, especially outside the WEIRD world, are very strongly satisfied by appeals to a very small subset of moral intuitions in the same way that a chocolate bar can appeal to most people just by being sweet and a touch bitter—it doesn’t get more appealing by adding sea salt and bacon, though it may gain more intense devotion from a smaller number of aficionados.

Haidt’s arguments about political and religious outcomes really require a huge “N” to be satisfying—the American exceptionalism of the second half of his book actually doesn’t pay off the universalism and attention to comparative analysis of the first half very well.

b. More ambitiously by far would be to consider how to read or interpret the nature of reason, morality and intuition on a bigger scale of comparison across time and space.

Haidt’s discussion of WEIRD morality makes clear that he’d welcome at least a modest version of this ambition.

He sees WEIRD morality as having a limited distribution not just in the contemporary world but as having a specific point of historical genesis, in the advent of industrialization, the rise of Western Europe, and the creation of modern social and political institutions after 1750.

Just in Haidt’s own terms, that means that major (and possibly minor) events can alter the distribution and influence of innate cognitive dispositions in human populations and the political and social practices that rest upon such distributions.

I don’t think his framework can offer much of a causal explanation for those events or exactly how they changed the distribution of cognitive preferences in human populations, but that’s OK: three centuries of frantic attempts to explain the causality of modernity by intellectuals across the disciplinary and philosophical spectrum haven’t led to any consensus on that point.

But why stop at one major event? If discrete events can change the distribution, intensity and expression of moral intuitions as Haidt describes them, then even relatively trivial or highly local events might be an important source of contingent political and cultural outcomes. Hold that thought—I’ll return to it shortly.

For the moment, let me point out that as soon as Haidt opens this door, two other kinds of variation come flying in. The first is the possibility that at some point in the past or in some place in the present, there are other “taste buds” in the cognitive palate or there is some other form of reason involved in mediating moral intuition. (In his NY Times post, Haidt makes more room for some form of ‘reason’ to modify moral intuitions than he does in the book.)

Let me briefly mention a famous debate between two anthropologists to get at this point: Obeyesekere’s The Apotheosis of Captain Cook and Sahlins’ How Natives Think, About Captain Cook For Example.

Obeyesekere said (crudely summarized): the Hawaiians couldn’t have thought Captain Cook was a god: that’s not rational; this is just a Western trope frequently used to organize and justify imperial conquest.

Sahlins said (equally crudely summarized): non-Western societies have had and still have their own forms of reason embedded in their distinct histories and cultures. We have to try and understand those forms of reason in their own terms to the extent to which we are able to do so.

There are problems with both books, but I’m going to go with Sahlins for the moment. What would it mean to try and think about Haidt’s analysis with much more attention to the specifics of some different ideas of reason, morality and intuition, past and present?

I’ll use the example of thinking about ‘invisible powers’, health, ‘witchcraft’, etc. in southern African societies in the last two centuries or so to illustrate the point.

i. Rough explanation of these dense and complicated ideas and assumptions: illness and misfortune have human agency behind them, but only indirectly so—people act against each other out of spite, jealousy, anger or the desire to enforce reciprocity, but it is believed that such action takes place through spiritual proxies (who are themselves often thought to be formerly human—ancestral spirits, spirits of unsettled migrants or indigenes, etc.). When you’re sick, someone else is responsible, but you might also be at fault because you’ve done something to offend or trespass. Managing health and welfare, including events that Westerners would usually characterize in naturalistic terms, is a matter of managing your social relations but also protecting yourself against vulnerability to malevolence and evil that uses spiritual or invisible power.

ii. These ideas could be plausibly translated into Haidt’s moral intuitions—fairness, loyalty and care are wrapped up in there somewhere. A lot of witchcraft discourse is about the maintenance of reciprocity.

iii. But the specific meanings and reasoning that many southern Africans use to talk about and interpret those intuitions produce strikingly different practices in everyday life. That they might have the same source intuitions is not really the interesting story here. The interesting story is about the content of different cultures, it’s about variation. The interesting story is, “Why were there episodes of witchfinding in northern South Africa after Mandela’s election in the mid-1990s?” vs., say, “Why do majorities of voters vote against gay marriage initiatives in the United States in the last decade?” Talking about that comparison in terms of a universal underlying set of cognitive dispositions is at the very least overlooking what seems to matter most to the actual people involved in those actual decisions. Even if that’s the rider talking rather than the elephant moving. But what if that variation means that these are actually different animals altogether underneath the rider? Sometimes being ill or fearing illness in rural southern Africa occasions most of the same practices and outcomes, relative to resources, that it does right here in Swarthmore, but sometimes it’s a radically different experience with radically different outcomes.

c. This leads to my most far-reaching and unsettling challenge to Haidt.

There might be a way to usefully work out the relationship between universal cognitive or intuitive dispositions on morality and particular cultural and social discourses, expressions and practices about morality and politics across time and space. (and to decide which term in that relationship is most important or interesting for scholars to spend time describing or understanding).

But what if the content of morality in different cultures (and here I mean by that the practices and thinking that shape everyday life) is neither an outcome of underlying intuitions nor an outcome of philosophically or empirically rigorous reasoning that can be more and more perfected over time?

I mentioned earlier that Haidt acknowledges that the various changes associated with modernity made WEIRD morality more important and influential in human societies after 1750.

Throughout his book, he also takes note, sometimes implicitly, of the extent to which the specific content of prompts that provoke moral responses changes constantly, even within a given nation or society.

I’ll use an example of my own. When I was a kid, if I said that something sucked in the presence of my mother, she would visibly flinch and react with what I would say was intuitive moral disgust of the kind Haidt describes. This baffled me when I had no idea what the word meant to her and to many in her generation, and I remained fairly indifferent even once I understood that she took it to be a very crass reference to oral sex.

The word stopped meaning that, but also even the referent became less likely to provoke a reaction from its transgression of a “sanctity” intuition in the wider culture. That’s not just WEIRDs vs. non-WEIRDs, it’s something more subtle about the way words, concepts, images and so on change fundamental aspects of their meanings, sometimes as rapidly as in a single week, a day, a conversation. A good metaphor (say, an elephant and its rider) can reorder or shift preferences in those who hear it–I think very possibly at the level of intuitive thought as well as conscious thought. (This is pretty much what theorists of “framing” argue, all the way back to Goffman: that a skilled persuader or performer can reorganize the underlying beliefs and thinking of his audience.)

We might look for strong or relatively invariant images, words or concepts that are more strongly tied to persistent cognitive intuitions. It is hard to believe that the image goatse (DO NOT GOOGLE THIS IF YOU HAVE NEVER SEEN IT) will ever stop transgressing against sanctity. But in fact historians of the body know that practices most Americans today take to be instinctively, deeply revolting were once common or are common elsewhere—ingesting or applying fluids from the bodies of medieval saints, for example.

Haidt knows that these changes take place, but I don’t think he can account for why there should constantly be both minor and major changes in the content of moral sentiment, intuition and practice.

The argument that such changes stem from progress in either (or both) our philosophical and empirical understanding of what is moral has more familiar strengths and limitations.

What if some changes in the content of moral sentiment and practice are just an epiphenomenon, an emergent consequence of complex social structures?

Think of it this way: in Haidt’s sort of framework, early in human history, cognitive dispositions towards individual and group morality would be the outcome of evolutionary processes. (The “environment of evolutionary adaptedness” in evolutionary psychology.) At some point, those dispositions would have influenced the structure and content of the early social institutions characteristic of sedentary, agricultural and trading societies.

Let’s take one of the characteristic examples of such an institution: law. Which in those early societies in the Mediterranean and Near East clearly had a very dense, complicated relationship to morality, religion and politics.

If one of the outcomes of that relationship was a code—the Ten Commandments, Hammurabi’s code, Solon’s reforms—then that marks a point at which the expressive outcome of cognitive dispositions became something external to and somewhat independent of human cognition but which governed moral practices. Not necessarily because they were rational or intuitive. Just because, at some point. Don’t wear this color; don’t eat that thing; don’t make that gesture; don’t go to that place. Don’t marry that kind of person, don’t talk to that other kind.

Practices engender practices, sometimes without routing through intuition or reason.

Institutions become an environment to which some people adapt–e.g., some institutions create their own moral environments and grant power to certain agents to maintain the fitness landscape they establish. Law is a fantastic example of this. Inspector Javert no longer has his own moral intuitions in Les Miserables, save perhaps an overriding loyalty to the idea and institution of law. In Melville’s Billy Budd, Captain Vere has to punish Billy Budd even though he knows Billy is innocent, because there are laws and rules outside of him that force him to act in a particular way. Law, punishment, prison and so on are not really derived directly from the moral reason or moral intuition of other human actors. Institutions are shaped most crucially by the history of institutions. They’re path-dependent. They have their own form of non-human agency over the moral and political decisions that human beings make.

You could say that when sociopolitical institutions are too alienated from underlying cognitive preferences (or rigorous reasoning about preferable outcomes), they run the risk of sparking revolt or spurring reform—that intuitions (or reason) are a baseline or foundation. But that still means that institutions that address and regulate morality (law, churches, expressive culture, etc.) can become semi-independent of human will and thought even if they once upon a time emerged from them.

Those institutions change in ways that are caused neither by deep cognitive dispositions nor reasoning and evidence. My mother would have gotten a ruler across her hand from a teacher for saying something sucked, I got only an increasingly powerless motherly scolding or a red mark on a school essay, and my daughter gets nothing at all.

Sometimes singular events and stories in the public culture of a society—trials, scandals, performances, speeches, battles, traumas—are enough to shift the way that institutions interact with the content of moral life in everyday practice. And most such events are unplanned, unrehearsed, uncontrollable—full of contingency.

If in some sense the enforcement or regulation of our moral lives by our institutions can turn on singular, unplanned acts and events—and thus the expression of our moral intuitions or moral reasoning can change—then what we’re riding is less another animal with a mind of its own and more a sled hurtling down an endless, unpredictable slope in a moonlit winter night. It might be fun, it might be something we can steer now and again—but who knows when a bump might toss us skyward—or a tree halt our flight?

Posted in Academia, Generalist's Work, Oh Not Again He's Going to Tell Us It's a Complex System, Politics, Swarthmore | 1 Comment

Tweet Away

I refuse to use the hashtag, but the bubbling-up of a long-standing conversation about live communications from academic conferences over the last three days has been interesting to read.

While I can’t disagree with Kathleen Fitzpatrick’s pragmatic advice to concede to the wishes of a presenter who doesn’t want to be tweeted or blogged, I also can’t even begin to understand how a scholar could envision an ordinary conference presentation as private or confidential. Almost every problem laid at the doorstep of conference tweeting or blogging was no less an issue twenty years ago. Worried about being “scooped”? Well, we did too back in the late 1980s, with as much or as little reason. Worried that a tweet or a blog post reduces and simplifies what you said at a conference session? Well, most of the people who attended your inartful, dull excerpting of a longer chapter or journal article in 1988 misunderstood, misrepresented or truncated what you said; most of the people who saw your poster or listened to your roundtable got the wrong idea. The idea that we live in an era of neoliberal acceleration and superficiality, propelled by online discourse, gives far too much credit to the proposition that back in the day, conference-going academics thoughtfully pondered and deeply read all the work that their colleagues placed before them.

Sure, there have always been exceptions where deep reading and highly focused conversations were the rule, say in small workshops of 10-20 invited contributors, but those meetings still happen, and mostly people don’t tweet from them, because they’re too constantly involved in the conversation. Live blogging and tweeting is about turning the passive experience of your average large meeting’s conference panel into an active, thoughtful experience, benefitting both the presenter and the listeners.

Brian Leiter is welcome, at any rate, to convene a dour room full of silently rapt intellects listening to the immortal prose of a fellow scholar. My chief pleasure at such a meeting would be getting kicked out, saved by the tweet.

Posted in Academia, Information Technology and Information Literacy | 9 Comments

The Frenzy

I like the idea of “entrepreneurship” a lot when it describes the compression of several complicated things into one concept or practice. The first is a structured kind of practical creativity, a purposeful or directed path to having and expressing a “good idea”. The second is a relationship between individual creativity and collective action, a known recipe for scaling up the ideas of a very small number of people to an organization that can make and do something tangible and material. The third is a concern for organizational sustainability: an entrepreneur has to have a plan for how their idea and their organization will bring in enough resources to grow, thrive and live on into the future. The fourth is, or ought to be, a kind of humility about the conditions required for success–an entrepreneur should know that their ideas have to undergo a genuine testing in markets and survive the whims and reactions of potential customers, unless the entrepreneur’s idea is based on rent-seeking or parasitism of some kind.

Entrepreneurial action can represent the best social and imaginative potential of modern liberal societies. It’s also a great way to focus and challenge any new initiative or project. Do you want to mobilize groups, sustain collective action? Then it’s totally fair to ask, “With what resources? With what costs or liabilities? With what kind of plan for organizational and financial sustainability?” Do you have a great creative vision, or some change in material practices you’d like to encourage? Thinking “entrepreneurially” is a great filter or structure for approaching those aspirations.

What I do not like about “entrepreneurship” is when it starts to collapse into itself, when it’s an alibi for a gold-rush approach to life and aspiration, when it’s part of a frenzy.

Businesses fail, often from the very start. That’s not news: it’s what every MBA program in the world builds its courses around. Sometimes a good idea fails because the time isn’t right: too late or too early. Sometimes an idea fails because it’s sabotaged by the wrong mix of people working together or because there’s a stubborn technical problem that can’t be overcome. Sometimes it’s underfinanced. Sometimes a rival sabotages it. Sometimes there’s just something quaint and sweet and yet lightly pitiable about how wrong-footed the whole thing is: we’ve all seen a certain-to-fail store, restaurant or a product that made sense only to the person who owned it. Failure is not a reason to dislike entrepreneurial projects–though incipient failure is what sometimes curdles an honest try into dishonest rent-seeking, particularly when there’s a big enough pool of naive or powerfully-connected investment capital backing the business venture.

What’s bad is when someone sets out to start a business the same way a shark sets out to find what’s bleeding in the water. Not because they have an idea for a better mousetrap, or see a way to better sell someone else’s better mousetrap, but just because there’s money to be made, because other people are making money, because there’s gold or oil being dug out of the ground somewhere. I dislike it when I see someone who has a primal hunger to get in that game because either they’re someone else’s mark or they’re sooner or later going to be treating any possible customer like a mark. Or both.

Case in point. We’re in the middle of another dot.com frenzy at the moment. It’s not as big an investment bubble as the last time around, but there is a lot of hucksterism, and almost as many Pets.com-style Potemkin-village startups trying to sell their new social media next-Facebook next-Twitter gimmickry. Even when the core idea is not really a bad one per se, as in the case of Pinterest, it’s hard to summon much enthusiasm for services or sites that are just leveraging some of the core functionalities of existing social media into a slightly new interface. Sure, maybe you’re the guy who will get Instagram-level lucky and prop up a shell of something that worries Facebook or other stumbling giants enough that they drop a load of their cash hoard on you to just add one more bauble to their overstuffed Christmas-tree-like interfaces. Most of the time, you’ll just burn some investors’ money and then sell off whatever marginal value is left in the project for bargain-basement prices. Because when you don’t have a real idea, or worse yet, you just have an idea that other people have had already and you’re just putting a new coat of paint on it, there’s no other lesson to learn out of failure except “next time have a good idea”.

Let me give an example of the problem that’s come to my attention in the past week. I’m singling out this company not because they are unique, but because they are an example of a common, recurrent pattern. Lore.com, formerly Coursekit, has started pushing hard for campus adoptions of their product, which is basically a course-management system with a sideline of asynchronous forums oriented towards campus life.

Forgive me my slightly douchebaggish irritation here. I honestly don’t think the young folks who are trying to sell Lore really know how many times some of us have been approached since 1999 by companies with almost the same business model and the same strategies for making themselves look like more than they really are. But Lore has it even harder than some of the dot.com start-ups because this is a crowded space now filled with not just long-standing course-management systems but a bunch of big, new MOOC companies like Coursera.

Lore is little more than a new interface for the product idea behind Blackboard or Moodle. The main distinctive twist that I can see is not in the product but in the marketing: the company is borrowing an old technique used more often by certain non-profits like the PIRGs and political groups, and trying to enlist students as their salespeople–it’s a technique as old as getting comic-book readers to sell Grit. And of course they’ve been careful to do whatever’s required to get themselves to the top of a Google search, so well done there.

There are a lot of the standard here-we-go-again issues with newcomers–a TOS that gives Lore.com the right to reuse or display content contributed by instructors or students, though at least they don’t try to claim perpetual copyright over uploaded material like some other start-ups operating in this domain. The interface? Eh, well, you don’t have to go very far to outdo Blackboard or Moodle in that regard, that’s like showing up at a dog show and beating out a hairless chihuahua who has a skin condition with some mutt you picked up at the shelter. The Lore interface is prettier but in its own way busy and intrusive–there’s an overlay layer in the tutorial that is very difficult to get to go away so you can see the underlying workspace. Also, I logged in with Facebook instead of a separate account and noticed after creating a sample class that you can’t delete a course you create that way because you don’t have a local password. (Nor of course is there a way to automatically revoke Lore’s stored data from a Facebook login if you revoke the access from within Facebook.) Nothing particularly extraordinary in the world of social media, where subtle and unsubtle ways of holding onto and storing up both content and data from former users are pretty common.

Another thing that’s drearily typical is the attempt to appear much bigger and more established than you actually are. Lore is used at 600 schools, says the front page, but the details of this use aren’t hotlinked from the university and college seals that appear on the front page or anywhere else. Go to “learn more about teaching at Lore” and there are some slides at the right that look like examples of Lore in use. Whoops, not linked either! Too bad, I wanted to see what “Learnign [sic] Python the Hard Way” was all about. Presentation.ppt would also be engaging, I’m sure. The generic quote from “Junior, Princeton University” is certainly convincing. You can see Navigating the Universe taught by “Buzz Aldrin”, which turns out to be just another sort of short demonstration not-really-a-course. (If you can get the overlay about all your teaching tools to go away.) But hey, instructors at 600 schools! I wonder who they are? Maybe some of my colleagues! How odd that their courses aren’t linked all over the place on the Lore site or even from the home sites and pages of the hosting departments and institutions. Could all of them be private courses, except perhaps the one that angel investor Peter Thiel taught using Lore at Stanford last spring? But how odd to rush to use a new product that trumpets its interoperability with social media and then make your courses private.

Looking at the blog offers reassurance. Lore is now adding features that are standard in social media and in existing CMS: revolutionary! Read down and you’ll see that the founder thinks that the market opportunity for Lore is reinventing how the Internet and education can work together. It’s a good thing that nobody else seems to be thinking about that! Or so you’d guess, given that the Lore blog (or anything else at the site) seems blissfully unconnected to or unaware of the numerous organizations, scholars, institutions and projects that are and have been concerned with that rather large question for a long time. I’m old-fashioned, I guess: I always thought the best demonstration of the virtues of online interaction and social media were the density and richness of the way that they linked to each other. It’s odd to trumpet the advantages of a dialogic medium without doing anything besides talking about how great you are.

So at this point I’m still looking for an archive or collection or a page o’links that would show me all those instructors with all their courses at all those institutions. Hey, the CEO is taking and teaching some Lore courses and they’re actually linked, you can click on them! So let’s see: he’s teaching a course on Lore itself. Which is empty. A course for the “campus founders”! Which is a single marketing blurb from Fall 2011 that ends “here’s what that entails” and leaves you in suspense about the entailing. “Contemporary Ethics and Foundations of Decision Theory”! There’s a real course. Taught by a dead philosopher at the University of Reddit. Is that one of the 600 institutions? The dead philosopher actually appears to be a live student, judging from the YouTube lectures. The course did pique my interest enough that I looked at a reading, an essay by the philosopher S. N. Balagangadhara at his blog. Which is a decent find. But at best all I’m seeing here is a sample course that doesn’t appear to offer any advantages over Coursera or its rapidly multiplying horde of siblings and competitors. What else is Joe taking? A “course” that consists of a few Stanford 1st-year students talking to each other about how it’s going so far this year. I wonder if that counts as one of the courses at “600 institutions” taught by an “instructor”? And he’s participating in the group (not course) Art, Design and Computer Science, whose sole activity to date is the statement by its founder that the group mission is to “make cool shit”.

(By the way, I’m guessing that if you want to witness this stuff as I saw it in a morning’s exploration of the site, you’ll have to hurry, because in my experience, when you poke through the hollow surface of a new social media service that’s trolling for the customers who will create the content that the founders can’t be arsed to create for themselves, you usually stimulate a round of frantic attempts to throw up something a bit more real to fill up the empty links and galleries.)

I’m guessing that the main hope here is getting bought out by whichever MOOC company manages to come out on top in the current feeding frenzy. What frustrates me is that some bright and capable people are wasting their time chasing a buyout when they could actually be making some cool shit. To make cool shit at the intersection of digital technology, social media and higher education means actually going out and finding out what specific cool shit needs to be made, being humble enough to wait for the genuinely good idea, and starting small enough that there’s a chance of not only having the good idea but bringing it into being. That also probably means abandoning the flawed intellectual property and organizational DNA ripped out of the last round of Facebookery, along with the ambition to be the Third Coming of Mark Zuckerberg. That script and structure have given a bunch of hopefuls the wrong idea about how you succeed in business. If you want to make the product that catches on, figure out what the real needs are, learn about the customers you’re seeking, and match up what can be done with what ought to be done.

I’ll put up a sequel to this post a bit later this afternoon with a bunch of real needs, real ideas, available for free to any budding entrepreneur interested in higher education and digital technology. One warning in advance, though: if some of them haven’t been taken up yet, that’s either because they demand an actual understanding of existing colleges and universities, or because they call for a completely different business plan than the standard social media underpants-gnome trick of “luring people in to make content for us, trap their content inside our interface, try to monetize the content our users have made without pissing them off so much that they pull their content or stop making content”. Lore isn’t the only start-up that might want to think beyond the limits of that model: you could easily rattle off a list of sixty or seventy gold-rush companies that aren’t much more than a name, an interface and a bunch of ambitious young folks chasing a payout.

Posted in Academia, Digital Humanities, Information Technology and Information Literacy | 5 Comments

Better Pedagogy, Less Cheating: Three Ideas

So Stuyvesant High turns out to have a cheating problem–or perhaps all selective high schools do? If the high schools do, I’m sure the colleges and universities that receive their graduates do as well. And so in turn do the workplaces that hire the graduates fed to them by this system. Not a startling conclusion if you’ve read Christopher Hayes’ Twilight of the Elites. The NYT article suggests that skilled, systematic cheaters often rationalize their behavior by arguing either that everyone does it (which Hayes would argue is a structural inevitability in social hierarchies that justify stratification via meritocratic distinction) or that cheating is the only way to distinguish oneself temporarily amid uniform excellence: once you’re done with the test, the class, the moment, you will have earned your place in a college or a job and can prove your genuine merit. As Hayes notes, that moment never comes; the cheater is never at rest, at home, able to show their true quality independent of silly tests and bullshit obstacles. The whole of life becomes a bullshit obstacle, and the search for the edge, the advantage, the trick becomes perpetual. Which doesn’t just hollow out the person; it contributes to the entire socioeconomic system dropping into an ever-accelerating pursuit of short-term gain at the cost of long-term sustainability.

The first problem with narrowly setting out to foil cheaters is that if students or employees no longer believe that tests measure anything important, simple anti-cheating techniques become another petty annoyance–particularly if they think that the testers or bosses are using tests as a crude rationing device or screening mechanism, a way to avoid grappling with difficult or nuanced evaluations. Simple tricks are equally simply defeated, and each one of them just increases the sense that testing is a sadistic and cynical exercise.

Most selective institutions, whether K-12 or higher education, promise highly individualized instruction that adapts to the learning styles, aspirations and personal distinctiveness of every pupil. But the time required to make those promises real when it comes to assessment of students is often badly short-changed in favor of covering as much content as possible, so that students can move on to the next subject, the next part of a sequence, the next big lump of content to be force-fed. What makes many systems of cheating effective is the combination of highly standardized content plus some form of standardized test or assessment. The easiest ways to impede cheating turn on the delivery of distinctive, personalized instruction, which has its own pedagogical justification quite aside from making it harder to cheat. Note, of course, that this strategy is very difficult to adopt in an environment where legislators keep upping the ante on the use of dumb standardized testing for evaluating teachers: the consequence is less and less effective teaching combined with a massive increase in both individual and collective cheating.

Consider the following ideas instead:

1) Don’t use standardized textbooks or teach to the “common denominator” of knowledge about a particular subject or discipline. Build a class intended to teach standard knowledge around a distinctive case, situation, or application of that knowledge, and change the situation or case each time the class is taught. One of my Swarthmore colleagues has taught a fantastic course of this kind in statistics that has students learning basic statistics through delivering statistical studies to local non-profit or community groups based on what those groups would like to know. My colleagues in Biology do a lot of customization on their introductory sequence. This is not just an approach for higher education: it could absolutely be adopted widely in K-12 education. The catch is that this takes constant work for teachers, it takes teachers who are confident in their own understanding of the subject matter and can adapt it to changing circumstances and cases, and it absolutely cannot happen in an environment where highly standardized testing is frequently imposed from above. If your class isn’t like any other class on the subject (even when it’s covering some shared or common knowledge), and your essay questions and problem sets aren’t like any others, and you don’t reuse them, it’s going to be pretty hard to cheat beyond the most local scale (a single class in a single semester), and maybe not even then.

2) Assess individual students on a continuous, spontaneous basis, and weight grades heavily on their demonstrated ability to recall, apply and repurpose what they are learning. Every teacher knows that a formal, scheduled test is really a fairly poor way to understand what someone knows, and how much they are able to use that knowledge. If you wanted to know whether someone was a good driver, for example, what would you rather have? Five weeks’ worth of surreptitious webcam recordings of them behind the wheel or a multiple-choice test asking them about the laws and formal rules governing driving? How many times have you seen a person who can pass any formal test and yet can’t use anything they supposedly know? Or conversely, a person who doesn’t do that well on formal tests but who can make very effective use of what the test is trying to measure? I see both fairly often. Tests, multiple-choice or otherwise, aren’t how we use what we know in any other real-life context. We use standard tests because ongoing, constant assessment seems too labor-intensive, and because we believe that you can’t use knowledge until you’ve consumed a sufficiently large baseline amount of information and acquired a sufficiently large baseline of skill. The former might be true in large, poorly financed educational institutions, but it shouldn’t be true at wealthy, selective schools. The latter I think is demonstrably not true: you can put into practice what you’re learning from the very first moment that you are learning it, and be assessed continuously based on how well you apply what you know and how much you improve in your application over time.

3) If these two approaches seem impractical, even a standardized-testing approach can work better with a modest amount of individuation. Let’s say you’re teaching high school biology in a standardized curriculum and you have a final exam that’s a mix of multiple choice and problem sets of some kind. Have all the teachers in that system work together to generate a test bank of 2,000 questions. Change it each year–toss out 200 or 300 of those questions and add 200 or 300 replacements. Have every single test for every single student be randomly generated from the test bank. Every student gets a test with their name on it at the top that is not the same as any other student’s test. Match each student’s test to an individually generated answer key. That surely kills off some common cheating techniques (looking at another student’s test, photographing a test and sharing it with others, memorizing the same identification question, etc.). If you object by saying, “Well, what if the students get hold of all 2,000 questions in the test bank and memorize or prepare for all of them?”, I say, “That would be mission accomplished, then: what’s the difference between a student who has memorized 2,000 questions’ worth of content you’re going to test them for and a student who just happens to know all that content well enough to answer any questions from that 2,000 you might randomly choose to ask of them?” You can’t exactly scribble the answers to 2,000 questions on your body, and if you don’t know which of thirty or fifty or one hundred variant versions of the test you’re going to get, then the only way you can prepare is to actually learn the content.
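The mechanics of idea 3 are simple enough to sketch in a few lines of code. Everything in this sketch is hypothetical (the bank contents, the field names, the choice of 50 questions per test); the point is only that seeding a random generator on the student’s identity makes every test unique while keeping the matching answer key reproducible at grading time.

```python
import random

# Hypothetical question bank: 2,000 entries, each with a prompt and an answer.
question_bank = [
    {"id": i, "prompt": f"Question {i}", "answer": f"Answer {i}"}
    for i in range(1, 2001)
]

def generate_test(student_name, bank, num_questions=50, year=2012):
    # Seed on the student and the year, so the identical test (and thus
    # its answer key) can be regenerated deterministically for grading.
    # Rotating 200-300 questions in the bank each year changes every
    # student's draw automatically.
    rng = random.Random(f"{student_name}:{year}")
    questions = rng.sample(bank, num_questions)
    test = [q["prompt"] for q in questions]
    answer_key = [(q["id"], q["answer"]) for q in questions]
    return test, answer_key

test, key = generate_test("Jane Doe", question_bank)
# Regenerating from the same name and year yields the identical test,
# so the matching key never has to be stored separately.
same_test, same_key = generate_test("Jane Doe", question_bank)
assert test == same_test and key == same_key
```

Since each student’s draw of 50 from 2,000 is effectively unique, copying a neighbor’s answers or circulating a photographed test buys a cheater almost nothing.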

Posted in Academia, Politics, Swarthmore | 5 Comments