Performing the Role

The short summary of how UIUC’s administrative and board leadership (and some of their closest faculty supporters) handled the Steven Salaita affair is that they screwed up and that serious professional consequences are completely appropriate.

And not just that they screwed up in “handling the fallout”, as if this is a question merely of public relations tactics. They screwed up substantively, philosophically, in terms of fundamentals. The archive of emails now available for critical examination documents that error and how pervasive and systematic it was. Chris Kennedy’s interventions in particular are almost textbook examples of what academic freedom as an ideal is meant to prevent: a prejudicial, ideologically-derived attempt to target particular individual scholars using ad hoc standards that are not (and should not be) imposed on the rest of the faculty.

Until Steven Salaita himself says that he’s satisfied with whatever settlement UIUC offers, whether that is rehiring him or some other compensation, I would urge other academics to continue refusing to do service for UIUC as an institution. I know that imposes a burden on the many great faculty at UIUC by isolating them but I think it’s important to keep the pressure on. UIUC has more work to do in any event than settling with Salaita. And it’s not just UIUC that has these problems.

I do have two modest reservations about some of the responses to the email releases by academic critics. The first is that I don’t know that we should exult overly much about the release of the emails. UIUC’s leadership is ultimately responsible for creating the circumstances in which the release had to be sought through legal means, and thus is ultimately to blame for whatever larger consequences this might have. But the use of legal mechanisms to probe into the professional communications of faculty and staff at public universities has already been abused for political ends in the last decade and I fear this is only going to recommend that tactic further. We shouldn’t be too blithe about telling colleagues at public universities that they’ll just have to meet in person more, use the phone more, stick to their personal accounts more, and so on. That creates yet another kind of large-scale structural inequity for public institutions in a landscape increasingly full of such inequities. The acceleration of many work processes through electronic communication is a mixed blessing, but I personally have no longing at all for laboriously printing out recommendation letters, grant applications, dossiers, and many other kinds of professional labors that I handle at least partly through email. I also find it very valuable to get quick takes on institutional questions from colleagues via email and yes, sometimes to exchange cathartic observations about the week’s business with trusted colleagues.

The second reservation is more complicated, and has to do with the hostile commentary being directed at Phyllis Wise’s faculty confidants and to some extent Wise herself. I’m struggling to figure out how to express this feeling, because there are a lot of inchoate things bundled inside it. The place to start might be this: I think some of my colleagues across the country are potentially contributing to the creation of the distanced, professionalized, managerial administrations that they say that they despise, and they’re doing it in part through half-voiced expectations about what an ideal administrator might be like.

Occasionally folks in my social media feeds articulate a belief in faculty governance that has a sort of unexamined wash of nostalgia in it. That we had it all in the good old days and lost it, either to some kind of ‘stab in the back’ or through our own inattention or mistakes. (‘Stab in the back’ narratives generally worry me no matter what the circumstances, because they usually inform a politics that’s one part ressentiment and one part scapegoating.) Sometimes the same folks believe that if only faculty were in charge of everything (whether that’s “once again” or “for the first time”) the university would be working again as it ought to.

Now when I push back a bit on that sentiment, it’s usually not hard to get the same critics to concede that there are a host of specialized professional jobs that have to be done in contemporary universities which can’t be done just by any old Ph.D-holding person who walks in the door. So the conversation refocuses. Who’s the problem, in this view? Basically the upper leadership hierarchy, especially at large corporatized universities that have added numerous vice-presidential positions to their administrations in the last decade. These are the administrators that faculty critics believe are either managing portfolios that no one needs managed or exercising forms of leadership that faculty are capable of exercising on their own through their traditional structures of governance.

I agree completely that many institutions, especially large universities, have created administrative positions that are redundant or unnecessary. I’m not sure I agree with the idea that administrative leadership per se is largely unnecessary, nor do I think even many critical faculty really believe that–and it shows in some of the contradictory edges around the critical response to the Salaita affair.

First, you don’t have to go very far into the discussions and debates on social media about UIUC to find that faculty who believe in the sufficiency of faculty leadership don’t actually trust many other faculty to participate in governance or leadership. Most notably, there’s an undercurrent of debate about why many STEM faculty at UIUC either endorsed the administrative leadership or were indifferent to the issue–and one common explanation is that STEM faculty are already in thrall to the corporatist university or have actively connived in its making. Which means suddenly that the putatively capable-of-self-governance faculty have been pared down to “just the humanists and social scientists, and maybe not even all of the folks in the latter group”. Which is sort of like saying that you believe in democracy as long as it’s just the people who share your politics who get to vote. Additionally, there’s a lot of contempt directed at the faculty who were exchanging emails with Wise, who are seen as collusive. But any self-governing faculty is going to have people whose genuinely held views of institutional policy are going to resemble the positions now commonly taken by administrative leaders. If Nicolas Burbules had no vice-chancellor to seek favor from, it’s possible that he (or someone like him) would still think as he does and drive deliberation in that direction. Certainly there will be Cary Nelsons on every faculty, aggressively expressing their views in every forum and meeting and doing in governance what Internet trolls often do in online discussions, which is driving the conversation towards more extreme or narcissistic ends.

Ultimately I think that the people who believe we can do it all on our own know that sooner or later we would all be desperate to delegate some of the responsibility for institutional leadership to appointed individuals, to not have to sit in shared deliberative session and endure an endless plague of Nelsons trying to cat-herd us towards whatever precipice they favor. In a sense, I think every faculty member who has held any sort of administrative responsibility is familiar with exactly how this works: colleagues who believe they should have a say in everything also want someone else to handle all the tedium of acting on all the contradictory imperatives that emerge out of deliberative process.

Moreover, most of us turn out to want at least some of the sausage-making involved in the life of an academic institution to happen with some kind of confidentiality. Even the most radical demands for transparency (and I’m usually one of those inclined to such) balk at doing everything out in the open. Tenure cases are only one part of a larger landscape of necessary judgment and assessment of the professionalism and practice of other professionals in a university. That’s what believing in self-governance means! Professionals often assert that only they can judge other professionals, that this is a prerogative of their training. Ok, but if that means, “And by the way, everybody who has the necessary minimal qualifications to be a professional is definitionally ok in our eyes for life, and everything we’re presently doing is exactly what we should go on doing forever”, then that’s doing it wrong. Even if we banished the spectre of neoliberal austerity, we’d still need to ask, “Are we doing what we should be doing? Are there things we should stop doing?” We’d still need to think about whether there are changes worth pursuing–say, the academic equivalent of Atul Gawande’s “checklist” reform in hospitals. At least the initial stage of many of those conversations is not something I want to be broadcasting to the largest possible audience in the most indiscriminate way. That too is something that I think we turn to “administration” or something like it to accomplish.

I think here is also where Wise’s critics occasionally end up with some strangely unreal implicit expectations of administrative decorum, a vision of leadership performativity that implicitly envisions administrators as more distant, more isolated, less human than the rest of us. For one, I almost feel as if people are expecting Wise to have had discretionary agency where I’m not sure she did or could–where I don’t know that any of us, faculty or administration, do. I think it’s reasonable to have expected Wise to tell Kennedy, for example, that his desired intervention into the Salaita case was unwise and unwelcome and that she would not do it. I don’t think it’s reasonable to expect, as I feel I’ve seen people expect, that she should have excoriated him or confronted him. I think we somehow expect that administrative leaders should be unfailingly polite, deferential, patient, and solicitous when we’re the ones talking with them and bold, confrontational, and aggressive when they’re talking to anyone else. We seem to expect administrative leaders to escape structural traps that we cannot imagine a way to escape from. There’s a lot of Catch-22 going on here.

We as faculty all have confidants, people we can talk to who help us work through our choices and our feelings. I would guess that most of us turn to people who are going to make us feel better, support us, reassure us. Ideally we should also have friends or trusted colleagues who will be honest with us, who will tell us when we’re making mistakes, but there are days when I suspect even the most iron-willed and psychologically robust person is not looking for that.

And that’s just when we’re rank-and-file people. Imagine anyone in the role that Wise plays, anyone at all. Pick someone with your exact convictions. Pick yourself. Are we really expecting that the person in that role ought to listen judiciously, patiently and indiscriminately to every single person on their faculty with perfect equity and equanimity? We seem to desire leaders who are able to say bluntly what we ourselves cannot or would not say and to mobilize institutional power with executive force in ways that we cannot, and also desire leaders whose job it is to serve as a kind of infinitely passive psychic dumping ground, to receive every grievance and grudge within the institution without blinking. To decide what we know we can’t decide and to have never decided any such thing and to disavow any intent to make such decisions. To me that’s another kind of managerialism: the administrator as something other than fully human, needing to perform a professionalism that removes rather than connects them.


Yes, We Have “No Irish Need Apply”

Just came across news of the publication of Rebecca Fried’s excellent article “No Irish Need Deny: Evidence for the Historicity of NINA Restrictions in Advertisements and Signs”, Journal of Social History (2015), from @seth_denbo on Twitter.

First, the background to this article. Fried’s essay is a refutation of a 2002 article by the historian Richard Jensen that claimed that “No Irish Need Apply” signs were rare to nonexistent in 19th Century America, that Irish-American collective memory of such signs (and the employment discrimination they documented) was largely an invented tradition tied to more recent ideological and intersubjective needs, and that the Know-Nothings were not really nativists who advocated employment (and other) discrimination against Irish (or other) immigrants.

Fried is a high school student at Sidwell Friends. And her essay is just as comprehensive a refutation of Jensen’s original as you could ever hope to see. History may be subject to a much wider range of interpretation than physics, but sometimes claims about the past are just as subject to indisputable falsification.

So, my thoughts on Fried’s article:

2) This does really raise questions, yet again, about peer review. 2002 and 2015 are different kinds of research environments, I concede. Checking Jensen’s arguments then would have required much more work from a peer reviewer than it would today, but I feel as if someone should have been able to buck the contrarian force of Jensen’s essay and poke around a bit to see if the starkness of his arguments held up against the evidence.

3) Whether as a peer reviewer or scholar in the field, I think two conceptual red flags in Jensen’s essay would have made me wary on first encounter. The first is the relative instrumentalism of his reading of popular memory, subjectivity and identity politics. I feel as if most of the discipline has long since moved past relatively crude cries of “invented tradition” as a rebuke to more contemporary politics or expressions of identity to an assumption that if communities “remember” something about themselves, those beliefs are not arbitrary or based on nothing more than the exigencies of the recent past.

4) The second red flag, and the one that Fried targets very precisely and with great presence of mind in her exchanges with Jensen, is his understanding of what constitutes evidence of presence and the intensity of his claims about commonality. In the Long Island Wins column linked to above, Jensen is quoted as defending himself against Fried by moving the goalposts a bit from “there is no evidence of ‘No Irish Need Apply'” to “The signs were more rare than later Irish-Americans believed they were”. The second claim is the more typical sort of qualified scholarly interpretation that most academic historians offer–easy to modify on further evidence, and even possible to concede in the face of further research. But when you stake yourself on “there was nothing or almost nothing of this kind”, that’s a claim that is only going to hold up if you’ve looked at almost everything.

I often tell students who are preparing grant proposals to never ever claim that there is “no scholarship” on a particular subject, or that there are “no attempts” to address a particular policy issue in a particular community or country. They’re almost certainly wrong when they claim it, and at this point in time, it takes only a casual attempt by an evaluator to prove that they’re wrong.

But it’s not just that Jensen is making what amounts to an extraordinary claim of absence; it’s also his understanding of what presence would mean or not mean, and the crudity of his attempt to quantify presence, that are at issue. There may be many sentiments in circulation in a given cultural moment that leave few formal textual or material signs for historians to find later on. Perhaps I’m more sensitive to this methodological point because my primary field is modern Africa, where the relative absence of how Africans thought, felt and practiced from colonial archives is so much of a given that everyone in that field knows not to overread what is in the archive and not to overread what is not in the archive. But I can only excuse Jensen so far on this point, given how many Americanists are subtle and sensitive in their readings of archives. Meaning, that even if Jensen had been right that “No Irish Need Apply” signs (in ads, in doors, or wherever) were very rare, a later collective memory that they were common might simply have been a transposition of things commonly said or even done into something more compressed and concrete. Histories of racism and discrimination are often histories of “things not seen”.

But of course as Fried demonstrates comprehensively, that’s not the case here: the signage and the sentiment were in fact common at a particular moment in American history. Jensen’s rear-guard defense that an Irish immigrant male might only see such a sentiment once or twice a year isn’t just wrong, it really raises questions about his understanding of what an argument about “commonality” in any field of history should entail. As Fried beautifully says in her response, “The surprise is that there are so many surviving examples of ephemeral postings rather than so few”. She understands what he doesn’t: that what you find in an archive, any archive, is only a subset of what was once seen and read and said, a sample. A comparison might be to how you do population surveys of organisms in a particular area. You sample from smaller areas and multiply up. If even a small number of ads with “No Irish Need Apply” were in newspapers in a particular decade, the normal assumption for a historian would be that the sentiment was found in many other contexts, some of which leave no archival trace. To argue otherwise–that the sentiment was unique to particular newspapers in highly particular contexts–is also an extraordinary argument requiring very careful attention to the history of print culture, to the history of popular expression, to the history of cultural circulation, and so on.
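The sample-and-multiply-up logic above can be made concrete with a toy calculation. Every number here is invented purely for illustration and has nothing to do with Fried’s actual data; the point is only the shape of the inference:

```python
# Toy sketch of "sample from smaller areas and multiply up" (all numbers invented).
# Suppose only a fraction of a decade's surviving newspaper issues can be examined.
surviving_issues = 10_000   # hypothetical count of surviving issues from the decade
sampled_issues = 500        # issues a researcher actually reads
nina_ads_found = 12         # "No Irish Need Apply" ads found in that sample

# Extrapolate the sample rate to the whole surviving population.
rate = nina_ads_found / sampled_issues
estimated_total = rate * surviving_issues
print(f"Estimated NINA ads across all surviving issues: {estimated_total:.0f}")

# And the surviving print record is itself only a sample of the sentiment's
# total circulation (shop-window signs, speech, hiring practice), so even this
# estimate is a floor, not a ceiling.
```

This is why a claim of near-total absence is so much harder to sustain than a claim of presence: a nonzero rate in a small sample implies a much larger presence in the unsampled whole, and ruling that out requires something close to exhaustive search.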

Short version: commonality arguments are hard and need to be approached with care. They’re much harder when they’re made as arguments about rarity or absence.

5) I think this whole exchange is on the one hand tremendously encouraging as a case of how historical scholarship really can have a progressive tendency, to get closer to the truth over time–and it’s encouraging that our structures of participation in scholarship remain porous enough that a confident and intelligent 9th grader can participate in the achievement of that progress as an equal.

On the other hand, it shows why we all have to think really carefully about professional standards if we want to maintain any status at all for scholarly expertise in a crowdsourced world. I’ve said before that contemporary scholars sometimes pine for the world before the Internet because they felt safe that any mistakes they made in their scholarship would have limited impact. If your work was only read by the fifty or so specialists in your own field, and over a period of twenty or thirty years was slowly modified, altered or overturned, that was a stately and respectable sort of process and it limited the harm (if also the benefit) of any bolder or more striking claims you might make. But Jensen’s 2002 article has been cited and used heavily by online sources, most persistently in online debates, but also at sites like History Myths Debunked.

For all the negativity directed at academia in contemporary public debate, some surveys still show that the public at large trusts and admires professors. That’s an important asset in our lives and we have serious collective interest in preserving it. This is the flip side of academic freedom: it really does require some kind of responsibility, much as that requirement has been subject to abuse by unscrupulous administrations in the last two years or so. We do need to think about how our work circulates and how it invites use, and we do need to be consistently better than “the crowd” when we are making strong claims based on research that we supposedly used our professional craft to pursue. It’s good that our craft is sufficiently transparent and transferable that an exceptional and intelligent young person can use it better than a professional of long standing. That happens in science, in mathematics, and other disciplines. It’s maybe not so good that for more than ten years, Jensen’s original claims were cited confidently as the last word of an authenticated expert by people who relied on that expertise.


All Grasshoppers, No Ants

It would be convenient to think that Gawker Media’s flaming car-wreck failure at the end of last week was the kind of mistake of individual judgment that can be fixed by a few resignations, a few pledges to do better, a few new rules or procedures.

Or to think that the problem is just Gawker, its history and culture as an online publication. There’s something to that: Gawker writers and editors have often cultivated a particularly noxious mix of preening self-righteousness, inconsistent to nonexistent quality control, a lack of interest in independent research and verification, motiveless cruelty and gutless double-standards in the face of criticism. All of which were on display over the weekend in the tweets of Gawker writers, in the appallingly tone-deaf decision by the writing staff to make their only statement a defense of their union rights against a decision by senior managers to pull the offending article, and in the decision to bury thousands of critical comments by readers and feature a minuscule number of friendly or neutral comments.

Gawker’s writers and editors, and for that matter all of Gawker Media, are only an extreme example of a general problem that is simultaneously particular to social media and widespread through the zeitgeist of our contemporary moment. It’s a problem that appears in protests, in tweets and blogs, in political campaigns right and left, in performances and press conferences, in corporate start-ups and tiny non-profits.

All of that, all of our new world with such people in it, crackles with so much beautiful energy and invention, with the glitter of things once thought impossible and things we never knew could be. Every day makes us witness to some new truth about how life is lived by people all around the world–intimate, delicate truths full of heartbreaking wonder; terrible, blasphemous truths about evils known and unsuspected; furious truths about our failures and blindness. More voices, more possibilities, more genres and forms and styles. Even at Gawker! They’ve often published interesting writing, helped to circulate and empower passionate calls to action, and intelligently curated our viral attention.

So what is the problem? I’m tempted to call it nihilism, but that’s too self-conscious and too philosophically coherent a label. I’m tempted to call it anarchism, but then I might rather approve than criticize. I might call it rugged individualism, or quote Aleister Crowley about the whole of the law being do as thou wilt. And again I might rather approve than criticize.

It’s not any of that, because across the whole kaleidoscopic expanse of this tumbling moment in time, there’s not enough of any of that. I wish we had more free spirits and gonzo originals calling it like they see it, I wish we had more raging people who just want the whole corrupt mess to fall down, I wish we had more people who just want to tend their own gardens as they will and leave the rest to people who care.

What we have instead–Gawker will do as a particularly stomach-churning example, but there are so many more–is a great many people who in various contexts know how to bid for our collective attention and even how to hold it for the moments where it turns their way, but not what to do with it. Not even to want to do anything with it. What we have is an inability to build and make, or to defend what we’ve already built and made.

What we have is a reflexive attachment to arguing always from the margins, as if a proclamation of marginality is an argument, and as if that argument entitles its author to as much attention as they can claim but never to any responsibility for doing anything with that attention.

What we have is contempt for anybody trying to keep institutions running, anybody trying to defend what’s already been achieved or to maintain a steady course towards the farther horizons of a long-term future. What we have is a notion that anyone responsible for any institution or group is “powerful” and therefore always contemptible. Hence not wanting to build things or be responsible. Everyone wants to grab the steering wheel for a moment or two but no one wants to drive anywhere or look at a map, just to make vroom-vroom noises and honk the horn.

Everyone’s sure that speech acts and cultural work have power but no one wants to use power in a sustained way to create and make, because to have power persistently, in even a small measure, is to surrender the ability to shine a virtuous light on one’s own perfected exclusion from power.

Gawker writers want to hold other writers and speakers accountable for bad writing and unethical conduct. They want to scorn Reddit for its inability to hold its community to higher standards. But they don’t want to build a system for good writing, they don’t want to articulate a code of ethical conduct, they don’t want to invest their own time and care to cultivate a better community. They don’t want to be institutions. They want to sit inside a kind of panopticon that has crudely painted over its entrance, “Marginality Clubhouse”, a place from which they can always hold others accountable and never be seen themselves. Gawker writers want to always be “punching up”, mostly so they don’t have to admit what they really want is simply to punch. To hurt someone is a great way to get attention. If there’s no bleeder to lead, then make someone bleed.

It’s not just them. Did you get caught doing something wrong in the last five years? What do you do? You get up and do what Gawker Media writer Natasha Vargas-Cooper has done several times, doing it once again this weekend in a tweet: whomever you wronged deserved it anyway, you’re sorry if someone else is flawed enough to take offense, and by the way, you’re a victim or marginalized and not someone speaking from an institution or defending a profession. Tea Party members and GamerGate posters do the same thing: both of their discursive cultures are full of proclamations of marginality and persecution. The buck stops somewhere else. You don’t make or build, you don’t have hard responsibilities of your own.

You think people who do make and build and defend what’s made and built are good for one thing: bleeding when you hit them and getting you attention when you do it. They’re easy to hit because they have to stand still at the site of their making.

This could be simply a complaint about individuals failing to accept responsibility for power–even with small power comes small responsibility. But it’s more than that. In many cases, this relentless repositioning to virtuous marginality for the sake of rhetorical and argumentative advantage creates a dangerous kind of consciousness or self-perception that puts every political and social victory, small and large, at risk. In the wake of the Supreme Court’s marriage decision, a lot of the progressive conversation I saw across social media held a celebratory or thankful tone for only a short time. Then in some cases it moved on productively to the next work that needs doing with that same kind of legal and political power, to more building. But in other cases, it reset to marginality, to looking for the next outrage to spark a ten-minute Twitter frenzy about an injustice, always trying to find a way back to a virtuous outside untainted by power or responsibility, always without any specific share in or responsibility for what’s wrong in the world. If that’s acknowledged, it’s not in terms of specific things or actions that could be done right or wrong, better or worse, just in generalized and abstract invocations of “privilege” or “complicity”, of the ubiquity of sin in an always-fallen world.

On some things, we are now the center, and we have to defend what’s good in the world we have knowing that we are there in the middle of things, in that position and no other. To assume responsibility for what we value and what we do and to ensure that the benefits of what we make are shared. To invite as many under our roof as can fit and then invite some more after that. To build better and build more.

What is happening across the whole span of our zeitgeist is that we’ve lost the ability to make anything like a foundational argument that binds its maker as surely as it does others. And yet many of us want to retain the firm footing that foundations give in order to claim moral and political authority.

This is why I say nihilism would be better: at least the nihilist has jumped off into empty space to see what can be found when you no longer want to keep the ground beneath your feet. At least the anarchist is sure nothing of worth can be built on the foundations we have. At least the free spirit is dancing lightly across the floor.

So Gawker wants everyone else to have ethics, but couldn’t describe for a moment what its own ethical obligations are and why they should be so. Gawker hates the lack of compassion shown by others, but not because it has anything like a consistent view about why cruelty is wrong. Gawker thinks stories should be accurate, unless it has to do the heavy lifting to make them so.

In this pattern of desires they are typical, and it’s not a simple matter of hypocrisy. It is more a case of the relentless à la carte-ification of our lives, that we speak and demand and act based on felt commitments and beliefs that have the half-life of an element created in a particle accelerator, blooming into full life and falling apart seconds later.

To stand still for longer is to assume responsibility for power (small or large), to risk that someone will ask you to help defend the castle or raise the barn. That you might have to live and work slowly for a goal that may benefit others in the future, or so that something bigger than any one human being can flourish. To be bound to some ethic or code, to sometimes stand against your own desires or preferences.

Sometimes to not punch but instead to hold still while someone punches you, knowing that you’re surrounded by people who will buoy you up and heal your wounds and stand with you to hold the line, because you were there for them yesterday and you will be there with them tomorrow.


The Production of Stigma

Since Swarthmore seems likely to be stuck debating or struggling over divestment for at least another year, I remain interested in trying to push at the central weakness of the pro-divestment argument.

The major argument of many divestment advocates is that divestment by higher education and other large civic organizations will cumulatively stigmatize fossil fuel producers within public culture. More than a few divestment advocates find it hard to stay “on message” with this idea, and often invoke instead tropes of purity or imply that divestment will produce direct economic pressure on fossil fuel companies by devaluing their shares, but when pressed, the movement generally underscores the stigma concept as their key strategic insight.

I’ve complained before that I think this entire argument is a distraction from other kinds of tactics that might produce more meaningful political and social pressure on fossil fuel producers as well as produce a direct impact on climate change itself. The response of many advocates is that institutions can both divest and pursue other kinds of tactics and work to reduce their own consumption of fossil fuels. For that to be true, divestment advocates would have to stop being scornful or uninterested when other tactics or strategies are being formulated. But let me stop with my own distractedness right here and just home in on one major question: are there good historical examples of the production of stigma from direct political or social action which in turn forced the stigmatized institutions or actors to behave differently, or led to general changes in public outlook that marginalized or disempowered the stigmatized? If so, how closely do those examples resemble the current divestment movement?

That question takes another as prologue: what do we mean by stigma? I suppose you could take stigma as accomplished if people, actions, things or institutions are treated as moral and social pariahs. There needs to be a general social consensus that it is acceptable to mock, despise or shun the target of stigma. Stigma casts its targets out of the social order, and thus also requires ideologies of respectability. Stigma is categorical and even stereotypical: it relieves us of the burden of having to argue case-by-case about why something or someone is wrong. We bundle their wrongness into our common sense. As this definition probably underscores, stigma is a dangerous tool generally, and has far more often been an instrument of oppression or domination than the other way around. That doesn’t necessarily mean that it has no purpose or legitimacy as a goal: stigmatizing racism or fascism, for example, not only seems useful but follows on generations of struggle that should serve as sufficient justification for pushing towards that objective.


So here are some historical examples of stigma production worth considering.

1) Consumer boycotts such as the Nestle boycott, the boycotting of South African wine, or the boycotting of Israeli hummus. These campaigns, I think, by and large serve as good examples of successful direct action. It is possible to change how a proportion of the consuming public perceives a particular product through media campaigns of some kind or another. Some of those campaigns have been methodical and sustained, some of them have been the result of clever or viral strategies. Do they share anything more in common? I think so. First, most of them have involved products that it’s relatively easy to give up, generally single brands or types of a general commodity. Asking people to stop consuming chocolate, wine or hummus generally would have been a much harder sell. Second, most of these campaigns have involved petitionary addresses to the producer asking for a change in the producer’s behavior. That’s a bit more ambitious when it’s aimed at a state or a regime than when it’s aimed at the selling of infant formula, but in all cases, it is at least imaginable that the producer could try to respond positively to the boycott. Third, the stigma in these cases was mostly limited to particular social groups or classes. When the intended stigma applied to a product that the most responsive social group didn’t consume, the campaign was not very successful: high-income liberals already didn’t drink very much Coors, for example. Fourth, successful cases of stigma creation were actually hard to undo or manage. The Nestle boycott has been cancelled and renewed multiple times and at this point I think is quite beyond the ability of organizers to actively manipulate or change. I brought a South African wine to a party five years ago and the host frowned in concern because they couldn’t quite remember why they weren’t supposed to drink it, just that they weren’t.

2) Tobacco. Tobacco has gone from being culturally omnipresent and generally legitimized to being conventionally loathed, tobacco producers have become commonly viewed as synonymous with dishonesty and the destructive pursuit of profit, and smokers have become marginalized, pitied and/or despised. This is probably the closest match to what the fossil fuel divestment movement might have in mind. The stigmatizing of tobacco has moved the tobacco industry from having strong political influence across the nation to being relatively vulnerable politically except in a handful of states. Defending the tobacco industry is almost synonymous with being grossly self-interested.

Because it’s a good model, it’s worth reviewing how it was accomplished.

First, a broad spectrum of campaigns targeting consumption and consumers of tobacco were integral to creating stigma, and most signally, those campaigns worked across many different cultural domains and communities. Public health and medicalization were the most powerful and earliest weapon in the stigma-producing arsenal, but there were many others along the way. Anti-tobacco campaigns brought pressure on consumers through domesticity and family life (the prenatal impact of smoking, the effects of secondhand smoke on family members); through trying to remove romanticized or positive images of smoking in popular culture; through underscoring how smoking made the appearance and smell of smokers unattractive; through emphasizing the pathos of addiction and early death from lung cancer.

Second, the anti-tobacco campaign did an effective job of exposing the manipulations and deceptiveness of the tobacco producers themselves, and that exposure contributed to stigmatizing tobacco by pushing the companies involved to behave in ways that were ever more desperate, cynical and predatory. Big Tobacco stigmatized itself, and this reveals another dimension to the politics of stigma. In a public struggle, behavior that violates common or widely shared moral sentiments (in this case, about truth, honesty, care for others) makes it much easier to create stigma, even if that behavior doesn’t directly relate to the focus of the campaign. E.g., the point was to stigmatize the consumption of tobacco, but if its producers were unsympathetic moral actors, so much the better. This also requires the stigma-producing movement to appear morally superior or preferable to its targets, however.

3) Racism. The point here I think would be that stigma alone can only accomplish so much, and that the more general the target, the less potent it is as a political tool.

It’s true that the civil rights movement and its immediate aftermath did a great deal to make the open expression of racist sentiment disreputable, a shift which still holds to a large extent within American public culture. But only in very limited and particular ways, e.g., political actors and elites who want to make use of racial sentiments or mobilize on a more or less racist basis largely use various codes and ‘dog-whistles’ to accomplish their goals and hide behind plausible deniability. It is almost a case of James Scott’s “weapons of the weak”, only transferred to one subset of the powerful.

What’s worth noting here are the specific requirements for stigmatizing a widespread cultural or social phenomenon that resides in the everyday practices and consciousness of a large proportion of the population. Even the limited and tentative degree of stigma attached to overt racist sentiment required a very overt, aggressive use of the politics of respectability, especially invoking ideas about class and social mobility. It required building a general moral consensus about the harms of racist sentiment as well as of formal structures of racial discrimination. Keeping some sense of stigma in the air has also taken incessant public shaming and regular cultural mobilizations, even in the pre-digital culture of the 1970s-1990s.

4) Same-sex marriage, abortion, premarital sex, divorce, unmarried parenting, etc.

I cite these as examples of practices concerning sexuality, marriage, family, gender, etc. where “stigma” has been highly mobile over time and across social groups, moving in and out of general consensus, and also where “stigma” has been intensely felt and applied to real human beings with very real consequences. In every case, the development or falling away of stigma was also affected by some kind of deliberate social or political action, though many activists involved with these issues have tried to portray shifting sentiments as a natural byproduct of progress (or as a sign of deep-seated devotion to tradition).

Note again that stigma here is not merely spontaneous and purely social but is largely potent and powerful in everyday life because the practice in question also involves either state sanction or prohibition. However, when stigma enters the picture in any of these cases, it does so through moral and emotional language and operates at the level of everyday social relations, not as a matter of dry debate over public policy.

A contrast here could be made to practices that have been in some sense “stigmatized” but did not involve substantial interaction with state authority as their cultural status shifted. Long hair on men, for example. There was still an enforcement mechanism in that case: a man who grew long hair prior to 1970 or so might have been fired from his job, denied service in a place of business, or verbally or physically assaulted in some social situations. What I think the contrast shows is that individual (or even institutional) behavior can shift from stigmatized to legitimate (or vice-versa) more quickly if the state is not involved, and that the shift is more likely to be lasting. But these also tend to be less consequential or potent kinds of practices. Note that even in these instances, stigma and legitimacy operate through highly moralizing, visceral, emotional discourses.

5) Mental illness and alcoholism

Here are two examples of social issues where there has been an earnest attempt over many decades to destigmatize them via medicalization. Given that this effort has been at best only partially successful, what I take away from this is that once stigmatization takes hold, it’s very hard to undo. Shame and disgust are powerful social formations as well as individual psychological experiences. If they’re imposed on a phenomenon whose persistence derives from very deep-seated structural roots, they do not stop or prevent that phenomenon but instead largely aggravate the suffering of individuals and groups who are entangled with it. Stigma may help those who do not suffer from the issue feel more secure or positive about themselves: the sober and the sane feel more self-righteous, more moral, more ‘normal’ via the enforcement of the stigma.

This is especially true if the stigma extends to or demands criminalization. Sex work might be an example of this, given that it is both stigmatized and usually criminalized. Neither does much to prevent sex work itself, but together they make the life of sex workers (and sometimes, but much more rarely, customers) more precarious.


To sum up, if a political struggle wants to use stigma as an instrument, it will need to accept the following as preconditions of success:

1. An embrace of “respectability” as an ideological formation which must make active use of some form of social division or cleavage, and an acceptance of moralizing rhetorics that accompany it. The problem here is that respectability is not an a la carte issue-driven coalition. For respectability to have real power, it has to mobilize across an entire social group, whether that’s class-based or otherwise. It has to operate as manners, as an unspoken everyday orientation towards life. It has to align and assemble assumptions about decency, fairness, righteousness, justice, goodness and attach them to places, people and practices in a somewhat consistent manner.

For campus divestment activists, the requirement to make use of respectability poses a two-fold problem. First, it requires some degree of investment in the cultural capital of the civic institutions being enlisted in the cause. You can’t exalt the trustworthiness and legitimacy of science, universities, churches, and so on only when they’re endorsing divestment but otherwise scorn them as handmaidens of neoliberalism or as defenders of reactionary values. This is not just about being considerate to coalition partners: the point is that because the production of stigma requires operating within the register of respectability, to use it successfully a political struggle has to invest wholesale in the authority of respectable institutions. Second, divestment activists will have to pay more attention to large-scale forms of social consensus if they’re interested in using stigma as a weapon, meaning primarily that gestures that accentuate the radicalism or vanguardism of activists are self-defeating. Those moves only make sense in a politics that is attacking a settled consensus or that is seeking to mobilize a strongly radicalized class fraction, e.g., a politics that doesn’t care about being stigmatized rather than a politics hoping to confer stigma.

2. Moral language gains very little political traction when it is nakedly instrumental and temporary, for the most part. Yes, political leaders can get away with routinely violating the moral principles they otherwise attempt to enforce. David Vitter can be caught with his phone number in a prostitute’s contact list and still claim to be a defender of “traditional family values” on behalf of a highly conservative electorate. But even in these cases, the politician in question still has to agree that he ought to follow those values and perform as if he is sorry for failing to do so. You can’t deploy moralizing language and regard your own moral adherence to that language as a secondary or deferred priority.

To stigmatize successfully, you have to also at least pretend to represent the normative, respectable alternative. For divestment activists, this means that they have to stop treating challenges to their own consumption of fossil fuels as a purely malicious non-sequitur. It may well be so, in the sense that such challenges are usually made as provocations from opponents who are unlikely in any case to be swayed by the divestment argument (or indeed, by any environmental activism). But that’s because those opponents sense this is an area of legitimate vulnerability in relationship to the desired political objective. You cannot seek stigma without using moralizing language, and you cannot use moralizing language without at least performing (sincerely or otherwise) your own comparatively greater moral respectability.

What I think this means is that divestment activists will have to stop insisting that calls for attention to consumption ought to be deferred until after divestment is accomplished, or at best treated as simultaneous with it. In fact, I think they’re failing to understand that the moral authority that makes stigma take hold depends on a driving commitment to the control of fossil fuel consumption as a prior condition of the campaign’s success, and on that commitment being visible in the lives of individuals within the movement as well as in institutions.

3. Following on this, stigma isn’t usually abstract. All the examples I can think of apply to and are strongly felt in the lives of individuals. For fossil fuels, that means one of two things: either stigma will eventually have to apply to the individual lives of consumers or it will have to apply to the individual lives of producers. The former strategy has risks that have long been discussed within the environmental movement: you can campaign to make people feel guilty about Nestle chocolate or South African wine, but stigmatizing individuals over whether they use air conditioning or fuel oil is a different political proposition. Shunning producers as individuals has a lot of appeal, in contrast, in that it creates a set of identifiable villains against whom everyone else can feel righteous. The move to stigmatize the wealthiest 1% has been one of the few things to even slightly restrain the political and social power of current oligarchs. There’s a danger to that approach too, precisely because most people are very familiar with the suffering that shame creates in its targets. Done carelessly, such a campaign creates more, not less, sympathy for its targets. “Which side are you on?” might be an example of being careless: if you’re dishing out stigma, the larger the group of individuals you’re potentially targeting, the more difficult it gets to really stigmatize. Stigma requires a strong majority, even a supermajority consensus, to have much power–if you’re not Amish, you really couldn’t give a shit what the Amish think about your use of technology. Stigma is a strong and dangerous tool that may persist well after it was intended to and apply to targets it wasn’t meant to harm, and most people sense that. I’m not sure that divestment activists recognize what they’re proposing to work with.

4. Eventually stigma will require the enlistment of the state to be really powerful and persistent. The problem here with the divestment movement is the chicken-and-egg logic that the campaign presently relies upon–that it will be the successful creation of stigma against the fossil fuel industry in the public sphere and in everyday life that will compel state action. But almost every example I can think of, good and bad, either started with the enlistment of some part of the government or mobilized state resources prior to stigma really taking hold at the popular level. What I think this suggests in part is that stigma requires a prior condition of political vulnerability in its targets–some degree of social or economic isolation. It may be that the fossil fuel industry is on the cusp of that vulnerability both because of general awareness of climate change and because of the growing economic viability of alternative energy producers. But that means again that divestment might be a distraction from producing a condition of stigma rather than a primary means of accomplishing it. E.g., that there are other things afoot that could benefit from activist support which are making fossil fuel producers vulnerable and creating at least the possibility of governmental action.

Posted in Politics | 3 Comments

The (Ab)Uses of Fantasy

Evidently I’m not alone in thinking that last week’s episode of Game of Thrones was a major disappointment. By this I (and other critics) do not mean that it was simply a case of poor craftsmanship. Instead, it featured a corrosive error in judgment that raised questions about the entire work, both the TV show and the books. Game of Thrones has always been a high-wire act; this week the acrobat very nearly fell off.

In long-running conversations, I’ve generally supported both the violence that GoT is known for and the brutal view the show takes of social relations in its fantasy setting, particularly around gender. Complaints about its violence often (though not invariably) come from people whose understanding of high fantasy draws on a very particular domestication of the medieval and early modern European past that has some well-understood touchstones: a relentless focus on noble or aristocratic characters who float above and outside of their society; a confinement of violence to either formal warfare or courtly rivalry; a simplification (or outright banishment) of the political economy of the referent history; orientalist or colonial tropes of cultural and racial difference, often transposed onto exotic fantasy types or creatures; essentially modern ideas about personality, intersubjectivity, sexuality, family and so on smuggled into most of the interior of the characters.

These moves are not in and of themselves bad. Historical accuracy is not the job of fiction, fantasy or otherwise. But it is also possible that audiences start to confuse the fiction for the referent, or that the tropes do some kind of work in the present that’s obnoxious. That’s certainly why some fantasy writers like China Mieville, Philip Pullman and George R.R. Martin have variously objected to the high fantasy template that borrows most directly from Tolkien. It can lead to a misrecognition of the European past, to the sanctification of elitism in the present (by allowing elites to see themselves as nobility), and also simply to the reduction of creative possibility. If a fantasy writer is going to draw on history, there are histories outside of Europe–but early modern and medieval Europe also suggest other templates.

Martin is known to have drawn on the Wars of the Roses and the Hundred Years War (as did Shakespeare) and quite rightly points out when criticized about the violence in Game of Thrones that his books if anything are still less distressing than the historical reality. It’s a fair point on several levels–not just ‘accuracy’, but that the narrative motion of those histories has considerable dramatic possibility that Tolkienesque high fantasy simply can’t make use of. Game of Thrones is proof enough of that point!

But GoT is not Tuchman’s A Distant Mirror nor any number of other works. A while back, Crooked Timber did a lovely seminar on Susanna Clarke’s novel Jonathan Strange and Mr. Norrell. Most of the commenters focused on the way in which the novel reprises the conflict between romantics and utilitarians in 19th-century Britain, and many asked: so what do you gain by telling that story as a fantasy rather than a history?

To my mind, you gain two things. The first is that there may be deeper and more emotional truths about how it felt to live and be in a past (or present) moment that you can only reach through fiction, and some of those in turn may only be achievable through fiction that amplifies or exaggerates through the use of fantasy. The second is that you gain the hope of contingency. It’s the second that matters to the last episode of Game of Thrones.

Historical fiction has trouble with “what if”? The more it uses fiction’s permission to be “what if”, the more it risks losing its historicity. It’s the same reason that historians don’t like counterfactuals, for the most part: one step beyond the moment of contingency and you either posit that everything would have turned out the same anyway, or you are stuck on a wild ride into an unknown, imaginary future that proceeds from the chosen moment. Fantasy, on the other hand, can follow what ifs as long as it likes. A what if where Franklin decides to be ambassador to the Iroquois rather than the French is a modest bit of low fantasy; a what if where Franklin summons otherworldly spirits and uses the secret alchemical recipes of Isaac Newton is a much bigger leap away, where the question of whether “Franklin” can be held in a recognizable form starts to kick in. But you gain in that move not only a lot of pleasure but precisely the ability to ask, “What makes the late colonial period in the U.S. recognizable? What makes the Enlightenment? What makes Franklin?” in some very new ways.

Part of what governs the use of fantasy as a way of making history contingent is also just storytelling craft: it allows the narratives that history makes available to become more interesting, more compressed, more focused, to conform not just to speculation but to the requirements of drama.

So Game of Thrones has established that its reading of the late medieval and early modern brings forward not only the violence and precarity of life and power in that time but also the uses and abuses of women within male-dominated systems of power. Fine. The show and the books have established that perfectly well at this point. So now you have a character like Sansa who has had seasons and seasons of being in jeopardy, enough to fill a lifetime of shows on the Lifetime channel. And there is some sense of a forward motion in the character’s story. She makes a decision for the first time in ages, she seems to be playing some version of the “game of thrones” at last, within the constraints of her role.

So why simply lose that sense of focus, of motion, of narrative economy? If Monty Python and the Holy Grail had paused to remind us every five minutes that the king is the person who doesn’t have shit on him, the joke would have stopped being funny on the second go. If Game of Thrones is using fantasy to simply remind us that women in its imagined past-invoking world get raped every five minutes unless they are plucky enough to sign up with faceless assassins or own some dragons, it’s not using its license to contingency properly in any sense. It’s not using it to make better stories with better character growth and it is not using it to imagine “what if”? If I want to tell the story of women in Boko Haram camps as if it were suffused with agency and possibility, I would rightly be attacked for trying to excuse crimes, dismiss suffering and ignore the truth. But that is the world that we live in, the world that history and anthropology and political science and policy and politics must describe. Fiction–and all the more, fantasy–have other options, other roads to walk.

There is no requirement for the show to have Sansa raped by Ramsay Bolton, no truth that must be told, not even the requirement of faithfulness to the text. The text has already (thankfully!) been discarded this season when it offers nothing but meandering pointlessness or, in the case of Sansa, nothing at all. So to return suddenly to a kind of conservation of a storyline (“False Arya”) that clearly will have nothing to do with Sansa in whatever future books might one day be written is no justification at all. If it’s Sansa moving into that narrative space, then do something more with that movement: something more in dramatic terms and something more in speculative, contingent terms. Even in the source material Martin wants to use, there are poisoners and martyrs, suicides and lunatics, plotters and runaways that he or the showrunners could draw upon for models of women dealing with suffering and power.

Fantasy means you don’t have to do what was done. Sansa’s story doesn’t seem to me to offer any narrative satisfactions, and it doesn’t seem to make use of fantasy’s permissions to do anything new or interesting with the story and the setting. At best it suggests an unimaginative and desperate surrender to a character that the producers and the original author have no ideas about. At worst it suggests a belief that Game of Thrones’ sense of fantasy has been subordinated to the imperative of “we have to be even grosser and nastier next time”! That’s not fantasy, that’s torture porn.

Posted in Popular Culture, Sheer Raw Geekery | 5 Comments

The Ground Beneath Our Feet

I was a part of an interesting conversation about assessment this week. I left the discussion thinking that we had in fact become more systematically self-examining in the last decade in a good way. If accrediting agencies want to take some credit for that shift, then let them. Complacency is indeed a danger, and all the more so when you have a lot of other reasons to feel confident or successful.

I did keep mulling over one theme in the discussion. A colleague argued that we “have been, are and ought to be” committed to teaching a kind of standardized mode of analytic writing and that therefore we have a reason to rigorously measure across the board whether our students are meeting that goal. Other forms of expression or modes of writing, he argued, might be gaining currency in the world, but they shouldn’t perturb our own commitment to a more traditional approach.

I suppose I’m just as committed to teaching that kind of writing as my colleague, for the same reasons: it has a lot of continuing utility in a wide variety of contexts and situations, and it reinforces other less tangible habits of thought and reflection.

And yet, I found myself unsettled on further reflection about one key point: that it was safe to assume that we “are and ought to be” committed. It seems to me that there is a danger to treating learning goals as settled when they’re not settled, just as there is a danger to treating any given mix of disciplines, departments and specializations at a college or university as something whose general stability is and ought to be assured. Even if it is probable that such commitments will not change, we should always act as if they might change at any moment, as if we have to renew the case for them every morning. Not just for others, but for ourselves.

Here’s why:

1) even if a goal like “teaching standard analytic writing” is absolutely a bedrock consensus value among faculty and administration, the existence of that consensus might not be known to the next generation of incoming students, and the definition of a practice familiar to faculty might be unfamiliar to those students. When we treat some feature of an academic environment as settled or established, there almost doesn’t seem to be any reason to make it explicit, or to define its specifics, and so if students don’t know it, they’ll be continuously baffled by being held accountable to it. This is one of the ways that cultural capital acts to reproduce social status (or to exclude some from its reproduction): when a value that ought to be disembedded from its environment, described and justified is instead treated as an axiom.

2) even if something like “teaching analytic writing” is absolutely a bedrock consensus value among faculty, if some in a new generation of students consciously dissent from that priority and believe there is some other learning goal or mode of expression which is preferable to it, then faculty will never learn to persuade those students, and will have to rely on a brute-force model to compel students to comply. Sometimes that works in the same way that pulling a child away from a hot stove works: it kicks the can down the road to the moment when those students will recognize for themselves the wisdom of the requirement. But sometimes that strategy puts the goal itself at risk by exposing the degree to which faculty themselves no longer have a deeply felt or well-developed understanding of the value of the requirement they are forcing on their students.

3) Which leads to another point: what if the presumed consensus value is not a bedrock consensus value even among faculty? If you assume it is, rather than treating the requirement as something that needs constantly renewed investigation, you’ll never really know if an assumed consensus is eroding. Junior and contingent faculty may say they believe in it but really don’t, which contributes to a moral crisis in the profession, where the power of seniority is used to demand what ought to be earned. Maybe some faculty will say they believe in a particular requirement but don’t actually practice it well themselves. That’s corrosive too. Maybe some faculty say they believe in it, but what they think “it” is is not what other people think it is. You’ll never know unless the requirement or value is constantly being revisited.

4) Maybe there is genuine value-based disagreement or discord within the faculty that needs to be heard, and the assumption of stability is just riding roughshod over that disagreement. That’s a recipe for a serious schism at some point, perhaps at precisely the wrong moment for everyone on all sides of that kind of debate.

5) Maybe the requirement or value is a bedrock consensus value among faculty but it absolutely shouldn’t be–e.g., the argument about that requirement is between the world as a whole and the local consensus within academia. Maybe everything we think about the value we uphold is false, based on self-referring or self-validating criteria. At the very least, one should defy the world knowingly, if one wants to defy the world effectively.

I know it seems scary to encourage this kind of sense of contingency in everything we do in a time when there are many interests in the world that wish us ill. But this is the part of assessment that makes the most sense to me: not measuring whether what we do is working as intended (though that matters, too) but asking every day in a fresh way whether we’re sure of what we intend.

Posted in Academia, Defining "Liberal Arts", Swarthmore | 2 Comments

Apples for the Teacher, Teacher is an Apple

Why does AltSchool, as described in this article, as well as similar kinds of tech-industry attempts to “disrupt” education, bug me so much? I’d like to be more welcoming and enthusiastic. It’s just that I don’t think there’s enough experimentation and innovation in these projects, rather than there being too much.

The problem here is that the tech folks continue to think (or at least pretend) that algorithmic culture is delivering more than it actually is in the domains where it has already succeeded. What tech has really delivered is mostly just the removal of transactional middlemen (and of course the addition of new transactional middlemen–in a really frictionless world, the network Uber has established wouldn’t need Uber, and we’d all just be monetizing our daily drives on an individual-to-individual basis).

Algorithmic culture isn’t semantically aware yet. When it seems to be, it’s largely a kind of sleight-of-hand, a leveraging and relabelling of human attention, or a computational brute-forcing of delicate tasks that our existing bodies and minds handle easily, the equivalent of trying to use a sledgehammer to open a door. Sure, it works, but you’re not using that door again, and by the way, try the doorknob with your hand next time.

I’m absolutely in agreement that children should be educated for the world they live in, developing skills that matter. I’m also in agreement that it’s a good time for radical experiments in education, many of them leveraging information technology in new ways. But the problem is that the tech industry has sold itself on the idea that what it does primarily is remove the need for labor costs in labor-intensive industries, which just isn’t true for the most part. It’s only true for jobs that were (or still are) rote and routinized, or that were deliberate inefficiencies created by middlemen. Or on the idea that tech will solve problems that are intrinsic to the capabilities of a human being in a human body.

So at the point in the article where I see the promise that tech will overcome the divided attention of a humane teacher, I both laugh and shudder. I laugh because it’s the usual tech-sector attempt to pretend that inadequate existing tech will become superbly useful tech in the near-term future simply because we’ve identified a need for it to be (Steve Jobs reality distortion field engaged) and I shudder because I know what will happen when they keep trying.

The central scenario in the article is this: you build a relatively small class with a relatively well-trained, attentive, human teacher at the center of it. So far so good! But the tech, ah the tech. That’s there so that the teacher never has to experience the complicated decision paths that teachers presently experience even in somewhat small classes. Right now a teacher has to decide sometimes in a day which students will get the lion’s share of the attention, has to rob Peter to pay Paul. We can’t have that in a world where every student should get all the attention all the time! (If nothing else, that expectation is an absolutely crystallized example of how the new tech-industry wealthy hate public goods so very much: they do not believe that they should ever have to defer their own needs or satisfactions to someone else. The notion that sociality itself, in any society, requires deferring to the needs of others and subsuming one’s own needs, even for a moment, is foreign to them.)

So the article speculates: we’ll have facial recognition software videotaping the groups that the teacher isn’t working with, and the software will know which face to look at and how to compress four hours of experience into a thirty-minute summary to be reviewed later, and it will also know when there are really important individual moments that need to be reviewed in depth.

Here’s what will really happen: there will be four hours of tape made by an essentially dumb webcam and the teacher will be required to watch it all for no additional compensation. One teacher will suddenly not be teaching 9-5 and making do as humans must, being social as we must. That teacher will be asked to review and react to twelve or fourteen or sixteen hours of classroom experience just so the school can pretend that every pupil got exquisitely personal, semantically sensitive attention. The teacher will be sending clips and materials to every parent so that this pretense can be kept up. When the teacher crumbles under the strain, the review will be outsourced, and someone in a silicon sweatshop in Malaysia will be picking out random clips from the classroom feed to send to parents. Who probably won’t suspect, at least for a while, that the clips are effectively random or even nonsensical.

When the teacher isn’t physically present to engage a student, the software that’s supposed to attend to the individual development of every student will have as much individual, humane attention to students as Facebook has to me. That is to say, Facebook’s algorithms know what I do (how often I’m on, what I look at, what I tend to click on, when I respond) and it tries (oh, how it tries!) to give me more of what I seem to do. But if I were trying to learn through Facebook, what I need is not what I do but what I don’t! Facebook can only show me a mirror at best; a teacher has to show a student a door. On Facebook, the only way I could find a door is for other people–my small crowd of people–to show me one.

Which is probably another way that AltSchool will pretend to be more than it can be, the same way all algorithmic culture does–to leverage a world full of knowing people in order to create the Oz-like illusion that the tools and software provided by the tech middleman are what is creating the knowledge.
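The mirror-not-a-door point can be made concrete with a toy sketch. This is purely illustrative: the function, data, and topic labels are all invented here, and no real platform’s recommender is this simple. Still, it shows the structural limit: a system that scores candidates only against past clicks can rank the familiar but can never assign any weight to the unfamiliar thing a learner actually needs.

```python
# Toy engagement-driven recommender (illustrative only).
# It scores each candidate item purely by overlap with topics the
# user has already clicked on, so it can only return a mirror.
from collections import Counter

def recommend(click_history, candidates, k=3):
    """Rank candidates by how much they resemble past clicks."""
    seen_topics = Counter(
        topic for item in click_history for topic in item["topics"]
    )
    def score(item):
        # Topics never clicked on contribute zero, no matter their value.
        return sum(seen_topics[t] for t in item["topics"])
    return sorted(candidates, key=score, reverse=True)[:k]

history = [{"topics": ["comics"]}, {"topics": ["comics", "movies"]}]
candidates = [
    {"name": "more comics", "topics": ["comics"]},
    {"name": "algebra lesson", "topics": ["math"]},  # the "door": scores zero
]
print(recommend(history, candidates, k=1))
```

The "algebra lesson" can never surface, however pedagogically important it might be, because nothing in the user’s history gives its topic a nonzero score. That is the whole objection in miniature: optimizing for what I do cannot find what I don’t.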

Our children will not be raised by wolves in the forest, but by anonymously posted questions answered on a message board by a mixture of generous savants, bored trolls and speculative pedophiles.

Posted in Academia, Digital Humanities, Information Technology and Information Literacy | 4 Comments

Hearts and Minds

Much as I disliked Jonathan Haidt’s recent book The Righteous Mind overall, I’m quite interested in many of the basic propositions that this strain of cognitive science and social psychology is proposing about mind, consciousness, agency, responsibility and will. What frustrates me most is not how unsettling the scholars writing in this vein are but how much they domesticate their arguments or avoid thinking through the implications of their findings.

When we read The Righteous Mind together at Swarthmore, for example, one of my chief objections to Haidt’s own analysis was that he simply asserts that what he and others have called WEIRD psychosocial dispositions (Western, Educated, Industrialized, Rich and Democratic) at some point emerged in recent human history (as the acronym suggests) and have never been common or universal at any point since, including now. Haidt essentially leverages that claim into an argument that “conservative” dispositions are the real universal, which I don’t think he even remotely proves, and then gets even more into the weeds by suggesting that people with WEIRD-inflected moral dispositions would accomplish more of their social and political objectives if only they acted somewhat less WEIRD. The argument achieves maximum convolution in Haidt when he seems to suggest that he prefers WEIRD outcomes, because he’s largely stripped away the ground on which he or anyone else could argue for that preference as something other than the byproduct of a cognitive disposition. Why are those outcomes preferable? If they are preferable in terms of some kind of fitness, that they produce either better individual or species-level outcomes in terms of reproduction and survival, presumably that will take care of itself over time. If they are preferable because of some other normative rationale, then where are we getting the capacity for reason that allows us to recognize that? Is it WEIRD to think of WEIRD, in fact? Is The Righteous Mind itself just a product of WEIRD cognitive dispositions? (E.g., the proposition that one should write a book which is based on research which argues that the writing of books based on research should persuade us to sometimes make moral arguments that do not derive their force from the writing of books based on research.)


Many newer cognitivist, evolutionary-psychological and memetics-themed arguments get themselves into the same swamp. Is memetics itself just a meme? What kind of meme reproduces itself more readily by revealing its own character? Is “science” or “rationality” just a fitness landscape for memes? Daniel Kahneman at least leaves room for “thinking slow”, which is potentially the space inhabited by science, but the general thrust of scholarly work in these domains makes it harder and harder to account for “thinking slow”, for a self-aware, self-reflective form of consciousness that is capable of accurately or truthfully understanding some of its own conditions of being.

But it isn’t just cognitive science that is making that space harder and harder to inhabit. Various forms of postmodern and poststructuralist thought have arrived at some similar rebukes to various forms of Cartesian thinking via some different routes. So here we are: the autonomous self driven by a rational mind with its own distinctive individual character and drives is at the very least a post-1600 invention. This to my mind need not mean that the full package of legal, institutional and psychological structures bound up in that invention are either fake impositions on top of some other “real” kind of consciousness or sociality, nor that this invention is always to be understood as and limited to a Eurocentric imposition. “Invention” is a useful concept here: technologies do not drift free of the circumstances of their creation and dissemination but they can be powerfully reworked and reinterpreted as they spread to other places and other circumstances.

Still, if you believe the new findings of cognitivists, we may be at the real end of that way of thinking about the nature of personhood and identity, and thus maybe at the cusp of experiencing our sense of selfhood differently as well. I think this is where I really find the new cognitivists lacking in imagination, to the point that I end up thinking that they don’t really believe what their own research supposedly shows. If they’re right (and this might apply to some flavors of poststructuralist conceptions of subjectivity and personhood, too), then most of our social structures are profoundly misaligned with how our minds, bodies and socialities actually work. What makes me most queasy about a lot of contemporary political and social discourse in the US in this respect is how unevenly we invoke psychologically or cognitively inflected understandings of responsibility, morality, and capacity. Often we seem to invoke them when they suit our existing political and social commitments or prejudices and forget them when they don’t. About which Haidt, Kahneman and others would doubtless say, “Of course, that’s our point”–except that if you believe that’s true, then that would apply to their own research and the arguments they make about its implications, that cognitivism is itself evidence of “moral intuitions”.


Think for example about the strange mix of foundational assertions that now often govern the way we talk about the guilt or innocence of individuals who are accused of crimes or of acting immorally. There’s always been some room for debating both nature and nurture in public disputes over criminality and immorality in the US in the 19th and 20th centuries, but the mix now is strikingly different. If you take much of the new work in cognitive science seriously, its implications for criminal justice systems ought to be breathtakingly broad and comprehensive. It’s not clear that anyone is ever guilty in the sense that our current systems assume that we can be, e.g., that as rational individuals, we have chosen to do something wrong and should be held accountable. It’s equally unclear whether we can ever be expected to accurately witness a crime, nor that we are ever capable of accurately judging the guilt or innocence of individuals accused of crimes without being subject both to cognitive bias and to large-scale structures of power.

But even among the true believers in the new cognitive science, claims this sweeping are made at best fitfully, and equally many of us in other contexts deploy cognitive views of guilt, responsibility and evidence only when they reinforce political or social ideologies that we support. Many of us (including myself) argue for the diminished (or even absent) responsibility of at least some individuals for behaving criminally or unethically when we believe that they are otherwise the victims of structural oppression or that they are suffering from the aftermath of traumatic experience. But some of us then (including myself) argue for the undiminished personal-individual-rational responsibility of individuals who possess structural power, regardless of whether they have cognitive conditions that might seem to diminish responsibility or have suffered from some form of social or experiential trauma.

Our existing maps of power don’t overlay very well in some cases onto what the evidence of the new cognitive science might try to tell us, or even sometimes onto other vocabularies that try to escape a Cartesian vision of the rational, self-ruling individual. A lot of cultural anthropology describes bounded, local forms of reason or subjectivity and argues against expecting human beings operating within those bounds to work within some other form of reason. We try to localize or provincialize any form of reason, all modes of subjectivity, but then we often don’t treat the social worlds of the powerful as yet another locality, we don’t try for an emic understanding of how particular social worlds of power see and imagine the world, but instead actually treat many social actors in those worlds as if they are the Cartesian, universal subjects that they claim to be, and thus hold them responsible for what they do as if they could have seen and done better from some point of near-universal scrutiny of the rational and moral landscape of human possibility.


From whatever perspective–cognitive science, poststructuralism, cultural anthropology, and more–we keep reanimating the Cartesian subject and the social and political structures that were made in its name even when we otherwise believe that minds, selves, consciousness and subjectivity don’t work that way and ought not to work that way. I think at least to some extent this is because we either cannot really imagine the social and political structures that our alternative understandings imply (and thus resort to metaphors: rhizomes, etc.) or because we can imagine them quite well and are terrified by them.

The new cognitivism or evolutionary psychology, if we took it wholly seriously, would either have to tolerate a much broader range of behaviors now commonly defined as crimes and ethical violations as being natural (because where could norms that argue against nature possibly come from, save perhaps from some countervailing cognitive or evolutionary operation) or alternatively would have to approach crime and ethical misbehavior through diagnosis rather than democracy.

The degree to which poststructuralism of various kinds averts its anticipatory gaze when actually confronted by institutionalizations of fragmented, partial or intersectional subjectivity (as opposed to pastward re-readings of subjects and systems now safely dead or antiquated) is well-established. We hover perpetually on the edge of provincializing Europe or seeing the particularity of whiteness because to actually do it is to establish the boundedness, partiality and fragility of subjects that we otherwise rely upon to be totalizing and masterful even in our imagination of how that center might eventually be dispersed or dissolved.

I’m convinced that the sovereign liberal individual with a capacity (however limited) for a sort of Cartesian rationalism was and remains an invention of a very particular time and place and thus was and remains something of a fiction. What I’m not convinced of is whether any of the very different projects that either know or believe in alternative ways of imagining personhood and mind really want what they say they want.

Posted in Academia, Oh Not Again He's Going to Tell Us It's a Complex System, Politics | 7 Comments

“The Child Repents and Is Forgiven”

I occasionally out myself here at this blog, on Facebook or at Swarthmore as having a fairly encyclopedic knowledge about mainstream superhero comics, like a few other academics, but I’ve been much less inclined to make even a limited foray into either comics scholarship or comics blogging than I have with some of the other domains of popular culture that I know fairly well from my own habits of fan curation and cultural consumption.

Nevertheless, I’ve followed many comics blogs since the mid-2000s, most of which have traversed the same arc as academic blogs or any other kind of weblogs: from a small subculture dominated by strong personalities who were drawn to online writing for idiosyncratic reasons to a more professionalized, standardized, and commercialized mode of online publication. Two days ago, a well-known male comic blogger named Chris Sims who had moved from maintaining his own early personal blog to paid writing on a shared platform blog called Comics Alliance wrote an apology for having bullied and harassed a female blogger, Valerie D’Orazio, back in that earlier era of online writing.

The timing of the apology, as it turns out, was at least partly a result of Sims breaking through from comics blogging to actually writing a major mainstream title for Marvel, an X-Men comic intended to be a nostalgic revisitation of those characters as they were in the early 1990s. News of his hiring led to D’Orazio writing about how hard that was for her to stomach, especially given that his bullying was aimed particularly at her after she was given a similar opportunity to write a mainstream Marvel Comics title.

There’s more to it all (there always is), including an assertion by some that “Gamergaters” are somehow involved in stirring this up, but I want to take note of two separate and interesting aspects of this moment.

The first is an excellent reprise of the full discursive history involved in this controversy by Heidi MacDonald. Not only does MacDonald add a lot of nuance to the controversy while remaining very clear on the moral landscape involved, she ends up providing a history of blogging and social media that might be of considerable interest to digital humanists who otherwise have no interest in comics as a genre. In particular, I think MacDonald accurately identifies how blogging used to be a highly individualized practice within which particular writers had surprising amounts of influence over the domains that drew their attention but also had largely undiscussed and unacknowledged impact on the psychological and personal lives of other bloggers, for good and ill. In a sense, the early blogosphere was a more direct facsimile of the post-1945 “republic of letters” than we’ve often realized: bloggers behaved in many ways just as print critics and pundits behaved, with rivalries and injuries inflicted upon one another but also with relational support and mutuality. Where they were interested in a cultural domain that had almost no tradition of mainstream print criticism attached to it (or where that domain had been especially confined or limited in scope), the new blogosphere often had a surprisingly intense impact on mainstream cultural producers. I’m recalling, for example, how very briefly before I started a formal weblog I published some restaurant reviews alongside some academic materials on a static webpage, and immediately got attention from some area restaurants and from some local journalists, which I hadn’t really meant to do at all.

MacDonald underscores the difference between this early environment and now, especially in terms of identity politics. It really is not just a story of going from individual curation of a subculture to a more mainstream and commercial platform, but also of how much attention and discourse in contemporary social media no longer really reproduces or enacts that older “republic of letters”. Attention in the early blogosphere was as individually curated as the blogs themselves, and commentariats tended to be much more fragmented and particular to a site. Now commentariats are much larger in scale, much less invested in the particular culture of a particular location for content, and are directed in their attention by much more palpably algorithmic infrastructures. This is sometimes good, sometimes bad, but is at the least very different.

The second aspect of the Sims controversy that interests me is the very active debate in various comments sections about whether Sims should be forgiven (by D’Orazio or anyone else). This has become a common discursive structure in the wake of controversies of this kind. Not just a debate over what the proper rhetorical and substantive composition of contrition should be, but whether the granting of forgiveness is either a good incentive for producing similar changes in the consciousness of past and present offenders or is an attempt to renormalize and cover-up harassment by placing it perpetually pastward of the person making a pro forma apology.

One of the key issues in that ongoing debate is whether the presence of self-interest so contaminates an apology as to make it worthless. E.g., if Sims has to go public in order to keep his job offer from Marvel intact, then is that a sign that he doesn’t really mean it, and thus that his apology is worthless?

I think the discussion about the dangers of renormalization, of quickly kicking over the traces, is valid. But here I’d suggest this much: if male (or white, etc.) cultural producers, professionals, politicians, etc., come to feel that their ability to succeed professionally depends upon acknowledging bad behavior in the past and committing to a different kind of public conduct in the present, then that’s a sign of successful social transformation. The presence of self-interest doesn’t invalidate a public apology, but instead documents a new connection between professionalism, audiences and success. That might turn out to be a bigger driver of change than waiting for a total and irrefutable transformation of innermost subjectivity.

Posted in Blogging, Politics | 1 Comment

Raise the Barn/Autopsy the Corpse

A more detailed thinking-through of the case of Sweet Briar, and a proposal.

Five places to start a dissection of Sweet Briar College and the decision of its Board to close the school:

Laura McKenna, “The Unfortunate Fate of Sweet Briar’s Professors”.

Jack Marshall, “The Sweet Briar Betrayal”.

Roanoke Times Editorial Board, “Our View: Sweet Briar Board Should Resign”.

Brian C. Mitchell, “The Crack in the Faberge Egg”

Deborah Durham, “Suddenly Liminal: Reflections on Sweet Briar College Closing”

The thinking through. The more the details come out, the odder the decision to close appears. Sweet Briar had more liabilities and debts than its endowment size might suggest, and it clearly lacked a strategic plan that could provide answers to its shrinking enrollments. But to close so suddenly, while under the leadership of an interim President, and with no leadership in its Admissions office, makes little sense. The faculty and staff had spent a year considering plans. Why not hire a “crisis President” and take a shot at some of those plans? Surely there’s someone talented out there who would relish the chance to turn around a college in crisis. And surely the current students would appreciate their loyalty to the institution being rewarded by such an effort, rather than being pushed out the door allegedly for their own best interests. I think it’s reasonable to wonder if there isn’t a plan that isn’t being disclosed–perhaps that the only way to fully void Indiana Fletcher Williams’ will is to go completely out of business?

The proposal. If the current faculty and staff and students of Sweet Briar would welcome it, why not gather some current provosts, presidents, senior staff and faculty of liberal arts colleges together at Sweet Briar or nearby for a weekend-long summit that reviews the plans composed over the last year and suggests other possible solutions? A sequel, perhaps, to the meeting that the former President of Swarthmore Rebecca Chopp and the outgoing President of Haverford Dan Weiss organized at Lafayette College in 2012.

If there’s little interest among current faculty, staff and students at Sweet Briar, then there’s no point to trying to have such a meeting in a time-sensitive, hastily-organized way. But even if they aren’t interested, I think there should be such a meeting in the next two years, as a post-mortem. I do not accept the thought that some (including McKenna) offer that Sweet Briar is a sign of the imminent death of the small liberal-arts college, in no small measure because I don’t even think Sweet Briar was doomed to die.


Reading about the discussions that have been going on at Sweet Briar itself for the last year, I think it’s clear that folks there understood some of what they’d have to do to be viable, and that some of what they’d have to do would be hard to achieve, especially for faculty. Even in a situation of existential threat, it’s very difficult for faculty to dramatically reimagine the structure of a curriculum and the nature of their professional practices, and to find a way to systematically reduce the size of a faculty. You can’t have over one hundred faculty positions and only 500 students. You can’t have more than two hundred non-faculty employees and have only 500 students either.

This would be job #1 of a potential “emergency summit”: redesign a small college curriculum so that it has 75 or fewer faculty positions and yet retains intellectual and philosophical coherence. Typically when senior administrators are brought in to cut positions at (or “detenure”) an institution, they do it by finding out which departments have the lowest enrollments, or by finding out which departments are the most politically hapless or exposed. That’s the wrong way to do it no matter what the crisis is, but it’s especially wrong in a situation where the institution itself has an identity problem.

Brian Mitchell’s “Faberge” essay points out that the small liberal-arts colleges that have scrambled to build highly distinctive, imaginative or innovative programs, or have restructured their overall institutional emphasis, are doing ok, precisely because they have something to offer prospective students beyond “small and liberal-arts”. St. John’s College is the classic established example of such a program, but there are many others: Berea College, College of the Atlantic, Quest University, Colorado College, Hampshire College. At the Lafayette meeting I mentioned, I was really struck at how many other small colleges with more limited resources were doing really creative things–and like Mitchell, I was also struck that the wealthiest and best-known liberal arts colleges were dramatically more risk-averse and mainstream.

I’m certain that there are ways to organize a faculty of fifty or seventy-five intellectuals and scholars that channel their teaching and engagement to great effect without having to offer forty-six majors, minors and certificates. I often despair of getting my colleagues at Swarthmore to grasp this same point, that a small college, even a rich one, has a choice between being a great small college or a shitty little university. The more programs a small college tries to have, the more fields it feels it must represent, the more specializations it feels it requires, the more it’s choosing to be a shitty little university. Faculty are usually the ones driving that kind of choice: this is one thing we can’t blame the administrators for. So unless a summit to #SaveSweetBriar was willing to dramatically reimagine what studying at Sweet Briar could entail, and accept that not every job can be saved, this meeting I’m proposing has to be a post-mortem that will warn the living rather than save the patient.

Job #2 is also clearly something that the faculty and senior staff at Sweet Briar are painfully conscious about, which is to break some of the restrictions surrounding the gifts that founded and sustained the college. But it’s been done: Sweet Briar found a way to get loose of the initial requirement that its students be white. Even if Sweet Briar were to remain a college for women, it could have a dynamic admissions strategy that sought out students from outside the United States, and non-traditional students inside the U.S. (which might then influence the curricular redesign in #1).

Job #3 is to look at the financial picture after #1 and #2 and see what else the institution can do more cheaply or not at all. People who imagine that there’s a lot of waste in a budget, any budget, are almost always wrong. But there might be administrative operations that a small college with a newly envisioned mission doesn’t need to pursue. And stop hiring consultants: that would be another purpose for this summit, to build a “pro bono” network of peer experts who can pitch in until the college is stabilized. The summit could look with fresh eyes at the day-to-day operations of the college and see what makes sense and what doesn’t make sense going forward.

Job #4 is a capital campaign that follows straight off of #SaveSweetBriar. Use the redesigned, reimagined curriculum as a selling point to bring in new supporters, as well as tap the obviously considerable goodwill of Sweet Briar’s established donor base. I think a summit could at least help lay the groundwork for such a campaign.

This is obviously ambitious for a weekend, especially if it’s a meeting convened on short notice. But I don’t think it’s completely implausible.

If this ends up being a post-mortem instead, then the review of the issues involved could be broader, but I still think might follow the same rough contours: curricular design, admissions practices, donor practices, fiscal restraint (that avoids being austerity). All of it aimed at asking: how can liberal-arts colleges avoid making the same mistakes? What do we have to do in order to secure our collective future?

Posted in Academia, Swarthmore | 7 Comments