Don’t Panic! Leave That to the Experts

In many massively multiplayer online games (MMOGs), players who are heavily invested in the game (sometimes just in terms of time, but occasionally both time and money) often group together in organized collaborations, usually called guilds.

Guilds pool resources and tightly schedule and organize the activities of the members. This is typically a huge advantage in MMOGs, where many players either work together only temporarily with strangers, play completely by themselves, or belong to guilds that only offer weak or fitful organization. Many MMOGs tune the gameplay so that the most difficult challenges require this level of elite coordination. The rewards for overcoming these challenges typically have an accumulative effect, allowing the elites to overcome still more difficult challenges and to easily defeat other players in direct combat or competition. The virtual goods and powers obtained through elite coordination visually distinguish the members of these guilds when their characters are seen within the public spaces of the gameworld.

As in any status hierarchy, these advantages are only meaningful if the vast majority of participants do not and cannot obtain the same rewards. So the elite guilds in some sense have a very strong incentive to keep everyone else around. A gameworld abandoned by everyone but the elite stops being fun even for them. This is especially acute when the collaboration within a heavily invested group of elite players extends to keeping their advantage over others through pooling insider knowledge about the game systems, or even to protecting knowledge about a bug or flaw in the game systems which can be potentially exploited by everyone.

To give an example, one of the “virtual world” MMOGs that I spent considerable time studying a few years ago was called Star Wars Galaxies. It was a notable turning point in the history of game design in many ways, most of them not particularly happy, but it did give players a very significant amount of control over the gameworld and, in particular, had a vigorous design infrastructure for allowing players to compete with each other within a virtual economy. Players could produce a wide variety of items for other players, and the very best of these in terms of power and utility were rare, difficult to make and worth a good deal of money, especially very early in the history of the game. In order to produce the best items, a player had to spend an immense amount of time making inferior items and incrementally increasing their skills.

But early in the game there was a bug. If you knew about it, you could gain a huge amount of incremental skill increase in a very compressed amount of time. So almost immediately after the game went live, there were a small number of players who could make the very best items that conferred enormous power on the owners of those items, literally weeks before it was even possible for anyone to have gained that level of skill. Naturally the wealth they accumulated was equally disproportionate, and that advantage remained permanent, because the developers chose not to strip away that benefit after fixing the bug. By the time everyone else caught up, the early exploiters–who had shared the secret with each other but not everyone else–were essentially a permanent class of plutocrats.

It keeps happening in such games. There’s almost no point to being a new player in games like DayZ or Ark, for example, unless you’re playing on a small server with a group of trusted friends. Even if there were no hacks or exploits, the established players have such enormous advantages that any new player will find again and again that whatever time they invest in gathering resources and making weapons and shelters will be stolen by elite groups of established players. But the established players have a problem too: they need a large group of victims to invest in the game. That’s where the easiest source of wealth is for them: much better to have a hundred newbies labor for two days and to steal what they’ve made in five minutes than it is to directly compete with an equally elite and invested group of rivals. So they need to talk up how fun the game is, to establish it as a phenomenon, maybe even sometimes to show selective mercy, to offer newbies a kind of protection-racket breathing space, to treat them like an exhaustible resource. (Not for nothing do players sometimes speak of “farming” another player as well as some aspect of the gameworld.)

Why is this on my mind today? Well, for one, I’ve been working off and on over the summer on trying to write about virtual worlds. But for another, I can’t help but think about the analogies I see between these experiences and the stock market.

—————-

In the middle of a sharp downturn like this one, there are expert investors who come on the radio, the television, the Internet. “Don’t panic,” they say. “Don’t sell. You’re in it for the long haul! That’s what the experts do.”

These appearances also offer many earnest attempts to explain the underlying reasons for the downturn. “It’s China!” “It’s the emerging markets!” “It’s the price of oil dropping!” “It’s the Fed raising rates!” Some of this frantic offering of explanation seems to me to have the same reassuring intent as “Don’t panic”. It is an attempt to rationalize the change, to relate it to something real in the world. In some cases, this offers the investor (small or large) an opportunity to calculate their own risk. “Ah, it’s China. Well, China’s government will find a way to fix it, I would guess.” “Oh, it’s the emerging markets! I always thought those were fishy, I think I’ll reduce my exposure.”

In some cases, I think these explanations are a form of pressure–even blackmail–directed against governments. “Don’t raise the rates, Fed, we like that easy money–so if you do, you’ll ‘shake investor confidence’ even more, and you wouldn’t want that, would you?” We saw that back in 2008, after all: it is the logic of “too big to fail”. Do this, don’t do that, or we’ll pundit the shit out of the investment economy and create a real panic.

Scattered amid the explanations are also some earnest attempts to argue that there is no explanation, to treat the market as a naturalistic object whose behavior is beyond human agency and not well understood by human science. “We’re still not sure about dark matter, and we’re still not sure why the stock market did that.” This too is a kind of reassurance, and is often followed by the reminder not to panic. “It’ll go up again, it just does that, don’t worry.” I think there’s something to that: the 21st Century market is a cybernetic mass brain that thinks in strange ways and reacts at speeds that we have never lived with before.

What I darkly fear is what I think might be said but never is. After all, the experts say, “Don’t panic, don’t sell, you’re in it for the long haul”, but some of them panicked, or at least their high-frequency trading computers did. Sure, maybe someone else’s Skynet is buying it all, but this wouldn’t happen if it were just Mom and Pop investors getting nervous about China. And I think to myself, “This is like a guild that’s discovered a bug.” They need everyone to stay in so that they can farm them some more. They need to herd the cattle down the soothing Temple Grandin-style chutes. What I fear is that some of the explanation is neither “There is a rational thing that is causing this all” nor “This is something so complex that it just does things now and again that no one understands”, but instead “We have trouble here in River City”.

The problem is that in 1987 or in 2001 the expert could also say, “If you’re afraid, then after the next rally, move your money into a safe harbor, stay out of the market.” There is no staying out any longer. That’s the other thing that’s changed because of income inequality, because of the way the elite guilds have changed the game. Nothing’s really safe as an investment. Nothing’s really safe as a life or a career. Our institutions (and even our government, especially when it pays pensions) are part of the asset class now. If you just earn a salary and work hard, your income and prospects have gotten steadily worse in the last three decades: the investment economy isn’t just a nice hedge against the worst now, it’s the only way to stay in the middle class.

This is what elite guilds in games would do if they could: require you to play the game.

Too bad if you don’t like spending two days training a velociraptor and building a shelter in Ark only to find that when you logged off for dinner, a couple of elite hackers took your dinosaur, destroyed your shelter and locked your naked body in a cage. (You think I’m kidding, but I’m not.)


We Are Not Who We Will Become

One of the things about the reaction to Alison Bechdel’s Fun Home by a small subset of incoming Duke undergraduates that is important to grasp is that I think it’s a deliberate–and possibly even coordinated–re-deployment of activism about the content of college education that’s previously come from a “left” direction, right down to the way that the students articulate how reading Fun Home would harm their identities and how they ought to have the right to choose a college education that would never compel them to experience either content or instruction that contradicts the identities that they have chosen for themselves.

There is much more embedded inside of that set of moves than just distaste for a single book or the expression of a single ideology about sexual identity, and it is a good example of why many of us worry about political tactics even when we are sympathetic to the particular concerns, feelings or aspirations of people employing those tactics. Because tactics are mobile: they’re not copyrighted or trademarked.

But it’s not just tactics that are at issue. It’s also philosophical substance. The Christian students at Duke and left or radical students elsewhere are sometimes proposing something basically similar about themselves, and about the relationship between their sense of self and liberal arts education. They’re proposing that identity is a product of agency (whether through struggle or chosen freely) and that the content of a liberal arts education may destabilize, challenge or unsettle that choice.

I think they’re complicatedly wrong about the former assertion: not only are we not necessarily a product of our own conscious self-making, I’m not even sure that we should hold that out as an aspiration for ourselves. Some aspect of our becoming should be a mystery (and will be whether it should be). They’re not wrong about the latter: the content of a good education may in fact destabilize, challenge or unsettle what we are in ways that neither faculty nor students can anticipate. I wouldn’t even care to guarantee that in the short term this shifting or unsettling will have positive outcomes for individuals or communities. But I would still say that it ought to be done.

What unites this particular set of complaints against liberal arts education is a kind of resurgent functionalism, a belief that specific content creates specific outcomes. That classical literature creates Western domination, that Fun Home creates sexual desire and lesbianism. That “problematic” texts create predictable problematic outcomes, that knowledge has a relationship to power over people and power within people that can be known in advance of acquiring that knowledge.

The Duke Christian students may even be right in some sense, if in ignorance of what is actually inside of Fun Home. It is not that there is one panel of oral sex that they should fear, but the fact that lesbians (and a closeted gay man) are present as intimately knowable, familiar human beings. That is a danger if you require them to be unfamiliar and inhuman to sustain your own sense of self. But that might be equally the real fear of some students and activists on the left: that texts that they believe to be doing nothing but the work of oppression nevertheless contain multitudes, just as oppressors do. That to pursue liberal arts education is to live a life without guarantees, to love, or at least make peace with, our own uncertainty.


Joke’s On You

Here’s my contribution to the DONALD TRUMP HOW IS THIS POSSIBLE sweepstakes:

Donald Trump is polling well for the same reason Bernie Sanders is polling well.

Sort of.

They’re not at all the same in the sociology of their attraction, nor in the content of their discourse about politics and within politics. Trump’s base and Sanders’ base have no overlap at all. The specifics of what they’re saying and how they’re acting are a product of the particular subcultures of their parties and their constituencies. It’s perfectly correct to say that Sanders’ enthusiasts are mostly progressives fed up with the Democratic Party in general, that Trump’s reception has been fueled by ceaseless moves to a right-wing fringe, and that in both cases there is a history of political sentiment and action within each party which explains what’s going on.

The thing that makes them similar, however, is that they are also the latest spiralling out of a general disaffection with the formal political systems of liberal democracy. It is not limited to the United States, for all that commenters abroad are adopting a superior air in their commentary on the buffoonery of Trump. Jeremy Corbyn might be the Labour Party leader soon for similar reasons. Silvio Berlusconi’s longevity in Italian politics despite Trump-ish behavior has something to do with the same restiveness.

People who are fundamentally inside the world of the political classes–long-time civil servants, policy-making experts, mainstream pundits, elected officials, educated elites generally–are having a hard time fully grasping the big-picture story here. We read each election cycle on its own terms, prompted by horserace journalism.

But not only are publics in most liberal democracies dismayed by the incapacity of their elected officials to do much with the sprawling, recumbent states that they theoretically command, not only are they restive about the downward spiral of their economic and social lives and the predation of the global plutocracy, they’re also tired of the screaming inauthenticity of the entire wretched system. That’s what the low approval ratings mean, first and foremost.

The old saw that insanity is doing the same thing over and over again and expecting different results applies primarily to something that’s already demonstrably failed. Folly, in contrast, is doing the same thing over and over again and ignoring every sign of its imminent failure because it worked the last time. We drove across the bridge once again this morning, so who cares if it trembled and groaned? The power plant didn’t blow up today, even though all the red lights on the console are blinking, so fire it up tomorrow just like always.

The campaign consultants keep saying, “The old forms of message discipline and voter mobilization will work eventually, just ignore the sideshow.” The pundits keep laughing or crying or getting angry with Trump (and a few with Sanders) for taking time away from serious candidates and serious issues. What I think none of them get is that the bridge is trembling and groaning. What those polled in Iowa are saying about Trump and Sanders is less about affection for the specifics of their platforms, just as the people who might vote for Corbyn are probably in some cases not all that interested in the specifics of his political views. What they recognize in all of them is that they’re real people. That what you hear from them if you go to a speech is who they really are, what they really think, how they really feel. They’re not what their handlers have told them to be, they’re not the product of some laboratory.

Trump may be an insane, clownish vulgarian with horrific and brutal views on most issues, but he is at least really an insane vulgarian. With at least most of the rest of the Republicans, it’s never very clear what they actually are. Do they really hate science or education? Really want to drown government in a bathtub? Really believe ten-year-olds should be compelled to carry a rape pregnancy to term? Who knows? They’re all just doing what they think the primary electorate will respond to. They’re awkwardly slouching out onto a vaudeville stage and asking desperately of the bored and disaffected audience, “What is it that you want to see? Do you want juggling? Burlesque? Stand-up? A guitar solo? I can try to do that.” Trump is just walking out and being himself at a party. Like him or hate him, you recognize at least that he is what he really is. Sanders, Corbyn, and so on as well.

What most people are not seeing when we look at our leaders is people. As fewer and fewer of us are part of the elite, as downward mobility latches on to the majority of the liberal democratic publics across the world, fewer people are inside the systems that produce and maintain political elites. What we see is more like what Roddy Piper’s character in They Live saw: manipulative aliens.

This is not to say that real, unperformative humanity should give anybody hope. The system will eventually find a way to knock such people out of the running. Or people will decide sooner or later what anyone hosting a party with Donald Trump attending would eventually decide: that he’s an asshole who needs to be booted to the curb before you lose all your friends. If by some insurgent chance someone like Sanders not only got the nomination but won, he’d find that the system as a whole is unbeatable no matter how genuine his convictions might be.

At least as long as it is a system. Because that’s what the groans and trembles in the bridge really mean. Trump is less in that sense a comment on the specific madness of the current Republican Party and more a set of rivets explosively popping out in the bridge supports. Anybody who wants to keep crossing the river had better start thinking about building a better bridge.


In Medias Res

Ta-Nehisi Coates tweets (approvingly, I think) that historians are “not the most hopeful bunch”.

I’ve said as much myself. Among the many problems with David Armitage and Jo Guldi’s The History Manifesto is the authors’ belief that historians once had a seat at the table of power and then lost it (in their view because we started being more like humanists and less like social scientists). Historians have played a crucial role in the making of nations and national identity since the end of the 19th Century, but we’ve never been especially welcome in smoke-filled rooms and think-tank boardrooms where policy wonks have plied their craft.

There are lots of reasons why it’s hard for historians to join those conversations in a way that doesn’t complicate or derail the assembly line. Our sense of the relationship between the passage of time and social or political action is slower, longer, more intricate. It’s hard to say with a straight face that if only you make this regulation or announce this initiative that something’s going to change right away. We know how rare it is for intention to match outcome. We’ve seen it all before. We know that when things change for the better, it’s often due most to people who are also not at the table of people earnestly proposing and implementing solutions. And so on.

Which might suggest that if you have students who want to change the world, directing them to the study of history is just going to be an endless parade of deflation and disappointment. Like almost all historians I know, I think that’s not true. There’s the obvious, frequently made point that while history may not provide a ready-made solution, it does provide a much richer, more complicated understanding of where we are and how we got to this point. Trying to act without a historical understanding is like trying to be a doctor who never does diagnosis. Maybe every once in a while you’ve got a patient in the emergency room where you don’t need to know what happened because it’s obvious, and all you need to do is act–staunch the bleeding, bandage the wound, amputate, restart the heart. Usually though you really need to know how it happened, and what it is that happened, if you want to do anything at all to help.

I’m going to suggest there’s another reason to study history if you want to do something to change the world, and it’s something that applies especially to the rising generation of activists. The specific content of historical study offers a diagnosis of the present, and it also often offers a sense of the alternate possibilities, the turns and contingencies that could shape the future. But cultivating a historical sensibility is also an important warning that any time you act, you’re joining a story that’s already in progress.

This is a warning that falls from the lips of older people with distressing ease, because even if we don’t study history, we’ve lived it. We know just from experience what’s come immediately before the present. That knowledge sometimes blinds us, both to the ways the present might be genuinely new and to the degree to which the third (or more) time is the charm: even if events unfold once again as they have in the past, that repetition is sometimes enough to carry weight of its own.

So be wary about the injunction to think about precedent, but still think about it, and in particular think about it if you want to fight to make a change in the world. Because it’s crucial to know whether other people have fought for that change before, and especially to know whether they’re fighting even now. And it’s equally crucial not to take the absence of apparent victory as a sign of their failure or insufficiency, as a justification for the next generation to just grab the steering wheel.

I’ve talked before at this blog about reading grant applications, for example, from recent undergraduates hoping to pursue a project in another country. Again and again, I’ve seen many of these applicants, especially those seeking to go to African countries, act as if they are the first person to ever think of tackling a given problem or issue in that country. As if there’s no one there who has ever done it, and as if there’s no one here who has ever gone there to do it. You could write this off as simply ugly Americanism, but it’s only a more specific example of a generally weak devotion to thinking historically, to putting one’s own story, one’s own aspirations, into motion.

In almost every cause or struggle, in almost every community and institution, there are people who have been trying to do what you think should be done. They’ve almost certainly learned some important things in the process, and very likely have more at stake in those struggles than you do if you’re a newcomer, a traveller, a visitor. Thinking historically is the key to remembering to look for those predecessors before you start, and it’s a key to remembering to take them seriously rather than just look them up as a kind of pro forma courtesy before you get back to doing your own thing.

Almost nothing genuinely begins with your own life. Rupture and newness are a very small (if important) part of human experience. Yes, being mindful that you’re just the latest chapter in an ongoing story is humbling and a bit inhibiting, and another reason for historians to not be “the most hopeful bunch”. But it is better to live in conscious humility than blithe confidence, at least if you genuinely think that progress is possible. There is no need to steal Sisyphus’ boulder just so you can start fresh from the bottom of the hill.


Performing the Role

The short summary of the way that UIUC’s administrative and board leadership (and some of their closest faculty supporters) handled their reaction to Steven Salaita is that they screwed up and that serious professional consequences are completely appropriate.

And not just that they screwed up in “handling the fallout”, as if this is a question merely of public relations tactics. They screwed up substantively, philosophically, in terms of fundamentals. The archive of emails now available for critical examination documents that error and how pervasive and systematic it was. Chris Kennedy’s interventions in particular are almost textbook examples of what academic freedom as an ideal is meant to prevent: a prejudicial, ideologically-derived attempt to target particular individual scholars using ad hoc standards that are not (and should not be) imposed on the rest of the faculty.

Until Steven Salaita himself says that he’s satisfied with whatever settlement UIUC offers, whether that is rehiring him or some other compensation, I would urge other academics to continue refusing to do service for UIUC as an institution. I know that imposes a burden on the many great faculty at UIUC by isolating them but I think it’s important to keep the pressure on. UIUC has more work to do in any event than settling with Salaita. And it’s not just UIUC that has these problems.

I do have two modest reservations about some of the responses to the email releases by academic critics. The first is that I don’t know that we should exult overly much about the release of the emails. UIUC’s leadership is ultimately responsible for creating the circumstances in which the release had to be sought through legal means, and thus is ultimately to blame for whatever larger consequences this might have. But the use of legal mechanisms to probe into the professional communications of faculty and staff at public universities has already been abused for political ends in the last decade and I fear this is only going to recommend that tactic further. We shouldn’t be too blithe about telling colleagues at public universities that they’ll just have to meet in person more, use the phone more, stick to their personal accounts more, and so on. That creates yet another kind of large-scale structural inequity for public institutions in a landscape increasingly full of such inequities. The acceleration of many work processes through electronic communication is a mixed blessing, but I personally have no longing at all for laboriously printing out recommendation letters, grant applications, dossiers, and many other kinds of professional labors that I handle at least partly through email. I also find it very valuable to get quick takes on institutional questions from colleagues via email and yes, sometimes to exchange cathartic observations about the week’s business with trusted colleagues.

The second reservation is more complicated, and has to do with the hostile commentary being directed at Phyllis Wise’s faculty confidants and to some extent Wise herself. I’m struggling to figure out how to express this feeling, because there’s a lot of inchoate things bundled inside of it. The place to start might be this: I think some of my colleagues across the country are potentially contributing to the creation of the distanced, professionalized, managerial administrations that they say that they despise, and they’re doing it in part through half-voiced expectations about what an ideal administrator might be like.

Occasionally folks in my social media feeds articulate a belief in faculty governance that has a sort of unexamined wash of nostalgia in it. That we had it all in the good old days and lost it, either to some kind of ‘stab in the back’ or through our own inattention or mistakes. (‘Stab in the back’ narratives generally worry me no matter what the circumstances, because they usually inform a politics that’s one part ressentiment and one part scapegoating.) Sometimes the same folks believe that if only faculty were in charge of everything (whether that’s “once again” or “for the first time”) the university would be working again as it ought to.

Now when I push back a bit on that sentiment, it’s usually not hard to get the same critics to concede that there are a host of specialized professional jobs that have to be done in contemporary universities which can’t be done just by any old Ph.D.-holding person who walks in the door. So the conversation refocuses. Who’s the problem, in this view? Basically the upper leadership hierarchy, especially at large corporatized universities that have added numerous vice-presidential positions to their administrations in the last decade. These are the administrators that faculty critics believe are either managing portfolios that no one needs managed or exercising forms of leadership that faculty are capable of providing on their own through their traditional structures of governance.

I agree completely that many institutions, especially large universities, have created administrative positions that are redundant or unnecessary. I’m not sure I agree with the idea that administrative leadership per se is largely unnecessary, nor do I think even many critical faculty really believe that–and it shows in some of the contradictory edges around the critical response to the Salaita affair.

First, you don’t have to go very far into the discussions and debates on social media about UIUC to find that faculty who believe in the sufficiency of faculty leadership don’t actually trust many other faculty to participate in governance or leadership. Most notably, there’s an undercurrent of debate about why many STEM faculty at UIUC either endorsed the administrative leadership or were indifferent to the issue–and one common explanation is that STEM faculty are already in thrall to the corporatist university or have actively connived in its making. Which means suddenly that the putatively capable-of-self-governance faculty have been pared down to “just the humanists and social scientists, and maybe not even all of the folks in the latter group”. Which is sort of like saying that you believe in democracy as long as it’s just the people who share your politics who get to vote. Additionally, there’s a lot of contempt directed at the faculty who were exchanging emails with Wise, who are seen as collusive. But any self-governing faculty is going to have people whose genuinely held views of institutional policy are going to resemble the positions now commonly taken by administrative leaders. If Nicolas Burbules had no vice-chancellor to seek favor from, it’s possible that he (or someone like him) would still think as he does and drive deliberation in that direction. Certainly there will be Cary Nelsons on every faculty, aggressively expressing their views in every forum and meeting and doing in governance what Internet trolls often do in online discussions, which is driving the conversation towards more extreme or narcissistic terms.

Ultimately I think that the people who believe we can do it all on our own know that sooner or later we would all be desperate to delegate some of the responsibility for institutional leadership to appointed individuals, to not have to sit in shared deliberative session and endure an endless plague of Nelsons trying to cat-herd us towards whatever precipice they favor. In a sense, I think every faculty member who has held any sort of administrative responsibility is familiar with exactly how this works: colleagues who believe they should have a say in everything also want someone else to handle all the tedium of acting on all the contradictory imperatives that emerge out of deliberative process.

Moreover, most of us turn out to want at least some of the sausage-making involved in the life of an academic institution to happen with some kind of confidentiality. Even those who make the most radical demands for transparency (and I’m usually one of them) balk at doing everything out in the open. Tenure cases are only one part of a larger landscape of necessary judgment and assessment of the professionalism and practice of other professionals in a university. That’s what believing in self-governance means! Professionals often assert that only they can judge other professionals, that this is a prerogative of their training. Ok, but if that means, “And by the way, everybody who has the necessary minimal qualifications to be a professional is definitionally ok in our eyes for life, and everything we’re presently doing is exactly what we should go on doing forever”, then that’s doing it wrong. Even if we banished the spectre of neoliberal austerity, we’d still need to ask, “Are we doing what we should be doing? Are there things we should stop doing?” We’d still need to think about whether there are changes worth pursuing–say, the academic equivalent of Atul Gawande’s “checklist” reform in hospitals. At least the initial stage of many of those conversations is not something I want to be broadcasting to the largest possible audience in the most indiscriminate way. That too is something that I think we turn to “administration” or something like it to accomplish.

I think here is also where Wise’s critics occasionally end up with some strangely unreal implicit expectations of administrative decorum, a vision of leadership performativity that implicitly envisions administrators as more distant, more isolated, less human than the rest of us. For one, I almost feel as if people are expecting Wise to have had discretionary agency where I’m not sure she did or could–where I don’t know that any of us, faculty or administration, do. I think it’s reasonable to have expected Wise to tell Kennedy, for example, that his desired intervention into the Salaita case was unwise and unwelcome and that she would not do it. I don’t think it’s reasonable to expect, as I feel I’ve seen people expect, that she should have excoriated him or confronted him. I think we somehow expect that administrative leaders should be unfailingly polite, deferential, patient, and solicitous when we’re the ones talking with them and bold, confrontational, and aggressive when they’re talking to anyone else. We seem to expect administrative leaders to escape structural traps that we cannot imagine a way to escape from. There’s a lot of Catch-22 going on here.

We as faculty all have confidants, people we can talk to who help us work through our choices and our feelings. I would guess that most of us turn to people who are going to make us feel better, support us, reassure us. Ideally we should also have friends or trusted colleagues who will be honest with us, who will tell us when we’re making mistakes, but there are days when I suspect even the most iron-willed and psychologically robust person is not looking for that.

And that’s just when we’re rank-and-file people. Imagine anyone in the role that Wise plays, anyone at all. Pick someone with your exact convictions. Pick yourself. Are we really expecting that the person in that role ought to listen judiciously, patiently and indiscriminately to every single person on their faculty with perfect equity and equanimity? We seem to desire leaders who are able to say bluntly what we ourselves cannot or would not say and to mobilize institutional power with executive force in ways that we cannot, and also desire leaders whose job it is to serve as a kind of infinitely passive psychic dumping ground, to receive every grievance and grudge within the institution without blinking. To decide what we know we can’t decide and to have never decided any such thing and to disavow any intent to make such decisions. To me that’s another kind of managerialism: the administrator as something other than fully human, needing to perform a professionalism that removes rather than connects them.


Yes, We Have “No Irish Need Apply”

Just came across news of the publication of Rebecca Fried’s excellent article “No Irish Need Deny: Evidence for the Historicity of NINA Restrictions in Advertisements and Signs”, Journal of Social History (2015), via @seth_denbo on Twitter.

First, the background to this article. Fried’s essay is a refutation of a 2002 article by the historian Richard Jensen that claimed that “No Irish Need Apply” signs were rare to nonexistent in 19th Century America, that Irish-American collective memory of such signs (and the employment discrimination they documented) was largely an invented tradition tied to more recent ideological and intersubjective needs, and that the Know-Nothings were not really nativists who advocated employment (and other) discrimination against Irish (or other) immigrants.

Fried is a high school student at Sidwell Friends. And her essay is just as comprehensive a refutation of Jensen’s original as you could ever hope to see. History may be subject to a much wider range of interpretation than physics, but sometimes claims about the past can be just as subject to indisputable falsification.

So my thoughts on Fried’s article.

1) Dear Rebecca Fried: PLEASE APPLY TO SWARTHMORE.

2) This does really raise questions, yet again, about peer review. 2002 and 2015 are different kinds of research environments, I concede. Checking Jensen’s arguments then would have required much more work of a peer reviewer than it would more recently, but I feel as if someone should have been able to buck the contrarian force of Jensen’s essay and poke around a bit to see if the starkness of his arguments held up against the evidence.

3) Whether as a peer reviewer or scholar in the field, I think two conceptual red flags in Jensen’s essay would have made me wary on first encounter. The first is the relative instrumentalism of his reading of popular memory, subjectivity and identity politics. I feel as if most of the discipline has long since moved past relatively crude cries of “invented tradition” as a rebuke to more contemporary politics or expressions of identity to an assumption that if communities “remember” something about themselves, those beliefs are not arbitrary or based on nothing more than the exigencies of the recent past.

4) The second red flag, and the one that Fried targets very precisely and with great presence of mind in her exchanges with Jensen, is his understanding of what constitutes evidence of presence and the intensity of his claims about commonality. In the Long Island Wins column linked to above, Jensen is quoted as defending himself against Fried by moving the goalposts a bit from “there is no evidence of ‘No Irish Need Apply'” to “The signs were more rare than later Irish-Americans believed they were”. The second claim is the more typical sort of qualified scholarly interpretation that most academic historians offer–easy to modify on further evidence, and even possible to concede in the face of further research. But when you stake yourself on “there was nothing or almost nothing of this kind”, that’s a claim that is only going to hold up if you’ve looked at almost everything.

I often tell students who are preparing grant proposals to never ever claim that there is “no scholarship” on a particular subject, or that there are “no attempts” to address a particular policy issue in a particular community or country. They’re almost certainly wrong when they claim it, and at this point in time, it takes only a casual attempt by an evaluator to prove that they’re wrong.

But it’s not just that Jensen is making what amounts to an extraordinary claim of absence; it is also that his understanding of what presence would mean or not mean, and the crudity of his attempt to quantify presence, are at issue. There may be many sentiments in circulation in a given cultural moment that leave few formal textual or material signs for historians to find later on. Perhaps I’m more sensitive to this methodological point because my primary field is modern Africa, where the relative absence of how Africans thought, felt and practiced from colonial archives is so much of a given that everyone in that field knows to not overread what is in the archive and not overread what is not in the archive. But I can only excuse Jensen so far on this point, given how many Americanists are subtle and sensitive in their readings of archives. Meaning, that even if Jensen had been right that “No Irish Need Apply” signs (in ads, on doors, or wherever) were very rare, a later collective memory that they were common might simply have been a transposition of things commonly said or even done into something more compressed and concrete. Histories of racism and discrimination are often histories of “things not seen”.

But of course as Fried demonstrates comprehensively, that’s not the case here: the signage and the sentiment were in fact common at a particular moment in American history. Jensen’s rear-guard defense that an Irish immigrant male might only see such a sentiment once or twice a year isn’t just wrong, it really raises questions about his understanding of what an argument about “commonality” in any field of history should entail. As Fried beautifully says in her response, “The surprise is that there are so many surviving examples of ephemeral postings rather than so few”. She understands what he doesn’t: that what you find in an archive, any archive, is only a subset of what was once seen and read and said, a sample. A comparison might be to how you do population surveys of organisms in a particular area. You sample from smaller areas and multiply up. If even a small number of ads with “No Irish Need Apply” were in newspapers in a particular decade, the normal assumption for a historian would be that the sentiment was found in many other contexts, some of which leave no archival trace. To argue otherwise–that the sentiment was unique to particular newspapers in highly particular contexts–is also an extraordinary argument requiring very careful attention to the history of print culture, to the history of popular expression, to the history of cultural circulation, and so on.

Short version: commonality arguments are hard and need to be approached with care. They’re much harder when they’re made as arguments about rarity or absence.

5) I think this whole exchange is on one hand tremendously encouraging as a case of how historical scholarship really can have a progressive tendency, to get closer to the truth over time–and it’s encouraging that our structures of participation in scholarship remain porous enough that a confident and intelligent 9th grader can participate in the achievement of that progress as an equal.

On the other hand, it shows why we all have to think really carefully about professional standards if we want to maintain any status at all for scholarly expertise in a crowdsourced world. I’ve said before that contemporary scholars sometimes pine for the world before the Internet because they could feel safe that any mistakes they made in their scholarship would have limited impact. If your work was only read by the fifty or so specialists in your own field, and over a period of twenty or thirty years was slowly modified, altered or overturned, that was a stately and respectable sort of process and it limited the harm (if also the benefit) of any bolder or more striking claims you might make. But Jensen’s 2002 article has been cited and used heavily by online sources, most persistently in debates at Snopes.com, but also at sites like History Myths Debunked.

For all the negativity directed at academia in contemporary public debate, some surveys still show that the public at large trusts and admires professors. That’s an important asset in our lives and we have serious collective interest in preserving it. This is the flip side of academic freedom: it really does require some kind of responsibility, much as that requirement has been subject to abuse by unscrupulous administrations in the last two years or so. We do need to think about how our work circulates and how it invites use, and we do need to be consistently better than “the crowd” when we are making strong claims based on research that we supposedly used our professional craft to pursue. It’s good that our craft is sufficiently transparent and transferrable that an exceptional and intelligent young person can use it better than a professional of long standing. That happens in science, in mathematics, and other disciplines. It’s maybe not so good that for more than ten years, Jensen’s original claims were cited confidently as the last word of an authenticated expert by people who relied on that expertise.


All Grasshoppers, No Ants

It would be convenient to think that Gawker Media‘s flaming car-wreck failure at the end of last week was the kind of mistake of individual judgment that can be fixed by a few resignations, a few pledges to do better, a few new rules or procedures.

Or to think that the problem is just Gawker, its history and culture as an online publication. There’s something to that: Gawker writers and editors have often cultivated a particularly noxious mix of preening self-righteousness, inconsistent to nonexistent quality control, a lack of interest in independent research and verification, motiveless cruelty and gutless double-standards in the face of criticism. All of which were on display over the weekend in the tweets of Gawker writers, in the appallingly tone-deaf decision by the writing staff to make their only statement a defense of their union rights against a decision by senior managers to pull the offending article, and in the decision to bury thousands of critical comments by readers and feature a minuscule number of friendly or neutral comments.

Gawker’s writers and editors, and for that matter all of Gawker Media, are only an extreme example of a general problem that is simultaneously particular to social media and widespread through the zeitgeist of our contemporary moment. It’s a problem that appears in protests, in tweets and blogs, in political campaigns right and left, in performances and press conferences, in corporate start-ups and tiny non-profits.

All of that, all of our new world with such people in it, crackles with so much beautiful energy and invention, with the glitter of things once thought impossible and things we never knew could be. Every day makes us witness to some new truth about how life is lived by people all around the world–intimate, delicate truths full of heartbreaking wonder; terrible, blasphemous truths about evils known and unsuspected; furious truths about our failures and blindness. More voices, more possibilities, more genres and forms and styles. Even at Gawker! They’ve often published interesting writing, helped to circulate and empower passionate calls to action, and intelligently curated our viral attention.

So what is the problem? I’m tempted to call it nihilism, but that’s too self-conscious and too philosophically coherent a label. I’m tempted to call it anarchism, but then I might rather approve than criticize. I might call it rugged individualism, or quote Aleister Crowley about the whole of the law being do as thou wilt. And again I might rather approve than criticize.

It’s not any of that, because across the whole kaleidoscopic expanse of this tumbling moment in time, there’s not enough of any of that. I wish we had more free spirits and gonzo originals calling it like they see it, I wish we had more raging people who just want the whole corrupt mess to fall down, I wish we had more people who just want to tend their own gardens as they will and leave the rest to people who care.

What we have instead–Gawker will do as a particularly stomach-churning example, but there are so many more–is a great many people who in various contexts know how to bid for our collective attention and even how to hold it for the moments where it turns their way, but not what to do with it. Not even to want to do anything with it. What we have is an inability to build and make, or to defend what we’ve already built and made.

What we have is a reflexive attachment to arguing always from the margins, as if a proclamation of marginality is an argument, and as if that argument entitles its author to as much attention as they can claim but never to any responsibility for doing anything with that attention.

What we have is contempt for anybody trying to keep institutions running, anybody trying to defend what’s already been achieved or to maintain a steady course towards the farther horizons of a long-term future. What we have is a notion that anyone responsible for any institution or group is “powerful” and therefore always contemptible. Hence not wanting to build things or be responsible. Everyone wants to grab the steering wheel for a moment or two but no one wants to drive anywhere or look at a map, just to make vroom-vroom noises and honk the horn.

Everyone’s sure that speech acts and cultural work have power but no one wants to use power in a sustained way to create and make, because to have power persistently, in even a small measure, is to surrender the ability to shine a virtuous light on one’s own perfected exclusion from power.

Gawker writers want to hold other writers and speakers accountable for bad writing and unethical conduct. They want to scorn Reddit for its inability to hold its community to higher standards. But they don’t want to build a system for good writing, they don’t want to articulate a code of ethical conduct, they don’t want to invest their own time and care to cultivate a better community. They don’t want to be institutions. They want to sit inside a kind of panopticon that has crudely painted over its entrance, “Marginality Clubhouse”, a place from which they can always hold others accountable and never be seen themselves. Gawker writers want to always be “punching up”, mostly so they don’t have to admit what they really want is simply to punch. To hurt someone is a great way to get attention. If there’s no bleeder to lead, then make someone bleed.

It’s not just them. Did you get caught doing something wrong in the last five years? What do you do? You get up and do what Gawker Media writer Natasha Vargas-Cooper has done several times, doing it once again this weekend in a tweet: whomever you wronged deserved it anyway, you’re sorry if someone else is flawed enough to take offense, and by the way, you’re a victim or marginalized and not someone speaking from an institution or defending a profession. Tea Party members and GamerGate posters do the same thing: both of their discursive cultures are full of proclamations of marginality and persecution. The buck stops somewhere else. You don’t make or build, you don’t have hard responsibilities of your own.

You think people who do make and build and defend what’s made and built are good for one thing: bleeding when you hit them and getting you attention when you do it. They’re easy to hit because they have to stand still at the site of their making.

This could be simply a complaint about individuals failing to accept responsibility for power–even with small power comes small responsibility. But it’s more than that. In many cases, this relentless repositioning to virtuous marginality for the sake of rhetorical and argumentative advantage creates a dangerous kind of consciousness or self-perception that puts every political and social victory, small and large, at risk. In the wake of the Supreme Court’s marriage decision, a lot of the progressive conversation I saw across social media held a celebratory or thankful tone for only a short time. Then in some cases it moved on productively to the next work that needs doing with that same kind of legal and political power, to more building. But in other cases, it reset to marginality, to looking for the next outrage to spark a ten-minute Twitter frenzy about an injustice, always trying to find a way back to a virtuous outside untainted by power or responsibility, always without any specific share in or responsibility for what’s wrong in the world. If that’s acknowledged, it’s not in terms of specific things or actions that could be done right or wrong, better or worse, just in generalized and abstract invocations of “privilege” or “complicity”, of the ubiquity of sin in an always-fallen world.

On some things, we are now the center, and we have to defend what’s good in the world we have knowing that we are there in the middle of things, in that position and no other. To assume responsibility for what we value and what we do and to ensure that the benefits of what we make are shared. To invite as many under our roof as can fit and then invite some more after that. To build better and build more.

What is happening across the whole span of our zeitgeist is that we’ve lost the ability to make anything like a foundational argument that binds its maker as surely as it does others. And yet many of us want to retain the firm footing that foundations give in order to claim moral and political authority.

This is why I say nihilism would be better: at least the nihilist has jumped off into empty space to see what can be found when you no longer want to keep the ground beneath your feet. At least the anarchist is sure nothing of worth can be built on the foundations we have. At least the free spirit is dancing lightly across the floor.

So Gawker wants everyone else to have ethics, but couldn’t describe for a moment what its own ethical obligations are and why they should be so. Gawker hates the lack of compassion shown by others, but not because it has anything like a consistent view about why cruelty is wrong. Gawker thinks stories should be accurate, unless they have to do the heavy lifting to make them so.

In this pattern of desires they are typical, and it’s not a simple matter of hypocrisy. It is more a case of the relentless a la carte-ification of our lives, that we speak and demand and act based on felt commitments and beliefs that have the half-life of an element created in a particle accelerator, blooming into full life and falling apart seconds later.

To stand still for longer is to assume responsibility for power (small or large), to risk that someone will ask you to help defend the castle or raise the barn. That you might have to live and work slowly for a goal that may be for the benefit of others in the future, or for some thing that is bigger than any human being to flourish. To be bound to some ethic or code, to sometimes stand against your own desires or preferences.

Sometimes to not punch but instead to hold still while someone punches you, knowing that you’re surrounded by people who will buoy you up and heal your wounds and stand with you to hold the line, because you were there for them yesterday and you will be there with them tomorrow.


The Production of Stigma

Since Swarthmore seems likely to be stuck debating or struggling over divestment for at least another year or more, I remain interested in trying to push at the central weakness of the pro-divestment argument.

The major argument of many divestment advocates is that divestment by higher education and other large civic organizations will cumulatively stigmatize fossil fuel producers within public culture. More than a few divestment advocates find it hard to stay “on message” with this idea, and often invoke instead tropes of purity or imply that divestment will produce direct economic pressure on fossil fuel companies by devaluing their shares, but when pressed, the movement generally underscores the stigma concept as their key strategic insight.

I’ve complained before that I think this entire argument is a distraction from other kinds of tactics that might produce more meaningful political and social pressure on fossil fuel producers as well as produce a direct impact on climate change itself. The response of many advocates is that institutions can both divest and pursue other kinds of tactics and work to reduce their own consumption of fossil fuels. For that to be true, divestment advocates would have to stop being scornful or uninterested when other tactics or strategies are being formulated. But let me stop with my own distractedness right here and home in on one major question: are there good historical examples of the production of stigma from direct political or social action which in turn forced the stigmatized institutions or actors to behave differently, or led to general changes in public outlook that marginalized or disempowered the stigmatized? If so, how closely do those examples resemble the current divestment movement?

Answering that takes asking, as prologue: what do we mean by stigma? I suppose you could take stigma as accomplished if people, actions, things or institutions are treated as moral and social pariahs. There needs to be a general social consensus that it is acceptable to mock, despise or shun the target of stigma. Stigma casts its targets out of the social order, and thus also requires ideologies of respectability. Stigma is categorical and even stereotypical: it relieves us of the burden of having to argue case-by-case about why something or someone is wrong. We bundle their wrongness into our common sense. As this definition probably underscores, stigma is a dangerous tool generally, and has far more often been an instrument of oppression or domination than the other way around. That doesn’t necessarily mean that it has no purpose or legitimacy as a goal: stigmatizing racism or fascism, for example, not only seems useful but follows on generations of struggle that should serve as sufficient justification for pushing towards that objective.

—————–

1) Consumer boycotts such as the Nestle boycott, the boycotting of South African wine, or the boycotting of Israeli hummus. These campaigns, I think, by and large serve as good examples of successful direct action. It is possible to change how a proportion of the consuming public perceives a particular product through media campaigns of some kind or another. Some of those campaigns have been methodical and sustained, some of them have been the result of clever or viral strategies. Do they share anything more in common? I think so. First, most of them have involved products that are relatively easy to give up, generally single brands or types of a general commodity. Asking people to stop consuming chocolate, wine or hummus generally would have been a much harder sell. Second, most of these campaigns have involved petitionary addresses to the producer asking for a change in the producer’s behavior. That’s a bit more ambitious when it’s aimed at a state or a regime than when it’s aimed at the selling of infant formula, but in all cases, it is at least imaginable that the producer could try to respond positively to the boycott. Third, the stigma in these cases was mostly limited to particular social groups or classes. When the intended stigma applied to a product that the most responsive social group didn’t consume, the campaign was not very successful. High-income liberals already didn’t drink very much Coors, for example. Fourth, successful cases of stigma creation were actually hard to undo or manage. The Nestle boycott has been cancelled and renewed multiple times and at this point, I think, is quite beyond the ability of organizers to actively manipulate or change. I brought a South African wine to a party five years ago and the host frowned in concern because they couldn’t quite remember why they weren’t supposed to drink it, just that they weren’t.

2) Tobacco. Tobacco has gone from being culturally omnipresent and generally legitimized to being conventionally loathed, tobacco producers have become commonly viewed as synonymous with dishonesty and the destructive pursuit of profit, and smokers have become marginalized, pitied and/or despised. This is probably the closest match to what the fossil fuel divestment movement might have in mind. The stigmatizing of tobacco has moved the tobacco industry from having strong political influence across the nation to being relatively vulnerable politically except in a handful of states. Defending the tobacco industry is almost tantamount to being grossly self-interested.

Because it’s a good model, it’s worth reviewing how it was accomplished.

First, a broad spectrum of campaigns targeting consumption and consumers of tobacco were integral to creating stigma, and most signally, those campaigns worked across many different cultural domains and communities. Public health and medicalization were the earliest and most powerful weapons in the stigma-producing arsenal, but there were many others along the way. Anti-tobacco campaigns brought pressure on consumers through domesticity and family life (the prenatal impact of smoking, the effects of secondhand smoke on family members); through trying to remove romanticized or positive images of smoking in popular culture; through underscoring how smoking made the appearance and smell of smokers unattractive; and through emphasizing the pathos of addiction and early death from lung cancer.

Second, the anti-tobacco campaign did an effective job of exposing the manipulations and deceptiveness of the tobacco producers themselves, and that exposure itself contributed to stigmatizing tobacco by pushing the companies involved into ever more desperate, cynical and predatory behavior. Big Tobacco stigmatized itself, and this reveals another dimension to the politics of stigma. In a public struggle, behavior that violates common or widely shared moral sentiments (in this case, about truth, honesty, care for others) makes it much easier to create stigma, even if that behavior doesn’t directly relate to the focus of the campaign. E.g., the point was to stigmatize the consumption of tobacco, but if its producers were unsympathetic moral actors, so much the better. This also requires the stigma-producing movement to appear morally superior or preferable to its targets, however.

3) Racism. The point here I think would be that stigma alone can only accomplish so much, and that the more general the target, the less potent it is as a political tool.

It’s true that the civil rights movement and its immediate aftermath did a great deal to make the open expression of racist sentiment disreputable, a shift which still holds to a large extent within American public culture. But only in very limited and particular ways, e.g., political actors and elites who want to make use of racial sentiments or mobilize on a more or less racist basis largely use various codes and ‘dog-whistles’ to accomplish their goals and hide behind plausible deniability. It is almost a case of James Scott’s “weapons of the weak”, only transferred to one subset of the powerful.

What’s worth noting here are the specific requirements for stigmatizing a widespread cultural or social phenomenon that resides in the everyday practices and consciousness of a large proportion of the population. Even the limited and tentative degree of stigma attached to overt racist sentiment required a very overt, aggressive use of the politics of respectability, especially invoking ideas about class and social mobility. It required building a general moral consensus about the harms of racist sentiment as well as about formal structures of racial discrimination. Keeping some sense of stigma in the air has also taken incessant public shaming and regular cultural mobilization, even in the pre-digital culture of the 1970s-1990s.

4) Same-sex marriage, abortion, premarital sex, divorce, unmarried parenting, etc.

I cite these as examples of practices concerning sexuality, marriage, family, gender, etc. where “stigma” has been highly mobile over time and across social groups, moving in and out of general consensus, and also where “stigma” has been intensely felt and applied to real human beings with very real consequences. In every case, the development or falling away of stigma was also affected by some kind of deliberate social or political action, though many activists involved with these issues have tried to portray shifting sentiments as a natural byproduct of progress (or as a sign of deep-seated devotion to tradition).

Note again that stigma here is not merely spontaneous and purely social but is largely potent and powerful in everyday life because the practice in question also involves either state sanction or prohibition. However, when stigma enters the picture in any of these cases, it does so through moral and emotional language and operates at the level of everyday social relations, not as a matter of dry debate over public policy.

A contrast here could be made to practices that have been in some sense “stigmatized” but did not involve substantial interaction with state authority as their cultural status shifted. Long hair on men, for example. There was still an enforcement mechanism in that case: a man who grew long hair prior to 1970 or so might have lost his job, might have been denied service in a place of business, or might have been verbally or physically assaulted in some social situations. What I think the contrast shows is that individual (or even institutional) behavior can shift from stigmatized to legitimate (or vice versa) more quickly if the state is not involved, and that the shift is more likely to be lasting. But these also tend to be less consequential or potent kinds of practices. Note that even in these instances, stigma and legitimacy operate through highly moralizing, visceral, emotional discourses.

5) Mental illness and alcoholism

Here are two examples of social issues where there has been an earnest attempt over many decades to destigmatize them via medicalization. Given that this effort has been at best only partially successful, what I take away from it is that once stigmatization takes hold, it’s very hard to undo. Shame and disgust are powerful social formations as well as individual psychological experiences. If they’re imposed on a phenomenon whose persistence derives from very deep-seated structural roots, they do not stop or prevent that phenomenon but instead largely aggravate the suffering of individuals and groups who are entangled with it. Stigma may help those who do not suffer from the issue feel more secure or positive about themselves, e.g., the sober and the sane feel more self-righteous, more moral, more ‘normal’ via the enforcement of the stigma.

This is especially true if the stigma extends to or demands criminalization. Sex work might be an example of this, given that it is both stigmatized and usually criminalized. Neither does much to prevent sex work itself, but together they make the lives of sex workers (and sometimes, but much more rarely, customers) more precarious.

———

To sum up, if a political struggle wants to use stigma as an instrument, it will need to accept the following as preconditions of success:

1. An embrace of “respectability” as an ideological formation which must make active use of some form of social division or cleavage, and an acceptance of moralizing rhetorics that accompany it. The problem here is that respectability is not an a la carte issue-driven coalition. For respectability to have real power, it has to mobilize across an entire social group, whether that’s class-based or otherwise. It has to operate as manners, as an unspoken everyday orientation towards life. It has to align and assemble assumptions about decency, fairness, righteousness, justice, goodness and attach them to places, people and practices in a somewhat consistent manner.

For campus divestment activists, the problem posed by the requirement to make use of respectability is twofold. First, it requires some degree of investment in the cultural capital of the civic institutions being enlisted in the cause. You can’t exalt the trustworthiness and legitimacy of science, universities, churches, and so on only when they’re endorsing divestment but otherwise scorn them as handmaidens of neoliberalism or as defenders of reactionary values. This is not just about being considerate to coalition partners: the point is that because the production of stigma requires operating within the register of respectability, to use it successfully a political struggle has to invest wholesale in the authority of respectable institutions. Second, divestment activists will have to pay more attention to large-scale forms of social consensus if they’re interested in using stigma as a weapon, meaning primarily that gestures that accentuate the radicalism or vanguardism of activists are self-defeating. Those moves only make sense in a politics that is attacking a settled consensus or that is seeking to mobilize a strongly radicalized class fraction, e.g., a politics that doesn’t care about being stigmatized rather than a politics hoping to confer stigma.

2. Moral language gains very little political traction when it is nakedly instrumental and temporary, for the most part. Yes, political leaders can get away with routinely violating the moral principles they otherwise attempt to enforce. David Vitter can be caught with his phone number in a prostitute’s contact list and still claim to be a defender of “traditional family values” on behalf of a highly conservative electorate. But even in these cases, the politician in question still has to agree that he ought to follow those values and perform as if he is sorry for failing to do so. You can’t deploy moralizing language and regard your own moral adherence to that language as a secondary or deferred priority.

To stigmatize successfully, you also have to at least pretend to represent the normative, respectable alternative. For divestment activists, this means that they have to stop treating challenges to their own consumption of fossil fuels as a purely malicious non sequitur. It may well be so, in the sense that such challenges are usually made as provocations from opponents who are unlikely in any case to be swayed by the divestment argument (or indeed, by any environmental activism). But that’s because those opponents sense this is an area of legitimate vulnerability in relation to the desired political objective. You cannot seek stigma without using moralizing language, and you cannot use moralizing language without at least performing (sincerely or otherwise) your own comparatively greater moral respectability.

What I think this means is that divestment activists will have to stop insisting that calls for attention to consumption ought to be deferred until after divestment is accomplished, or at best pursued simultaneously with it. In fact, I think they’re failing to understand that the moral authority that makes stigma take hold depends on a driving commitment to the control of fossil fuel consumption as a prior condition of the campaign’s success, and on that commitment being visible in the lives of individuals within the movement as well as in institutions.

3. Following on this, stigma isn’t usually abstract. All the examples I can think of apply to and are strongly felt in the lives of individuals. For fossil fuels, that means one of two things: either stigma will eventually have to apply to the individual lives of consumers or it will have to apply to the individual lives of producers. The former strategy has risks that have long been discussed within the environmental movement: you can campaign to make people feel guilty about Nestle chocolate or South African wine, but stigmatizing individuals over whether they use air conditioning or fuel oil is a different political proposition. Shunning producers as individuals has a lot of appeal, in contrast, in that it creates a set of identifiable villains against whom everyone else can feel righteous. The move to stigmatize the wealthiest 1% has been one of the few things to even slightly restrain the political and social power of current oligarchs. There’s a danger to that approach too, precisely because most people are very familiar with the suffering that shame creates in its targets. Done carelessly, such a campaign creates more, not less, sympathy for its targets. “Which side are you on?” might be an example of being careless: if you’re dishing out stigma, the larger the group of individuals you’re potentially targeting, the more difficult it gets to really stigmatize. Stigma requires a strong majority, even a supermajority consensus, to have much power: if you’re not Amish, you really couldn’t give a shit what the Amish think about your use of technology. Stigma is a really strong and dangerous tool that may persist well after it was intended to and apply to targets it wasn’t meant to harm, and most people sense that. I’m not sure that divestment activists recognize what they’re proposing to work with.

4. Eventually stigma will require the enlistment of the state to be really powerful and persistent. The problem here with the divestment movement is the chicken-and-egg logic that the campaign presently relies upon–that it will be the successful creation of stigma against the fossil fuel industry in the public sphere and in everyday life that will compel state action. But almost every example I can think of, good and bad, either started with the enlistment of some part of the government or mobilized state resources prior to stigma really taking hold at the popular level. What I think this suggests in part is that stigma requires a prior condition of political vulnerability in its targets–some degree of social or economic isolation. It may be that the fossil fuel industry is on the cusp of that vulnerability both because of general awareness of climate change and because of the growing economic viability of alternative energy producers. But that means again that divestment might be a distraction from producing a condition of stigma rather than a primary means of accomplishing it. E.g., that there are other things afoot that could benefit from activist support which are making fossil fuel producers vulnerable and creating at least the possibility of governmental action.

Posted in Politics | 3 Comments

The (Ab)Uses of Fantasy

Evidently I’m not alone in thinking that last week’s episode of Game of Thrones was a major disappointment. By this I (and other critics) do not mean that it was simply a case of poor craftsmanship. Instead, it featured a corrosive error in judgment that raised questions about the entire work, both the TV show and the book. Game of Thrones has always been a high-wire act; this week the acrobat very nearly fell off.

In long-running conversations, I’ve generally supported both the violence that GoT is known for and the brutal view the show takes of social relations in its fantasy setting, particularly around gender. Complaints about its violence often (though not invariably) come from people whose understanding of high fantasy draws on a very particular domestication of the medieval and early modern European past, one that has some well-understood touchstones: a relentless focus on noble or aristocratic characters who float above and outside of their society; a restriction of violence to either formal warfare or courtly rivalry; a simplification (or outright banishment) of the political economy of the referent history; orientalist or colonial tropes of cultural and racial difference, often transposed onto exotic fantasy types or creatures; and essentially modern ideas about personality, intersubjectivity, sexuality, family and so on smuggled into most of the interior of the characters.

These moves are not in and of themselves bad. Historical accuracy is not the job of fiction, fantasy or otherwise. But it is also possible that audiences start to confuse the fiction for the referent, or that the tropes do some kind of work in the present that’s obnoxious. That’s certainly why some fantasy writers like China Mieville, Philip Pullman and George R.R. Martin have variously objected to the high fantasy template that borrows most directly from Tolkien. It can lead to a misrecognition of the European past, to the sanctification of elitism in the present (by allowing elites to see themselves as nobility), and also simply to the reduction of creative possibility. If a fantasy writer is going to draw on history, there are histories outside of Europe–but early modern and medieval Europe also suggest other templates.

Martin is known to have drawn on the Wars of the Roses and the Hundred Years War (as did Shakespeare) and quite rightly points out, when criticized about the violence in Game of Thrones, that his books, if anything, are still less distressing than the historical reality. It’s a fair point on several levels–not just ‘accuracy’, but that the narrative motion of those histories has considerable dramatic possibility that Tolkienesque high fantasy simply can’t make use of. Game of Thrones is proof enough of that point!

But GoT is not Tuchman’s A Distant Mirror nor any number of other works. A while back, Crooked Timber did a lovely seminar on Susanna Clarke’s novel Jonathan Strange and Mr. Norrell. Most of the commenters focused on the way in which the novel reprises the conflict between romantics and utilitarians in 19th Century Britain, and many asked: so what do you gain by telling that story as a fantasy rather than a history?

To my mind, you gain two things. The first is that there may be deeper and more emotional truths about how it felt to live and be in a past (or present) moment that you only gain through fiction, and that some of those in turn may only be achievable through fiction that amplifies or exaggerates through the use of fantasy. The second is that you gain the hope of contingency. It’s the second that matters to the last episode of Game of Thrones.

Historical fiction has trouble with “what if?” The more it uses fiction’s permission to ask “what if”, the more it risks losing its historicity. It’s the same reason that historians don’t like counterfactuals, for the most part: one step beyond the moment of contingency and you either posit that everything would have turned out the same anyway, or you are stuck on a wild ride into an unknown, imaginary future that proceeds from the chosen moment. Fantasy, on the other hand, can follow what-ifs as long as it likes. A what-if where Franklin decides to be ambassador to the Iroquois rather than to the French is a modest bit of low fantasy; a what-if where Franklin summons otherworldly spirits and uses the secret alchemical recipes of Isaac Newton is a much bigger leap away, where the question of whether “Franklin” can be held in a recognizable form starts to kick in. But you gain in that move not only a lot of pleasure but precisely the ability to ask, “What makes the late colonial period in the U.S. recognizable? What makes the Enlightenment? What makes Franklin?” in some very new ways.

Part of what governs the use of fantasy as a way of making history contingent is also just storytelling craft: it allows the narratives that history makes available to become more interesting, more compressed, more focused, to conform not just to speculation but to the requirements of drama.

So Game of Thrones has established that its reading of the late medieval and early modern world brings forward not only the violence and precarity of life and power in that time but also the uses and abuses of women within male-dominated systems of power. Fine. The show and the books have established that perfectly well at this point. So now you have a character like Sansa who has had seasons and seasons of being in jeopardy, enough to fill a lifetime of shows on the Lifetime channel. And there is some sense of forward motion in the character’s story. She makes a decision for the first time in ages; she seems to be playing some version of the “game of thrones” at last, within the constraints of her role.

So why simply lose that sense of focus, of motion, of narrative economy? If Monty Python and the Holy Grail had paused to remind us every five minutes that the king is the person who doesn’t have shit on him, the joke would have stopped being funny on the second go. If Game of Thrones is using fantasy simply to remind us that women in its imagined, past-invoking world get raped every five minutes unless they are plucky enough to sign up with faceless assassins or own some dragons, it’s not using its license to contingency properly in any sense. It’s not using it to make better stories with better character growth, and it is not using it to imagine “what if?” If I wanted to tell the story of women in Boko Haram camps as if it were suffused with agency and possibility, I would rightly be attacked for trying to excuse crimes, dismiss suffering and ignore the truth. But that is the world that we live in, the world that history and anthropology and political science and policy and politics must describe. Fiction–and all the more, fantasy–has other options, other roads to walk.

There is no requirement for the show to have Sansa raped by Ramsay Bolton, no truth that must be told, not even the requirement of faithfulness to the text. The text has already (thankfully!) been discarded this season when it offers nothing but meandering pointlessness or, in the case of Sansa, nothing at all. So to return suddenly to a kind of conservation of a storyline (“False Arya”) that clearly will have nothing to do with Sansa in whatever future books might one day be written is no justification at all. If it’s Sansa moving into that narrative space, then do something more with that movement. Something more in dramatic terms and something more in speculative, contingent terms. Even in the source material Martin wants to use, there are poisoners and martyrs, suicides and lunatics, plotters and runaways he or the showrunners could draw upon for models of women dealing with suffering and power.

Fantasy means you don’t have to do what was done. Sansa’s story doesn’t seem to me to offer any narrative satisfactions, and it doesn’t seem to make use of fantasy’s permissions to do anything new or interesting with the story and the setting. At best it suggests an unimaginative and desperate surrender to a character that the producers and the original author have no ideas about. At worst it suggests a belief that Game of Thrones’ sense of fantasy has been subordinated to the imperative of “we have to be even grosser and nastier next time”! That’s not fantasy, that’s torture porn.

Posted in Popular Culture, Sheer Raw Geekery | 5 Comments

The Ground Beneath Our Feet

I was a part of an interesting conversation about assessment this week. I left the discussion thinking that we had in fact become more systematically self-examining in the last decade in a good way. If accrediting agencies want to take some credit for that shift, then let them. Complacency is indeed a danger, and all the more so when you have a lot of other reasons to feel confident or successful.

I did keep mulling over one theme in the discussion. A colleague argued that we “have been, are and ought to be” committed to teaching a kind of standardized mode of analytic writing and that therefore we have a reason to rigorously measure across the board whether our students are meeting that goal. Other forms of expression or modes of writing, he argued, might be gaining currency in the world, but they shouldn’t perturb our own commitment to a more traditional approach.

I suppose I’m just as committed to teaching that kind of writing as my colleague, for the same reasons: it has a lot of continuing utility in a wide variety of contexts and situations, and it reinforces other less tangible habits of thought and reflection.

And yet, I found myself unsettled on further reflection about one key point: that it was safe to assume that we “are and ought to be” committed. It seems to me that there is a danger to treating learning goals as settled when they’re not settled, just as there is a danger to treating any given mix of disciplines, departments and specializations at a college or university as something whose general stability is and ought to be assured. Even if it is probable that such commitments will not change, we should always act as if they might change at any moment, as if we have to renew the case for them every morning. Not just for others, but for ourselves.

Here’s why:

1) Even if a goal like “teaching standard analytic writing” is absolutely a bedrock consensus value among faculty and administration, the existence of that consensus might not be known to the next generation of incoming students, and a definition of the practice that is familiar to faculty might be unfamiliar to those students. When we treat some feature of an academic environment as settled or established, there almost doesn’t seem to be any reason to make it explicit, or to define its specifics, and so if students don’t know it, they’ll be continuously baffled by being held accountable to it. This is one of the ways that cultural capital acts to reproduce social status (or to exclude some from its reproduction): when a value that ought to be disembedded from its environment and described and justified is instead treated as an axiom.

2) Even if something like “teaching analytic writing” is absolutely a bedrock consensus value among faculty, if some in a new generation of students consciously dissent from that priority and believe there is some other learning goal or mode of expression which is preferable to it, then faculty will never learn to persuade those students, and will have to rely on a brute-force model to compel students to comply. Sometimes that works in the same way that pulling a child away from a hot stove works: it kicks the can down the road to that moment when those students will recognize for themselves the wisdom of the requirement. But sometimes that strategy puts the goal itself at risk by exposing the degree to which faculty themselves no longer have a deeply felt or well-developed understanding of the value of the requirement they are forcing on their students.

3) Which leads to another point: what if the presumed consensus value is not a bedrock consensus value even among faculty? If you assume it is, rather than treat the requirement as something that needs constantly renewed investigation, you’ll never really know if an assumed consensus is eroding. Junior and contingent faculty may say they believe in it but really don’t, which contributes to a moral crisis in the profession, where the power of seniority is used to demand what ought to be earned. Maybe some faculty will say they believe in a particular requirement but actually don’t do it well themselves. That’s corrosive too. Maybe some faculty say they believe in it, but what they think “it” is is not what other people think it is. You’ll never know unless the requirement or value is always being revisited.

4) Maybe there is genuine value-based disagreement or discord within the faculty that needs to be heard, and the assumption of stability is just riding roughshod over that disagreement. That’s a recipe for a serious schism at some point, perhaps at precisely the wrong moment for everyone on all sides of that kind of debate.

5) Maybe the requirement or value is a bedrock consensus value among faculty but it absolutely shouldn’t be–e.g., the argument about that requirement is really between the world as a whole and the local consensus within academia. Maybe everything we think about the value we uphold is false, based on self-referring or self-validating criteria. At the very least, one should defy the world knowingly, if one wants to defy the world effectively.

I know it seems scary to encourage this kind of sense of contingency in everything we do in a time when there are many interests in the world that wish us ill. But this is the part of assessment that makes the most sense to me: not measuring whether what we do is working as intended (though that matters, too) but asking every day in a fresh way whether we’re sure of what we intend.

Posted in Academia, Defining "Liberal Arts", Swarthmore | 2 Comments