On the Deleting of Academia.edu and Other Sundry Affairs

Once again with feeling, a point that I think cannot be made often enough.

Social media created and operated by a for-profit company, no matter what it says when it starts off about the rights of content creators, will inevitably at some point be compelled to monetize some aspect of its operations that the content creators did not want to be monetized.

This is not a mistake, or a complaint about poor management practices. The only poor practices here are typically about communication from the company about the inevitable changes whenever they arrive, and perhaps about the aggressiveness or destructiveness of the particular form of monetization that they move towards.

The problem is not with the technology, either. Uber could have been an interface developed by a non-profit organization trying to help people who need rides to destinations poorly serviced by public transport. It could have been an open-source experiment that was maintained by a foundation, like Wikipedia, that managed any ongoing costs connected to the app and its use in that way. And that’s with something that was already a product, a service, a part of the pay economy.

Social media developed by entrepreneurs, backed by venture capital, will eventually have to find some revenue. And there are only three choices: they sell information about their users and content creators, even if that’s just access to the attention of the users via advertisements; they sell services to their users and content creators; they sell the content their creators gave to them, or at least take a huge cut of any such sales. That’s it.

And right now except for a precious few big operators, none of those choices really let the entrepreneurs operate a sustainable business. Which is why so many of the newer entries are hoping either to threaten a big operator, get a payout and walk away with their wallets full (and fuck the users), or to amass such a huge amount of freely donated content that they can sell their archive and walk away with their wallets full (and fuck the users).

If the stakes are low, well, so be it. Ephemeral social conversation between people can perhaps safely be sold off, burned down and buried so that a few Stanford grads get to swagger with all the nouveau-richness they can muster. On the far other end, maybe that’s not such a great thing to happen to blood tests and medical procedures, though that’s more about the hideous offspring of the social media business model, aka “disruption”.

But nobody at this point should ever be giving away potentially valuable work that they’ve created to a profit-maker just because the service that hosts it seems to provide more attention, more connection, more ease of use, more exposure.

Open access is the greatest idea in academia today when it comes to making academia more socially just, more important and influential, more able to collaborate, and more able to realize its own cherished ideals. But open access is incompatible with for-profit social media business models. Not because the people who run academia.edu are out of touch with their customer base, or greedy, or incompetent. They don’t have any choice! Sooner or later they’ll have to move in the direction that created such alarm yesterday. They will either have amassed so much scholarship from so many people that future scholars will feel compelled to use the service–at which point they can charge for a boost to your scholarly attention and you’ll have to pay. Or they will need to monetize downloads and uses. Or monetize citations. Or charge on deposit of anything past the first article. Or collect big fees from professional associations for services. Or they’ll claim limited property rights over work that hasn’t been claimed by authors after five years. Or charge a “legacy fee” to keep older work up. You name it. It will have to happen.

So just don’t. But also keep asking and dreaming and demanding all the affordances of academia.edu in a non-profit format supported by a massive consortium of academic institutions. It has been, is and remains perfectly possible that such a thing could exist. It is a matter of institutional leadership–but also of faculty collectively finally understanding their own best self-interest.

Posted in Academia, Information Technology and Information Literacy, Intellectual Property | 2 Comments

Technologies of the Cold War in Africa (History 90I) Syllabus

I saw last year that some smart academics were using Piktochart to design more graphical, visual syllabi, so I took a stab at it.



Posted in Academia, Africa, Swarthmore | Comments Off on Technologies of the Cold War in Africa (History 90I) Syllabus

“Dates Back Millennia”

You know, I have less of an unqualified hatred for the “dates back millennia” line than I used to. I’m thinking this as I see my feed fill up with friends and colleagues complaining about Obama’s use of it in his speech to talk about the Middle East. To some extent, historians overreact to its use by politicians for two separate reasons.

The first is that of course it’s factually wrong and not at all innocently so. Which is to say that this line of explanation, whether offered as a quick throw-away or as a substantive claim, looks away from the history of the 20th Century and the very decisive role played by European colonialism and post-WWII American intervention in structuring many supposedly “ancient hatreds”. In the case of Israel-Palestine, that is particularly convenient for the United States (and for Zionists), because the precise way in which the state of Israel came into being and the ways in which the current states of the Middle East were brought into the geopolitics of the Cold War are the major and direct causal underpinnings of contemporary conflicts. The line runs away from mature responsibility and from genuine analytic understanding all at once.

The second reason for the reaction is that invoking “ancient hatreds” is not only a misdirection of attention; it also naturalizes conflicts in the bodies and minds of the combatants. It’s a kind of shrug: what can one do? But it also turns more to psychology than to history as the toolset for thinking through current politics, which is at best futile and at worst creepy.

So why do I qualify my dislike? First, I think among historians we all recognize that there’s a strong turn to the modern and contemporary among our students and our publics, a presentism that most of us criticize. But I think in moments like this, we contribute some to that presentism. We should leave a door open for times before the 20th Century to matter as causal progenitors of our own times and problems. Sure, that argument has to be made carefully (shouldn’t all historical arguments be thus?), but I actually think all of the past is weighing on the present, sometimes quite substantially so. “Ancient hatreds” isn’t quite the right way to put it, but there are aspects of conflict in the Middle East which do genuinely derive structure or energy from both the Ottoman period (early and late) and from times before that.

It’s also that I think we end up getting angry at politicians who are trying to cover over the traces of their own government’s recent historical culpability, but in so doing forget that there are many other actors who also believe in and are motivated by the supposed antiquity of their causes. On some level, if they do think so, we ought to at least listen carefully and not quickly school-marm them about why the experts hold that they’re wrong. Authenticity is a strange twilight realm. If people believe that they are upholding something ancient, that has a way of becoming true enough in some sense even if they’re wrong about the history between them and that past moment and wrong about what the ancient history really was. It might be easier simply to focus on the culpability of some states and actors for the current situation and leave aside compulsively correcting their history in some cases.

But finally, as long as we’re talking culpability, the one problem with always, invariably locating conflict and hatred as having their most relevant origins in Western colonialism and in the decisions made during post-WWII decolonization is that we risk having our own version of a distraction from uncomfortable truth. As I noted, maybe sometimes there really is something older at play. There’s a really great book that the historian Paul Nugent wrote about the Ghana-Togo borderlands in West Africa which argues that, counter to the common trope that the Berlin Conference simply created random, arbitrary and incoherent borders, the border there both reflected older 19th Century histories and was substantially fashioned by the communities in the borderlands themselves. More uncomfortably, maybe sometimes there’s something far more recent and contingent at play–maybe sometimes in current global conflicts even our preferred causal stage is an “ancient conflict” of little real empirical relevance to combatants, who are instead being put into motion by the political and cultural histories of the last twenty years or even the last ten.

Posted in Academia, Politics, Production of History | Comments Off on “Dates Back Millennia”

All Saints Day

Commenting on the debate over Halloween costumes seems freshly risky this week, but the subject has been on my mind since I read this New York Times article on the subject on October 30.

My first thought is that calls for the resignation of the Silliman House masters at Yale are dangerously disproportionate to the email that they wrote in response to polite guidance from the Yale administration. I’ll come back to why that disproportionate response worries me so much later in this essay.

And yet I don’t entirely agree with the way that Erika Christakis chose to come at the issue. I wish everyone could back up a step so that the entire discussion is not about free expression vs. censorship or safe spaces vs. stereotype threats. Once the discussion has locked into those terms, then the “free speech” advocates are stupidly complicit in defending people who show up at parties in blackface or are otherwise costumed or having themed parties with deliberately offensive stereotypes. Once the discussion has locked into those terms, people who want to say that such stereotypes have a real, powerful history of instrumental use in systems of racial domination find that advocacy recast as censorship–and are also unable to leave space open to hear people like Erika and Nicholas Christakis as making any other kind of point.

The real issues we should be talking about are:

1) The concepts of appropriation and ownership. This is where moves are being made that are at least potentially reactionary and may in fact lead to the cultural and social confinement or restriction of everyone, including people of color, women, LGBTQ people, and so on. In some forms, the argument against appropriation is closely aligned with dangerous kinds of ethnocentrism and ultra-nationalism, with ideas about purity and exclusivity. It can serve as the platform for an attack on the sort of cosmopolitan and pluralistic society that many activists are demanding the right to live within. Appropriation in the wrong institutional hands is a two-edged sword: it might instruct an “appropriator” to stop wearing, using or enacting something that is “not of their culture”, but it might also require someone to wear, use and enact their own “proper culture”.

When I have had students read Frederick Lugard’s The Dual Mandate in British Tropical Africa, which was basically the operator’s manual for British colonial rule in the early 20th Century, one of the uncomfortable realizations many of them come to is that Lugard’s description of the idea of indirect rule sometimes comes close to some forms of more contemporary “politically correct” multiculturalism. Strong concepts of appropriation have often been allied with strong enforcement of stereotypes and boundaries. “Our culture is these customs, this clothing, this food, this social formation, this everyday practice: keep off” has often been quickly reconfigured by dominant powers to be “Fine: then if you want to claim membership in that culture, please constantly demonstrate those customs, clothing, food, social formations and everyday practices–and if you don’t, you’re not allowed to claim membership”.

And then further, “And please don’t demonstrate other customs, clothing, food, social formations and everyday practices: those are for other cultures. Stick to where you belong.” I recall a friend of mine early in our careers who was told on several occasions during her job searches that since she was of South Asian descent, she’d be expected to formally mentor students from South Asia as well as Asian-Americans, neither of which she particularly identified with. I can think of many friends and colleagues who have identified powerfully with a particular group or community but who do not dress as or practice some of what’s commonly associated with that group.

What’s being called appropriation in some of the current activist discourses is how culture works. It’s the engine of cultural history, it’s the driver of human creativity. No culture is a natural, bounded, intrinsic and unchanging thing. A strong prohibition against appropriation is death to every ideal of human community except for a rigidly purified and exclusionary vision of identity and membership.

Even a weak prohibition against appropriation risks constant misapplication and misunderstanding by people who are trying to systematically apply the concept as polite dogma. To see one example of that, look to the New York Times article, which describes at one point a University of Washington advice video that counsels people to avoid wearing a karate costume unless you’re part of the real culture of karate. But karate as an institutional culture of art and sport is already thoroughly appropriated from its origins in Okinawa, and it was in turn an appropriation of sorts from Chinese martial arts–and no martial arts form in the world today is anything even remotely like its antecedents in practice, form or purpose. Trying to forbid karate costuming to anyone but a truly authentic “owner” of the costume is a tragic misunderstanding of the history of the thing being regulated. It’s also a gesture that almost certainly forbids the wearing of any costume whose referent is not wholly imaginary. If a karate outfit is appropriation for anyone but a genuine Okinawan with a black belt, then so too are the costumes of firefighters, police, soldiers, nurses, doctors, astronauts and so on. Even imaginary characters are usually appropriations of some kind or another, drawn out of history and memory.

It is precisely these kinds of discourses about appropriation that are used by reactionaries to protest Idris Elba being cast as Heimdall, or to assert that a tradition of a particular character or cultural type being white or male or straight means it must always be so. It might be possible to configure a critique so that appropriation from below is always ok and appropriation from above is never ok, but that kind of categorical distinction itself rests on the illusion of power being rigid, binary and fixed rather than fluid, performative and situational.

What I think many activists mean to forbid is not appropriation but disrespect, not borrowing but hostile mockery. The use of costumes as weapons, as tools of discrimination. But it’s important to say precisely that and no more, and not let the word appropriation stand in for a much more specific situational critique of specific acts of harmful expression and representation. “Appropriation” is being used essentially to anticipate, to draw a comprehensive line proactively in order to avoid having to sort out with painful specificity which costumes and parties are offensive and which are not after the fact of their expression.

2) But this leads to my second point: “appropriation” is being used for the convenience of custodial authority, for the use of institutions, for the empowerment of a kind of kindly quasi-parental control over communities.

Institutions–like college administrations and particularly the legal advisors they employ–don’t like situational judgments, they don’t like critiques that apply with strong force in some situations and don’t apply at all in others. So they often seek to rework demands for change into rules and guidelines that can be applied evenly to all subjects at all times. That’s one reason why appropriation as a concept at least has the potential to force people to perform the identities they claim according to a pre-existing sketch in the hands of institutional power.

Custodial authority in this respect and many others is a danger for other reasons. Here I can’t do much more than echo Fredrik deBoer’s warning against “University Inc.”: the custodial university quickly becomes the neoliberal corporate university. On some campuses, student activists are incidentally or accidentally strengthening the capacity and reach of custodial power over faculty, staff and students alike. Among other consequences, this change in academic institutions often puts faculty from underrepresented groups at much more intense risk: student activists are sometimes accidentally undercutting one of their most cherished objectives.

Even when the people in the crosshairs do not have that vulnerability, they have the basic vulnerability that all working professionals have in the disastrous political economy of early 21st Century America. In the Christakis’ case and many others, I feel as if simplistic ideas of asymmetrical power and “punching up” are being used to overlook the potentially disastrous consequences of introducing greater precariousness into the lives of middle-aged professionals. Sometimes failed leadership is sufficient cause to warrant making an individual’s life precarious, and sometimes the asymmetry of power is enough that one can sleep easy about the consequences–say, with the resignation of the University of Missouri’s president, who I think we can say will in fact land on his feet. But often not. What’s being said to the Christakises in those videos is serious business, and I don’t know that those saying it realize how serious it is, even though many of them clearly feel with legitimate passion that what was said by Erika Christakis is also serious business that makes them feel unsafe in a place where they prize a sense of security. It’s a cliche, but here something of “two wrongs don’t make a right” is important.

This is also a concern about the future of academic institutions themselves. This is the other problem with some of these protests. I feel bad for everyone today in that everything they write on social media, every protest they attend, every response they give, has some chance of being seized upon by commenters all over the world. Nobody was looking at my college life with that kind of attention. But anyone who aspires to political action, even action as intimate and simple as seeking personal safety and happiness, has got to pay attention to the infrastructure surrounding that action, and to the consequences that will flow from it. Bit by bit, protests that seem to assert that yes, the university is indeed a world completely apart from the social and cultural realities around it, add fuel to the fires being set by reactionary politicians all around the United States. Bit by bit, protests whose rhetoric is meant to be strictly local but is turned national or global end up looking tone-deaf or disproportionate. This could be a learning experience: liberal arts learning is supposed to increase the capacities of students to speak, think, write and act in the world around them. But for it to be a learning experience, in some cases students (and faculty) will have to treat seriously the question of how a particular claim will sound or mean outside of the local context. And they will need to think very carefully about matching critical demands to visions of proportionality that sound reasonable to more than just the group at hand.

3) This leads in turn to my third point. What is going on with struggles over Halloween costumes and much else besides within college and university culture has implications for the futures of liberal arts-educated students. And they are not the implications that are commonly drawn either by “free speech advocates” or by defenders of current campus activism.

“Free speech”, broadly speaking, is not what is at risk in most campus disputes. Occasionally it is to some extent: that’s how I interpret the seriously misconceived protests at Wesleyan recently against the student newspaper. Even in the case of Wesleyan, however, the initial impulse to inhibit or constrict what can be said gave way to something more managerial and neoliberal, this time not from administration but from student leadership itself. The student assembly proposed cutting the funding of the paper in the name of a drive for efficiency, having it “compete” for positions against others with an inbuilt incentive-based reward for incorporating diversity.

What I think that move suggests is that some of the drive for cultural transformation, with its constant turn towards custodial forms of managerial and institutional power, may be in danger of turning away from an ideal of creating safety and security for all towards an ideal of governance over others. That the struggles now underway have at least some danger of congealing into an intramural struggle for elite power in the political economy to come. On one side, the future economic elites: the students from selective institutions feeding into the finance industry and Silicon Valley. On the other side, the future cultural managers and bureaucrats: the students from selective institutions feeding into consultancies, non-profits, risk management administration, human resources, into the civic middlemen of a future capitalism.

Where that danger becomes clearest is precisely in the talk of guidance and guidelines, suggestions and “soft rules”. Not so much in the talk itself, but in who the talk is aimed at. Free speech advocacy tends to see every guideline from an institution as a law, and turns to a libertarian vocabulary to contest it. The issue is less the making of law and more the incipient character of class hierarchy in the political economy to come.

One of the things that I heard coming from a substantial wave of student activism here several years ago was that they held themselves to be already knowledgeable about all the things that they felt a good citizen and ethical person should know. It was the other students, the absent students, the students who don’t study such subjects, who worried them. And some of the activists had a touching faith in the power of our faculty’s teaching to remake the great unwashed of the student body. If only they took the right classes, they’d do the right thinking. As one Swarthmore student in spring 2013 said in the group I was in, “I can’t believe there are students here who graduate without having heard the word intersectionality.”

This moment worried me, even though it is important as always to remember: this was a young person, and I said things under similar circumstances that I would be deeply embarrassed to hear quoted directly back to me. It worried me because I hear that same concern a lot across the entire space of cultural activism, both on and off-campuses.

It worries me first because that student and many similar activists are wrong when they assume that what they don’t like in the culture is a result of the absence of the ideas and knowledge that they hold dear. Far more students here have been in a course where concepts like “intersectionality” come up than this student thought. All political ideologies in the contemporary American public sphere, from the most radical to the most reactionary, have a troubling tendency to assume that agreement with their views is the natural state of the mass of people except for a thin sliver of genuinely bad actors, and therefore where a lack of agreement or acceptance holds, it must be because the requisite knowledge has been kept from the masses. This is a really dangerous proposition, because it blinds any political actor to the possibility that many people have heard what you have to say and don’t agree for actual reasons–reasons that you’ll have to reckon with eventually.

It worries me second because I think some activists may be subconsciously thinking that if they can sufficiently command custodial or institutional power, they will not have to reckon with such disagreement. Not only does that mistake custodial power as permanently and inevitably friendly to their own interests, it is where the temptation to use class power against other social groups will enter in, has already entered in.

This is what worries me most. The thing that I wish that student had recognized is that some of the people that he wishes knew the word intersectionality already know the reality of it. They might not have the vocabulary he does, but they have the phenomenology right enough. Perhaps more right than the student did.

I worry, as in the case of Halloween costumes and much else, that at least some cultural activists are setting themselves up as future commissioners of culture over other social classes and their worlds, that this is as much about admonishing people “out there” for their failure to use the right terms, for their outré mannerisms and affect, for their expressive noncompliance. That this is all about kids who will become upper middle-class (or rich) through access to education judging and regulating kids who will not have that status or education, no matter where the educated kids started in life. That making blanket policies about Halloween costumes and much else might become a building block of class differentiation, part of a system of middle-class moral paternalism.

That’s what an earlier generation of cultural activism left me doing as a young graduate who wanted to be an “ally”: piously correcting people outside of my immediate social universe whenever life put me into close contact with them. Often when it was the most innocent and well-intended on my part, it gave the greatest offense, as when I once started talking about the importance of working-class unionism with my non-union working-class cousins that I was meeting for the first time at my paternal grandfather’s house.

At least in some cases, the entire infrastructure of current cultural activism is disabling careful listening, patience and humility at the moments where they are needed most, particularly within the ethical commitments that many activists themselves treasure and articulate. That’s why guidelines and rules and custodial dictates and finger-wagging about general concepts like appropriation are a problem: they take what is profoundly situational and circumstantial and turn it systematic. They interrupt rather than intensify attention. They make a spectrum of expressive practice into a right-wrong binary.

We need to tell someone thinking of wearing blackface to a party to absolutely stop right there and think again. We need to tell someone planning a fraternity party with a “gang theme” to cut that shit out or else. Neither of those moments is meaningful expression or harmless fun, and there needs to be no room for them. But we also need to not give ourselves permission to piously tell the kid in the karate uniform that they’re appropriating someone’s culture, or to inform the guy in the cowboy uniform that cowboys were nothing but agents of genocidal conquest.

We need to not self-nominate as authorities over culture, especially the speech and cultural activity of people whom we arrogantly judge don’t know as much about it as we do. We need to be in culture, in circulation, even acting through appropriation and imitation, a part of the crowd and not above it. We are all dancers, not choreographers; our only choreographer is the endless, ceaseless and sometimes abrasive motion of human thought and expression in a never-simple world.

Posted in Academia, Popular Culture, Production of History, Swarthmore | 17 Comments

On the Eating of Lotuses

Wary as I might be (like most historians) of historical analogies, there’s an obvious one out there that could stand some use. I think about it every time I read a story about how young and innocent men and women, most but not all of them Muslims, have snuck out of their home nations to join ISIS in Syria or Iraq. The first thing I think about is, “This is pretty much like the Spanish Civil War and the International Brigades”.

Before all my lefty friends descend upon me in their full wrath, let me be really clear about what is and is not similar. The values, ideologies, purposes, and moral character of the two cases are 110% different, alien to each other. I think of the Abraham Lincoln Brigade as genuinely heroic if also slightly naive; I think of ISIS as a horrific movement that I would love to see scourged from the planet and I think anybody who joins them is at best a dupe in service to evil.

So what’s the point of the analogy, and what can we learn from it? The thing is that the mass media in the U.S. and Western Europe acts as if it is baffling when promising, bright and often quite “Westernized” young people want to join ISIS. That often leads to polite, careful, nervous attempts to anatomize Islam as a religion or the “culture” of Islamic communities as if they hold the answer.

The answer in some sense is the same as the answer about young foreigners who flocked to Spain to fight the fascists. What the people who evaded legal restrictions were seeking was the chance to really matter in the world, to put their lives on the line to shape the future in a situation where it seemed to genuinely hang in the balance. They did so in a context where the everyday world around them offered nothing more than stasis and passivity to ordinary citizens and a world where the people in charge during their lives had largely proved that they were feckless, arrogant and untrustworthy at best. If you were nineteen in 1937, your life began in the midst of catastrophic war, your childhood was in an era where heedless plutocrats speculated carelessly and governments demonstrated their near-total inability to understand the economic systems they supervised. You came of age in the middle of a global depression full of misery, and even if you were hopeful about the countermoves of social democrats, you had to wonder why anyone thought that eating your vegetables, studying hard in school and being a good citizen was either a secure or existentially meaningful way to look at your own future, The Waltons notwithstanding.

The reason the media professes to be mystified by the charisma of ISIS for some young people is that they aren’t prepared to countenance the degree to which the world that’s on offer to those young people, Muslim or otherwise, is at best something to be resigned to. Even for people born in privilege, this is a world full of short-term and long-term precariousness. You can’t look forward to working for a company that will reward your long service. You can’t acquire skills that have lasting market value. You can’t count on social progress in your world, or expect basically good governance from your nation. Technological innovation seems less likely unless you’re looking forward to the next social media app. There isn’t that much to believe in any longer, no matter where you live.

Most importantly, many people, old and young, have every reason to think they don’t matter as individuals. The financial inequality tilting the entire planet towards a smaller and smaller elite is matched by a kind of spiritual and imaginative inequality. Yes, sure, online media offer some new avenues for democratic participation in culture-making, often in a better and richer way than Andy Warhol’s fifteen minutes of fame. But otherwise? Liberal democracies around the planet have stunted horizons, barely daring to tackle minor bureaucratic reforms, and they’re frequently captured by political elites that increasingly seem like aristocracies. Social mobility seems as much a quaint thing of yesteryear as slide rules and telegraphs. Nobody goes from the mail room to the boardroom now. It doesn’t matter if these perceptions are not empirically accurate: they are the way the world feels to many people. Much as I intensely disliked what he advocated as the answer to the problem he identified, I still think Paul Berman was right in 2001 to call out liberalism for its “coldness”, to remark on the ways that defending liberalism and democracy seems abstract, distant, and passive. In fact, that a commitment to liberalism seems only like defense, never like movement or change or improvement.

In the search for something warmer and more sustaining, something that promises a direct relationship between being an individual and changing the world, there is not a lot out there at the moment. Small surprise then that ISIS and similar movements strike a chord. No less a surprise that a big, dumb, clumsy quasi-empire like the United States and Western Europe can barely even understand that this is what they’re facing, let alone have anything like a remotely honest conversation about what it means to stand against that moment. Small wonder in some sense that ISIS volunteers and anti-ISIS volunteers have some similar motives.


That’s what burns me so much about the wider coverage of US attempts to counter the influence of ISIS and their allies in Syria, Iraq and elsewhere. For the last six months, we’ve had a steady flow of articles in the New York Times and other respectable, mainstream publications that have privileged access to sources in the US military and government about how previous strategies in the region have failed despite considerable expenditure of funds and efforts, and in particular how the groups we’ve chosen as our allies and surrogates are poorly trained, indifferently led, and half-heartedly committed. And then the pundits and the journalists and the mostly-anonymous sources harrumph and ponder and pontificate about how to do it right. Which is just silly. There isn’t a way to do it right as long as we remain what we are, how we are. Doing it right would involve living up to our potential as a liberal democracy rather than down to our ersatz empirehood. Or it would require thinking more clearly about what we are doing as an empire in ways that I think the United States is incapable of achieving.

I’ve found some of the smart scholarship on the history and character of empires published in the last few years very useful for rethinking the rather kneejerk understanding of empire as a massively and inarguably worse and more oppressive political form than nationhood, particularly Fred Cooper and Jane Burbank’s sweeping history of empire as a political form and Charles Maier’s thoughtful reflections on similar themes. These are not apologies for empire, let alone advocacy of it, but they do open up a more analytic understanding of why empires, including modern Western ones, tend to experience certain kinds of recurrent crises and to fall prey to some of the same self-defeating uses of violence and injustice even if they also avoid some of the distinctively modern failures of national, Westphalian sovereignty.

One of those recurring problems is the relationship between imperial cores and their clients on their frontiers. What can the core offer to agents or groups at its periphery who might protect or enhance the power of the center?

Not incorporation or membership: that’s the whole point of the difference between empires and nations, that at least in theory the autonomy of actors at the periphery or frontier is a selling point for both empire and client alike. In practice, empires usually forget that autonomy both because of their own chauvinistic ideologies, how the center justifies its centrality, and because some of the value of empires to core and periphery alike is in the standardization of mechanisms of exchange, in the protection of travel, in the regulation of commerce, all of which can slide very quickly into other kinds of constraint and aggression.

No, what the core can offer is resources that allow its clients or agents to enact their own goals and achieve supremacy over rivals beyond the edges of imperial influence. In return, the empire should not expect ideological, religious or cultural loyalty. Because being loyal to an empire for its values, its culture, its way of life, is strictly for dummies. The empire will not respect that loyalty because it doesn’t see its frontiers as “home”. Committing to empire in that sense involves selling out authenticity, selling out everything that makes you a part of your own historical world, in return for nothing at all. So the only collaborators who show up when that’s the deal are naive, stupid or desperate.

Maier is willing to concede that maybe the post-1945 United States isn’t fully an empire in the classic sense, but he insists that it has had many of the same characteristics in its relationship to the world and that Americans have had a hard time grasping the implications of that relationship, as the citizens and leaders of empires often do.

The point is not to seek more brutal or unprincipled clients, either. This is what “realist” thinkers in the Cold War and since have habitually preferred, that the United States or the West should select clients who will do whatever dirty work the empire can’t be seen to do. That’s just the mirror opposite of asking groups you support to show their fealty to Mom, apple pie and Chevrolet, but it’s the same mistake. Empires pay clients on their frontiers just so the clients won’t attack the empire and will attack those who would. That’s it. They’re not employees, they’re not citizens, they’re not servants or slaves.

So what can we offer to fighters in Syria? Weapons, cash, resources. Fine. What they do next is up to them. That’s their reason for asking for those things: so they can do what they want and fight whom they choose, to be who they already are. They might indeed choose, in time, to fight the empire that gave them the weapons. That, too, is a common part of the history of empires. But that’s what the bargain is all about, if the empire chooses to pursue it. The pundits who pretend that we should be looking for good, loyal, liberal democratic capitalist evangelical Christian American-loving Syrian proxies who will follow our orders and obey our strategy should really just say that we’re looking for idiots and unicorns, because it amounts to the same thing. Especially in the Middle East, there are many reasons to remember that giving America the loyalty it imagines it requires is a way to end up abandoned and desperate whenever the shooting dies down for a moment. Anyone who is paying attention knows what America does, what even the least racist and brutal empire will eventually do, which is betray anyone who actually believed in it. We encourage people to paint targets on themselves and then we look the other way when they’re murdered and tortured and then we have the incredible cheek to act like nobody notices when we do it.


In both cases–the search for proxies to fight wars we can’t or won’t fight, the desire to stop people from going to join ISIS (or to fight it)–the real problem is with the level of hallucinatory self-regard in American appraisals of the situation. Maybe that’s another weakness of empires. Maybe that’s a reason to start thinking about a world that’s organized around neither nations nor empires, since both constructions of sovereignty frequently wrap the world in layers of injustice, stupidity and delusion. Pundits keep talking about “non-state actors” without being even slightly interested in talking about them, much as the Pentagon and the politicians jabber on about Syria without wanting to acquire even the slightest curiosity about the specific humanity and history of the people who are dying, killing, leaving or hanging on there.

Maybe the only way to move forward is to stop being what we are and have been, stop wanting what we’ll never get. Maybe it takes understanding and curiosity before demands and judgment. Maybe it takes acknowledging that what we’re offering many people at home is powerlessness, insignificance and passivity while we claim to be providing the opposite. Maybe it takes acknowledging that all we have to offer is money and guns, and that we have no right to tell anyone else how to use them, since we don’t permit anyone else to tell us the same. If we want to offer more at home and to the world, the problem is not out there, it’s in here. We’re the worst victims of our many and myriad illusions, though by no means the only ones.

Posted in Politics | 5 Comments


Over the last decade, I’ve found my institutional work as a faculty member squeezed into a kind of pressure gradient. On one side, our administration has been requesting or requiring more and more data, reporting and procedures that are either needed to document some form of adherence to the standards of external institutions or that are wanted in order to further professionalize and standardize our operations. On the other side, I have colleagues who either ignore such requests (both specific ones and the entire issue of administrative process) to the maximum extent possible or who reject them entirely on grounds that I find either ill-informed or breathtakingly sweeping.

That pressurized space forms from wanting to be helpful but wanting also to actually take governance seriously. I think stewardship doesn’t conform well to a hierarchical structure, but it also should come with some sense of responsibility to the reality of institutions and their relationship to the wider world. The strongest critics of administrative power that I see among faculty, both here at Swarthmore and in the wider world of public discourse by academics, don’t seem very discriminating in how they pick apart and engage various dictates or initiatives, and more importantly, rarely seem to have a self-critical perspective on faculty life and faculty practices. At the same time, there’s a lot going on in academia that comes to faculty through administrative structures and projects, and quite a lot of that activity is ill-advised or troubling in its potential consequences.

A good example of this confined space for me perennially forms around assessment, which I’ve written about before. Sympathy to my colleagues charged with administrative responsibilities around assessment means I should take what they ask me to produce seriously, both because there are consequences to the institution if faculty fail to do so in the specified manner and because I value them and even value the concepts embedded in assessment.

On the most basic human level, I agree that the unexamined life is not worth living. I agree that professional practices which are not subject to constant examination and re-evaluation have a tendency to drift towards sloppiness and smug self-regard. I acknowledge that given the high costs of a college education, potential students and their families are entitled to the best information we can provide about what our standards are and how we achieve them. I think our various publics are entitled to similar information. It’s not good enough to say, “Trust us, we’re great”. That’s not even healthy if we’re just talking to ourselves.

So yes, we need something that might as well be called “assessment”. There is some reason to think that faculty (or any other group of professionals) cannot necessarily be trusted to engage in that kind of self-examination without some form of institutional support and attention to doing so. And what we need is not just introspective but also expressive: we have to be able to share it, show it, talk about it.

On the other hand, throughout my career, I’ve noticed that a lot of faculty do that kind of reflection and adjustment without being monitored, measured, poked or prodded. Professionalization is a powerful psychological and intellectual force through the life cycle of anyone who has passed through it, for good and ill. The most powerfully useful forms of professional assessment or evaluation that I can think of are naturally embedded in the workflow of professional life. Atul Gawande’s checklists were a great idea because they could be inserted into existing processes of preparation and procedure, because they are compatible with the existing values of professionals. A surgeon might grouse at the implication that they needed to be reminded about which leg to cut off in an amputation but that same surgeon would agree that it’s absolutely essential to get that right.

So assessment that exists outside of what faculty already do anyway to evaluate student learning during a course (and between courses) often feels superfluous, like busywork. It’s worse than that, however. Not only do many assessment regimes add procedures like baroque adornments and barnacles, they attach to the wrong objects and measure the wrong things. The amazing thing about Gawande’s checklists is that they spread because of evidence of their very large effect size. But the proponents of strong assessment regimes, whether that’s agencies like Middle States or it’s Arne Duncan’s troubled bureaucratic regime at the U.S. Department of Education, habitually ignore evidence about assessment that suggests that it is mostly measuring the wrong things at the wrong time in the wrong ways.

The evidence suggests, especially for liberal arts curricula, that you don’t measure learning course by course and you don’t measure it ten minutes after the end of each semester’s work. Instead you ought to be measuring it over the range of a student’s time at a college or university, and measuring it well afterwards. You ought to be measuring it by the totality of the guidance and teaching a faculty member provides to individual students, and by moments as granular as a single class assignment. And you shouldn’t be chunking learning down into a series of discrete outcomes that are chosen largely because they’re the most measurable, but through the assemblage of a series of complex narratives and reflections, through conversations and commentaries.

In a given semester, what assessment am I doing whether I am asked to do it or not? In any given semester, I’m always trying some new ways to teach a familiar subject, and I’m always trying to teach some new subjects in some familiar ways. I am asking myself in the moment of teaching, in the hours after it, at the end of a semester and at the beginning of the next: did that work? What did I hope would work about it? What are the signs of its working: in the faces of students, in the things they say then and there in the class, in the writing and assignments they do afterwards, in the things they say during office hours, in the evaluations they provide me. What are the signs of success or failure? I adjust sometimes in the moment: I see something bombing. I see it succeeding! I hold tight in the moment: I don’t know yet. I hold tight in the months that follow: I don’t know yet. I look for new signs. I try it again in another class. I try something else. I talk with other faculty. I write about it on my blog. I read what other academics say in online discussion. I read scholarship on pedagogy.

I assess, I assess, I assess, in all those moments. I improve, I think. But also I evolve, which is sometimes neither improvement nor decline, simply change. I change as my students change, as my world changes, as my colleagues change. I improvise as the music changes. I assess.

Why is that not enough for the agencies, for the federal bureaucrats, for the skeptical world? There are two reasons. The first is that we have learned not to trust the humanity of professionals when they assure us, “Don’t worry, I’m on it.” For good reasons sometimes. Because professionals say that right up to the moment that their manifest unprofessionalism is laid screamingly bare in some awful rupture or failure. But also because we are in a great war between knowing that most of the time people have what my colleagues Barry Schwartz and Ken Sharpe call “practical wisdom” and knowing that some of the time they also have an innocent kind of cognitive blindness about their work and life. Without any intent to deceive, I can nevertheless think confidently that all is well, that I am teaching just as I should, that I am always above average and getting better all the time, and be quite wrong. I might not know that I’m not seeing or serving some group of students as they deserve. I might not know that a technique that I think delivers great education only appears to because I design tests or assignments that evaluate only whether students do what I want them to do, not whether they’ve learned or become more generally capable. I might not know that my subject doesn’t make any sense any longer to most students. Any number of things.

So that’s the part that I’ll concede to the assessors: it’s not enough for me to be thoughtful, to be practically wise, to work hard to sharpen my professionalism. We need something outside ourselves: an observer, a coach, a reader, an archive, a checklist.

I will not concede, however, that their total lack of interest in this vital but unmeasurable, unnumbered information is acceptable. This should be the first thing they want: our stories, our experiences, our aspirations, our conversation. A transcript of the lived experience of teaching. This is the second reason that the assessors think that what we think about our teaching is not wanted or needed. They don’t want that because they believe that all rhetoric is a lie, all stories are told only to conceal, all narrative is a disguise. They think that the work of interpretation is the work of making smoke from fog, of making lies from untruths. The reason they think that is that stories belong at least somewhat to the teller, because narratives inscribe the authority of the author. They don’t want to know how I assess the act of teaching as I perform it because they want a product, not a process. They want data that belongs to them, not information that creates a relationship between the interpreter and the interpreted. They want to scrub evidence clean, to make an antiseptic knowledge. They want bricks and mortar and to be left alone to build as they will with it.


I get tired of the overly casual use of “neoliberal” as a descriptive epithet. Here however I will use it. This is what neoliberalism does to rework institutions and societies into its preferred environment. This is neoliberalism’s enclosure, its fencing off of commons, its redrawing of the lines. The first thing that gets done with data that has had its narrative and experiential contaminants scrubbed clean is that the data is fed back into the experience of the laborers who first produced it. This was done even before we lived in an algorithmically-mediated world, and has only intensified since.

The data is fed back in to tell us what our procedures actually are, what our standards have always been. (Among those procedures will always be the production of the next generation of antiseptic data for future feedback loops.) It becomes the whip hand: next year you must be .05% better at the following objectives. If you have objectives not in the data, they must be abandoned. If you have indeterminacies in what you think “better” is, that’s inadmissible: rarely is this looping even subject to something like a Bayesian fuzziness. This is not some exaggerated dystopic nightmare at the end of an alarmist slippery slope: what I’m describing already happened to higher education in the United Kingdom, largely accomplishing nothing besides sustaining a class of transfer-seeking technocratic parasites who have settled into the veins of British universities.

It’s not just faculty who end up caught in the loop, and like frogs boiling slowly to death, we often don’t see it happening as it happens. We just did our annual fire drill here in my building, and this year the count that we did of the evacuees seemed more precise and drawn-out than last year, and this year we had a mini-lecture about the different scenarios and locations for emergency assembly and it occurred to me: this is so we can report that we did .05% better than last year.

We always have to improve just a little, just as everything has to be “growth-based”, a little bigger next year than last year. It’s never good enough to maintain ground, to defend a center, to sustain a tradition, to keep a body healthy happy and well. Nor is it ever good enough to be different next year. Not a bit bigger, not a bit better, but different. New. Strange. We are neither to be new nor are we to maintain. We are to incrementally approach a preset vision of a slightly better but never perfect world. We are never to change or become different, only to be disrupted. Never to commune or collaborate, always to be architected and built.


So here I am in the gradient again, bowed down by the push on all sides. I find it so hard when I talk to faculty and they believe that their teaching is already wholly and infinitely sufficient. Or that it’s nobody’s business but their own how they teach, what they teach, and what comes of their teaching. Or that the results of their teaching are so sublime, ineffable and phenomenologically intricate that they can say nothing of outcomes or consequences. All these things get said, at Swarthmore and in the wider world of academia. An unexamined life.

Surely we can examine and share, express and create. Surely we can provide evidence and intent. Assess and be assessed in those ways. Surely we don’t have to bury that underneath fathoms of tacit knowledge and inexpressible wisdom. We can have our checklists, our artifacts.

But surely too we can expect from administrations that want to be partners that we will not cooperate in building the Great Machine out of the bones of our humane work. That we’re not interested in being .05% better next year, but instead in wild improvisations and foundational maintenance, in becoming strange to ourselves and familiar once again, in a month, a moment or a lifetime. Surely that’s what it means to educate and become educated in an uncertain world: not .05% more measured comprehension of the impact of the Atlantic slave trade on Sao Tome, but thinking about how a semester of historical study of the Atlantic slave trade might help a poet forty years hence to write poems, might sharpen an analytic mind, might complicate what was simple or simplify what was complex. Might inform a diplomat ten years from now, might shape a conservative’s certainty that liberals have no answers when he votes in next year’s Presidential race. Might inspire a semester abroad, might be an analogy for an experience already had. I can talk about what I do to build ramps to all those possibilities and even to the unknown unknowns in a classroom. I can talk about how I think it’s working and why I think it’s working. But don’t do anything that will lead to me or my successors having to forgo all of that thought in favor of .05% improvements onward into the dreary night of an incremental future.

Posted in Academia, Defining "Liberal Arts", Oh Not Again He's Going to Tell Us It's a Complex System, Swarthmore | 5 Comments

Oath for Experts Revisited

I was just reminded by Maarja Krustein of a concept I was messing around with a while back, of getting people together to draft a new “oath for experts”. I had great ambitions a few years back about this idea, about trying to renovate what an expert ought to act like, to describe a shared professional ethic for experts that would help us explain what our value still might be in a crowdsourced, neoliberal moment. The Hippocratic Oath is at least one of the reasons why many people still trust the professionalism of doctors (and are so pointedly scandalized when it is unambiguously violated).

We live in a moment where many people increasingly believe either that they can get “good enough” expertise from crowdsourced knowledge online, or that experts are all for sale to the highest bidder or will narrowly conform their expertise to fit the needs of a particular ideology or belief system.

I think in both cases these assumptions are still more untrue than true. Genuine experts, people who have spent a lifetime studying particular issues or questions, still know a great deal of value that cannot be generated by crowdsourced systems–in fact, most crowdsourcing consists of locating and highlighting such expertise rather than spontaneously generating a comparable form of knowledge in response to any query. I still think a great many experts, academic and otherwise, remain committed to providing a fair, judicious accounting of what they know even when that knowledge is discomforting to their own political or economic interests.

Mind you, crowdsourcing and other forms of networked knowledge are nevertheless immensely valuable, and sometimes a major improvement over the slow, expensive or fragile delivery of authoritative knowledge that experts in the past could provide. Constructing accessible sources of everyday reference in the pre-Internet world was a difficult, laborious process.

It’s also undoubtedly true that there are experts who sell their services in a crass way, without much regard for the craft of research or study, to whoever is willing to pay. But this is why something like an oath is necessary, and why I think everyone who depends upon being viewed as a legitimate expert has a practical reason to join a large-scale professional alliance designed to reinvigorate the legitimacy of expertise. This is why professionalization happened during the 20th Century, as groups of experts who shared a common training and craft tried to delegitimate unscrupulous, predatory or dangerous forms of pseudo-expertise and insist on rigorous forms of licensing. I don’t think you can ever create a licensing system for something as broad as expertise, but I do think you could expect a common ethic.

The last time I tried to put forward one plank of a plausible oath, I made the mistake of picking an example that created more heat than light. I might end up doing that again, perhaps by underestimating just how many meal tickets this proposed oath might cancel. But let’s try a few items that I personally would be glad to pledge, in the simplest and most direct form that I can think of:

1) An expert should continuously disclose all organizations, groups and companies to whom they have provided direct advice or counsel, regardless of whether the provision of this advice was compensated or freely given. All experts should maintain a permanent, public transcript of such disclosures.

2) An expert should publicly disclose all income received from providing expert advice to clients other than their main employer. All experts should insist that their main employer (university, think tank, political action committee, research institute) disclose its major sources of funding as well. The public should always know whether an expert is paid significantly by an organization, committee, company or group that directly benefits from that person’s expert knowledge.

3) Any expert providing testimony at a criminal or civil trial should do so for free. No expert should be provided compensation directly or indirectly for providing expert testimony. Any expert who serves as a paid consultant for a plaintiff or a defendant should not provide expert testimony at a trial involving that client.

4) All experts should disclose findings, information or knowledge that contradict or challenge their own previous conclusions or interpretations when that information becomes known to them in the course of their own research or analysis. Much as newspapers are expected to publish corrections, experts should be prepared to do the same.

Posted in Oath for Experts | 4 Comments

Putting Out Fire With Gasoline

I appreciate what Sady Doyle is trying to do in this essay on humor, culture and politics. Primarily the essay is addressed to artists and performers (and their audiences) who object to what they perceive as “politically correct” censoriousness. (One notable recent example: the comedians who’ve suggested that they won’t play college campuses because activists attempt to micromanage what they can and can’t say.)

Doyle uses the Glen Ridge rape case, particularly the relationship between an infamous lyric in a Beastie Boys’ song and the actions of one of the rapists, to offer an olive branch to artists and performers. I’m compressing a long and careful development of the argument of the piece, but fundamentally the analysis goes like this: activists know that the artists are “good people”, but if so, when you find out that the content of your expressive work is in the heads of “bad people” or is associated with “bad actions”, you should want to avoid that content in the future. Doyle couches this almost as a secular concern for the souls of artists and performers: “it must be one of the worst feelings in the world”, to discover that something you sang or joked or wrote or painted has been cited by or admired by a person who associates that cultural work with their own commission of evil.

The essay is very careful in the early going to avoid simplistic claims about causality. The content of expressive culture doesn’t cause bad actions to happen, Doyle initially acknowledges. The lyric didn’t cause the rape, it just informed it, gave it substance, suggested its horrific specificities. But by the end of the essay, that’s no longer the case: bad culture not only causes harm to the feelings or subjectivities of some who encounter it, but we’re back to the content of culture causing people to have explicit thoughts, thoughts that have tangible ideological intent to discriminate or harm. (A “man who believes all black people are criminals is going to shoot an unarmed black man”.) I think here Doyle demonstrates what has become a characteristic view of a lot of current identity-based activism: that discrimination, oppression and racism originate from the hidden interiority of individuals, that “bad action” is located in “bad thinking” and “bad personhood”, that bad thinking has a kind of explicit propositional character, and that this bad propositional content is a concentrated, distilled form of everyday language and representation. By the end of the essay, Doyle isn’t worrying about whether that shooter has a song lyric playing in his head when he shoots, but whether the song lyric got him to shoot when he wouldn’t have otherwise done so.

So by this point, the olive branch is this: if you don’t want to be the person who causes someone to do evil, then listen to us when we tell you that what you just said or performed or visualized is going to cause someone to do evil. Because, Doyle says, we know (we think we know) that you aren’t evil. It is almost a doppelganger of the debate on guns: comedians and others are portrayed as if they believe jokes don’t hurt people, people hurt people; Doyle is offering them the chance to think that it’s just jokes, jokes or art or culture as a technology that is separable from the personhood of its maker. You, she argues, can know that “some people are flammable” and you, she argues, can “be careful about where the spark lands”.


Put in this fashion, this is another round in a venerable debate about the responsibility of artists for the consequences of their art. A thought which, I have to confess, first fills me with a certain degree of professorial and middle-aged weariness. It is not that I want citations galore, but I do wish we could get some degree of acknowledgement that this is an ongoing conversation where many good points and difficult experiences have already been had. This does not mean it is impossible to come to new understandings, to move ahead, and every generation also has to undertake its own encounter with fundamental human problems. But just knowing that you are not the first to think these things tends to moderate the degree to which you speak as a missionary might speak to a heathen, as if you’re delivering a message that up to this point has never been heard. That’s especially important if you mean to offer an olive branch. We don’t hate you as a person, we just hate your jokes! is an easier message for a comedian to take, I am guessing, if it is offered as the latest modest turn of a familiar dilemma.

But this point opens up into another landscape of difficulty for this kind of argument. First, Doyle’s approach strikes me as a fairly typical example of the way that current activism has amended a postmodern approach to interpretation and hermeneutics, I think in some ways without knowing that something’s been left out. Roland Barthes announced the “death of the Author”, which, to simplify somewhat, meant in his thinking and in much other postmodernist or poststructuralist theory that to understand what a text meant had little to nothing to do with discovering what the producer of that text thought that it meant. For all sorts of reasons: the producer was no longer understood to be a masterful individual agent in control of their own consciousness and intention; power and culture and institutions and history all radiated through the Author like light shining through a prism and thus spoke within whatever the Author produced. But also: the audience, the reader, the viewer, determined what the text meant, and determined that within the circumstances of a single moment of interpretation. It could mean one thing today and another thing tomorrow even to the same person, it could mean one thing before it was used or cited or deployed and another thing after it was used, it could mean two things at once or ten things, it could mean nothing fixed or determinate at all. The text could be paired with another text and change meaning; it could mean something different in a library or a bookstore or read aloud on a tape; it could mean one thing if it was held by a preacher and thrown into a fire and another thing if it was read lovingly to a child in front of a fireplace. I caricature a bit: postmodern approaches to interpretation did not hold, as they are often accused of holding, that texts meant everything or nothing, that signifiers floated utterly free.
But it was important in this style to say that meaning was a very large, messy and protean space even for the most seemingly banal or straightforward texts, that context mattered as much as text, and that to say a certain work always meant one thing, no matter where it was or who was reading it, was a kind of folly.

The postmodern emphasis on language preceding and shaping thought and thought shaping action is intact in this new activist stance, but not the indeterminacy and multiplicity of meaning. And the Author has been brought forth from his grave, but not entirely to a new life. Doyle, like many, argues that the meaning of culture is often quite determinate, and that it should be determined not by an act of discerning interpretation but in relationship to a set of social subjects. That is, meaning still resides with the audience and with usage and context, but only some audiences and some contexts. Only two audiences have authority to make meaning, in this view: the people who use expressive culture deliberately as a weapon and the people who are wounded by that weapon. The Author is being forgiven here: the Author does not wound. The Author is only the blacksmith who makes the sword on an anvil. Whether the sword is wielded by the righteous or the wicked, or left above the mantelpiece, is not the Author’s will–unless he deliberately peddles it to the wicked.

Anyone else who claims, however, to see the sword as a spit for grilling meat, or as a fashion accessory, or as a demonstration of metallurgical skill, or as a symbol of aristocratic nostalgia, or as a visual stimulus for writing fantasy novels, or as one of a class of crafted objects, etc., is being ruled out of bounds. Those other meanings and interpretations are unavailable if there is someone somewhere who has been wounded.

Let me try to make the problem more concrete and responsive to Doyle’s argument. Doyle focuses on the documented presence of a lyric about sexual assault with a baseball bat in the thinking of a young man who sexually assaulted a woman with a baseball bat. The first problem with that focus is, “What do we do about the presence of that lyric in the minds of so many who never did anything of the sort?” This point needs to be made carefully, because lurking behind it is the callowness and stupidity of slogans like “All Lives Matter”.

This is a genuine mystery if the argument is made that words and texts and performances do have (or can have) a singular meaning and do reliably serve as the predicate of bad thinking, bad personhood, and bad action. When media critics predict, as they have for decades, that the representation of violence in media will create violent people and violent action in some sort of rough tandem (the more of the first, the more of the latter), and that doesn’t happen (it didn’t happen), the initial assertion that the representation of violence has a fixed meaning and a fixed relationship to self-fashioning is just plain wrong. What it means instead is that if there’s more violence represented and less violent action, then many people consuming that violent media are interpreting it and understanding it in ways that don’t actually incline them mimetically towards what they’ve seen, towards enactment. It means, well, that lots of things are happening when that media is consumed, and not just lots of things across the whole society, but lots of things in every single person.

When I’ve gotten into debates over the years with violence-in-media activists, one of the responses I often hear is, “Well, we’re not concerned with what well-educated, economically comfortable people in stable homes think when they watch violent media, we’re concerned about it as a contributing factor to violence in impoverished, marginalized and unstable homes”. At which point, my response is that “violent media” is being used as a substitute and alibi for poverty, inequality and injustice. It’s being made to stand in for the whole because the whole is perceived as too big and too difficult to attack. If that move were really about a strategic subdivision of a complex problem into small and manageable ones, it might be ok, though even there the whole point of thinking strategically is to prioritize, and violent media’s negligible and difficult-to-demonstrate contribution to violent action should be a low priority even in that context. But the problem is that small and manageable tasks should require small and manageable contributions of labor. Trying to cleanse the culture of violent video games or shows–or to get comedians to stop telling offensive jokes–is not a small or manageable task. So what happens is that the strategy swallows the whole; the small task comes to stand in for the entirety of the problem. Violent media become the way that one set of critics talk about poverty, and so they stop naming poverty for what it is. The enormity of structure disappears from view and becomes equivalent to the manageable choice of what to watch or play that night, or how to film a particular scene. In making a big problem open to our agency as individuals, we flatter ourselves too much. It’s as much an entrepreneurial or self-promoting move as it is a practical one.

Let me raise one last thought to trouble Doyle’s point. People who do evil sometimes leave in their wake considerable evidence about what they were watching, what they were listening to, what they liked and identified with in culture. The story is often told, for example, that Richard Nixon watched “Patton” and was profoundly influenced by it in his decision to illegally invade Cambodia. This story is often compared with the fact that the same film supposedly played a key role in getting the Israeli and Egyptian delegations to agree to the Camp David accords. Same film, seen in very different ways by different individuals and in different contexts. Score one for postmodernism, or maybe just old-fashioned critical analysis. It’s fair to say, “If someone tells you they were hurt by a joke, you should listen”. If there’s a fire where your sparks fell, pay attention. But it’s equally fair to say, “If someone else grabs the spark and builds a warming campfire with it, or cooks a meal over it, or makes a light from it, take note of that.” And equally fair to note that lightning starts fires too–and strikes in ways that no one expects.

After all, in the wake of some of the evil things that people have done, the archives of culture they leave behind often contain texts and songs and performances and images that none of us would intuitively see as a predicate of that evil. Much as I find clowns scary, I would not say that John Wayne Gacy’s obsession with clowns can be predictably “read out” from the art of clowning, nor that clowns ought to take their makeup off as a result. Murderers, rapists, bigots: populate the rogues’ gallery as you will, and you will find that what they viewed and heard and read are often not at all obviously tied to their actions. If you understand social evil as originating from bad thought and bad language and bad culture, and you keep finding that the inventory of social evil’s cultural world is brimming over with much more than you expected, you either have to decide that your understanding of the relationship between representation and action is too simplistic or that there is far more that artists and writers and comedians should have to be responsible for not painting or writing or saying. I think that’s the prospect that makes Patton Oswalt angry and other comedians afraid.

But there’s another mirroring complexity worth respecting: that the inventory of people who have fought for social justice–or who have suffered social injustice–is often also more capacious and contradictory than you’d expect if you think there’s a close relationship between social action and cultural consumption. That people suffering oppression sometimes see meaning and possibility even in texts that are very literally dedicated to that oppression, that the richness and indeterminacy of meaning flows in many directions.

The unpredictability of meaning, in so many different ways, suggests that our first and last response to it should be humility, should be a kind of principled uncertainty about what we think a joke will mean, can mean, has meant. Which is an uncertainty that should afflict comedian and critic alike. You might indeed be showering sparks on flammable people, or even calling down the lightning in an open field. But equally what looks like a spark might be a light in the darkness, or a warming memory of a distant flame. We should not manage that uncertainty by requiring everyone to perform and listen while covered in fire-retardant foam.

Posted in Politics, Popular Culture | 5 Comments

Is There a Desert or a Garden Underneath the Kudzu of Nuance?

I like this essay by Kieran Healy a lot, even though I am probably the kind of person who habitually calls for nuance. What it helps me to understand is what I am doing when I make that nearly instinctive move. I suppose in part I am doing what E.P. Thompson did in writing against theory as abstraction: believing that the important things to understand about human life are always descriptive, always in the details, always in what is (or was) lived, real, and tangible. There are days when I would find the truths in a novel or a deep work of narrative journalism more persuasive, both as scholar and person, than those in social theory. But it is stupid to act as if one can be a microhistorian in a naive and unstructured fashion: there’s tons of theory in there somewhere, from the selection of the stories that we find worth our time to what we choose to represent them as saying. I do not read about human beings and then insist that the only thing I can do is just read to you what I read. I describe, I compress, I abstract. That’s what Kieran argues theory is, and what the demand for “nuance” prevents us from doing in a conscious and creative way.

I suppose I lately have a theory of theory, which is that it is usually a prelude to doing something to human beings wherein the abstractions that make theory ‘good to think’ will become round holes through which real human square pegs are to be pounded. But this is in some sense no better (or worse) than any other abstraction–to really stick to my preferences, I should take every theory (and its application or lack thereof) on its particulars.

I also think that there is something of a puzzle that Kieran works around in the piece, most clearly in his discussion of aesthetics. (Hopefully this is not an objection about the need for nuance by some other name.) But it is this: on what grounds should we prefer a given body of theory if not for its descriptive power? Because that’s what causes the kudzu of nuance to grow so fast and thoroughly: academics read each other’s work evaluatively, even antagonistically. What are we to value between theories if not their descriptive accuracy? (If that’s what we are to value, that will fertilize the kudzu, because that’s what leads to ‘your theory ignores…’ and ‘your theory is missing…’) We could value the usefulness of theory: the number of circumstances to which it can apply. Or the ease-of-use of theory: its memorability, its simplicity, its familiarity. Or the generativity of theory, tested by the number of people who actually do use it, the amount of work that is catalyzed by it.

The problem with any of those is that I don’t know that they leave me with much when I don’t like a theory. Rational choice/homo economicus fits all of them: it is universal in scope, it’s relatively easy to remember and apply as a way to read many, many episodes and phenomena, and it has been hugely generative. I don’t like it because, for one, I think it isn’t true. Why do I think that? Because I don’t think it fits the actual detailed evidence of actual human life in any actually existing human society. Or the actual evidence of how human cognition operates. But I also don’t like it because of what is done in the name of such theory. That would always have to be a post-facto kind of judgment, though, if I were prohibited from complaining about the mismatch between a theory and the reality of human life, or it would have to be ad hominem: do I dislike or mistrust the politics of the theorists?

I think this is why we so often fall back into the kudzu of nuance, because if we clear away the overgrowth, we will face one another naked and undisguised. We’d either have to say, “I find your theory (and perhaps you) aesthetically unpleasing or annoying” or “I don’t like the politics of your theory (and perhaps you) and so to war we will go”. The kudzu of nuance may be ugly and confusing, but it at least lets us continue to talk at and past one another without arriving at a moment of stark incommensurability.

Posted in Academia, Generalist's Work, Oh Not Again He's Going to Tell Us It's a Complex System | 1 Comment

Don’t Panic! Leave That to the Experts

In many massively multiplayer online games (MMOGs), players who are heavily invested in the game (sometimes just in terms of time, but occasionally both time and money) often group together in organized collaborations, usually called guilds.

Guilds pool resources and tightly schedule and organize the activities of the members. This is typically a huge advantage in MMOGs, where many players either work together only temporarily with strangers, play completely by themselves, or belong to guilds that only offer weak or fitful organization. Many MMOGs tune the gameplay so that the most difficult challenges require this level of elite coordination. The rewards for overcoming these challenges typically have an accumulative effect, allowing the elites to overcome still more difficult challenges and to easily defeat other players in direct combat or competition. The virtual goods and powers obtained through elite coordination visually distinguish the members of these guilds when their characters are seen within the public spaces of the gameworld.

As in any status hierarchy, these advantages are only meaningful if the vast majority of participants do not and cannot obtain the same rewards. So the elite guilds in some sense have a very strong incentive to keep everyone else around. A gameworld abandoned by everyone but the elite stops being fun even for them. This is especially acute when the collaboration within a heavily invested group of elite players extends to keeping their advantage over others through pooling insider knowledge about the game systems, or even to protecting knowledge about a bug or flaw in the game systems which can be potentially exploited by everyone.

To give an example: one of the “virtual world” MMOGs that I spent considerable time studying a few years ago was called Star Wars Galaxies. It was a notable turning point in the history of game design in many ways, most of them not particularly happy, but it did give players a very significant amount of control over the gameworld, and it had a vigorous design infrastructure for allowing players to compete with each other within a virtual economy. Players could produce a wide variety of items for other players, and the very best of these, in terms of power and utility, were rare, difficult to make and worth a good deal of money, especially very early in the history of the game. In order to produce the best items, a player had to spend an immense amount of time making inferior items and incrementally increasing their skills.

But early in the game there was a bug. If you knew about it, you could gain a huge amount of incremental skill increase in a very compressed amount of time. So almost immediately after the game went live, there were a small number of players who could make the very best items that conferred enormous power on the owners of those items, literally weeks before it was even possible for anyone to have gained that level of skill. Naturally the wealth they accumulated was equally disproportionate, and that advantage remained permanent, because the developers chose not to strip away that benefit after fixing the bug. By the time everyone else caught up, the early exploiters–who had shared the secret with each other but not everyone else–were essentially a permanent class of plutocrats.

It keeps happening in such games. There’s almost no point to being a new player in games like DayZ or Ark, for example, unless you’re playing on a small server with a group of trusted friends. Even if there were no hacks or exploits, the established players have such enormous advantages that any new player will find again and again that whatever time they invest in gathering resources and making weapons and shelters will be stolen by elite groups of established players. But the established players have a problem too: they need a large group of victims to invest in the game. That’s where the easiest source of wealth is for them: much better to have a hundred newbies labor for two days and to steal what they’ve made in five minutes than it is to directly compete with an equally elite and invested group of rivals. So they need to talk up how fun the game is, to establish it as a phenomenon, maybe even sometimes to show selective mercy, to offer newbies a kind of protection-racket breathing space, to treat them like an exhaustible resource. (Not for nothing do players sometimes speak of “farming” another player as well as some aspect of the gameworld.)

Why is this on my mind today? Well, for one, I’ve been working off and on over the summer on trying to write about virtual worlds. But for another, I can’t help but think about the analogies I see between these experiences and the stock market.

In the middle of a sharp downturn like this one, there are expert investors who come on the radio, the television, the Internet. “Don’t panic,” they say. “Don’t sell. You’re in it for the long haul! That’s what the experts do.”

These appearances also offer many earnest attempts to explain the underlying reasons for the downturn. “It’s China!” “It’s the emerging markets!” “It’s the price of oil dropping!” “It’s the Fed raising rates!” Some of this frantic offering of explanation seems to me to have the same reassuring intent as “Don’t panic”. It is an attempt to rationalize the change, to relate it to something real in the world. In some cases, this offers the investor (small or large) an opportunity to calculate their own risk. “Ah, it’s China. Well, China’s government will find a way to fix it, I would guess.” “Oh, it’s the emerging markets! I always thought those were fishy, I think I’ll reduce my exposure.”

In some cases, I think these explanations are a form of pressure–even blackmail–directed against governments. “Don’t raise the rates, Fed, we like that easy money–so if you do, you’ll ‘shake investor confidence’ even more, and you wouldn’t want that, would you?” We saw that back in 2008, after all: it is the logic of “too big to fail”. Do this, don’t do that, or we’ll pundit the shit out of the investment economy and create a real panic.

Scattered amid the explanations are also some earnest attempts to argue that there is no explanation, to treat the market as a naturalistic object whose behavior is beyond human agency and not well understood by human science. “We’re still not sure about dark matter, and we’re still not sure why the stock market did that.” This too is a kind of reassurance, and often is followed by the reminder not to panic. “It’ll go up again, it just does that, don’t worry.” I think there’s something to that: the 21st-century market is a cybernetic mass brain that thinks in strange ways and reacts at speeds that we have never lived with before.

What I darkly fear is what I think might be said but never is. After all, the experts say, “Don’t panic! Don’t sell. You’re in it for the long haul,” but some of them panicked, or at least their high-frequency trading computers did. Sure, maybe someone else’s Skynet is buying it all, but this wouldn’t happen if it were just Mom and Pop investors getting nervous about China. And I think to myself, “This is like a guild that’s discovered a bug.” They need everyone to stay in so that they can farm them some more. They need to herd the cattle down the soothing Temple Grandin-style chutes. What I fear is that some of the explanation is neither “There is a rational thing that is causing this all” nor “This is something so complex that it just does things now and again that no one understands”, but instead “We have trouble here in River City”.

The problem is that in 1987 or in 2001 the expert could also say, “If you’re afraid, then after the next rally, move your money into a safe harbor, stay out of the market.” There is no staying out any longer. That’s the other thing that’s changed because of income inequality, because of the way the elite guilds have changed the game. Nothing’s really safe as an investment. Nothing’s really safe as a life or a career. Our institutions (and even our government, especially when it pays pensions) are part of the asset class now. If you just earn a salary and work hard, your income and prospects have gotten steadily worse in the last three decades: the investment economy isn’t just a nice hedge against the worst now, it’s the only way to stay in the middle class.

This is what elite guilds in games would do if they could: require you to play the game.

Too bad if you don’t like spending two days training a velociraptor and building a shelter in Ark only to find that when you logged off for dinner, a couple of elite hackers took your dinosaur, destroyed your shelter and locked your naked body in a cage. (You think I’m kidding, but I’m not.)

Posted in Politics | 4 Comments