Procrustes’ Market

My knee jerks pretty hard when I read an argument that because a service or institution does not behave like a perfect model market in economics, it must be changed until it conforms to the model. There is wreckage strewn all around the world from attempts to lop off limbs and contort reality in order to make non-model markets fit into a Procrustean bed.

So this is part of my problem with Dylan Matthews’ latest analysis of the cost of higher education: it is another example of a think-tanky Beltway writer with the generic hammer of neoclassical economics running around pounding down a world full of nails. But there are important and legitimate points made along the way.

For one, it’s true that education fits what economists call an “experience good”, though that’s a little like saying that life is an “experience good” and if you only had more information you’d make proper consumer decisions about living from seven to seventy. Life–and learning–aren’t just commodities that should be evaluated in some abstracted way by a universal rationality, even though life–and learning–have a big price tag. Even if you had all the information in the world, you couldn’t make the correct decision.

I point this out often to students when we’re discussing which class to take, or to prospectives who are thinking about applying to Swarthmore. There is a lot of information that you could acquire about courses or about colleges that you could reasonably use to assemble a decision matrix. What size is the class or the college? Do you have a good reason for thinking that you flourish in small or large classes or institutions? What do you think you need in terms of knowledge or training? What kinds of environments and teaching styles do you enjoy or find stimulating? And so on–this is often information you could have, and sometimes, I agree, information that is hard to come by and shouldn’t be so hard to get.

But then think on all the things that make a difference in a class or a university that you cannot possibly know about no matter how much information you have. The friends you will make. The people you will love. The mentors who will strike a chord with you. The class that will surprise you and change your views of everything. The chance experience you have that will transform you. You can find an environment that is rich in people, in time, in resources, in the unexpected (and some colleges and classes are impoverished in all or most of those). But you can’t determine any of the specifics with all the information in the world and yet it is these specifics that create the most “added value”. Perhaps even more importantly, it’s not the person who just had the experience who will value the commodity most (or rue it most dearly), it’s the person you will be in the years to come. That person is not you in so many ways: you are today very very bad at predicting what that person needed or wanted and you always will be bad at it. If we could sue our younger selves, many of us probably would.

So the people who look and say, “Oh, just make sure there’s more information and people will make the choices that economists think they ought to make” are doomed to disappointment. Which would be fine if they would consent to just being disappointed, but policy wonks tend to think that when the outcome that the models predicted doesn’t happen, the answer is to make people behave like the models said they would.

Matthews also indulges the “more administrators equals more budgetary bloat” argument straight out of Benjamin Ginsberg’s Fall of the Faculty. Again, there’s something to it: large universities especially have added layer upon layer of administrative hierarchy and organizational complexity to little perceptible gain. But the way that Ginsberg lays it out, it’s a Manichean drama with the noble, voluntarily parsimonious faculty on one side and the sinister imperial administrators on the other, with the administrators grabbing territory for no reason other than “because that’s what they do”.

Go deeper and there are several things at work. The first is way more important than Ginsberg (or Matthews) credits: the massive growth in “external mandates”, e.g., regulations and legal obligations that have been slapped onto universities and colleges (both public and private) without any attention to the costs of compliance. This is one of the few areas where I’m fairly sympathetic to a common conservative refrain about businesses: the desired outcomes of regulation are often very important, but it’s equally important not to act as if they are free. Equally important are “internal mandates”: new administrative staff who are brought on board to answer a demand made by faculty and students. It really grates on me when someone like Ginsberg complains about administrative bloat without an audit of the role of faculty themselves in creating the need for many administrative positions. Not infrequently, when I’m talking with a colleague who complains that there are too many administrators (here specifically or generically in academia), if I ask, “So which positions do you have in mind, specifically?”, I get a lot of vague hand-waving. Or I get a named position and then I ask who is going to do the work that the person doing that work now does, and it’s usually, “Oh, someone else on staff will just have to do more work”, which is not exactly what you’d call grand solidarity between laboring people. Very rarely can a critic name an overall kind of administrative work that simply should not be done at all, and usually when they can they’re just being stupid or self-destructive.

Though in the end this is the kind of thinking that maybe we do need, and not just about institutional or instructional support, but about instruction itself. The basic thrust of Matthews’ analysis is fair enough: until they have to, institutions of all kinds (including companies) do not often think about what they shouldn’t be doing, or what they don’t need to do. And when they have to it’s usually too late to make those decisions judiciously and ethically. You want that a faculty and an administration and students should work and live in a progressively more focused or generative way all the time, to operate sustainably (with all that implies, environmentally and otherwise). Living sustainably means at least thinking about costs, and not thinking about costs (or making other people think about them) makes a savage, degrading reckoning with reality an inevitability.

But mindfulness of cost does not mean an abstemious, starveling existence. It doesn’t mean lopping off your feet so you don’t have to worry about the expense of boots any longer. Education, life and other “experience goods” are partly valued for their unpredictable excesses, their moments of giddy and unexpected sensation. A grey, spare, brutalist education in which one always received the training that was expected in order to service the need that was defined would impoverish more than professors and administrators.

Posted in Academia, Politics | 4 Comments

Suez Has Already Happened

The problem with “turning points”, as they’re commonly described in popular forms of historical storytelling, is that very few of them were recognized clearly as such at the time.

When I’m in the archives reading past individuals writing to and talking to one another, I’m often struck by how much importance they accord to events and news that we now consider to be either a trivial sideshow or a small part of some significant general trend. Sometimes those same people treat what we now consider to be an absolutely crucial moment as if it were merely one more event of note: that is, they often know that it matters, but it’s not as if they are changed in a flash.

When you teach and study the history of the post-1945 world, particularly if you’re a specialist who focuses on a part of the world that had previously been under European imperial rule or you focus on the Cold War, the Suez Crisis of 1956-1957 is one of the absolutely unquestioned major “turning points” that you draw attention to. Certainly any adult with even a moderate interest in global affairs who lived through that time, particularly in England, France or the United States, remembers it. People at the time knew it was important, and for many British and French citizens it was a key turning point in their views of empire, the Cold War, the United States and the non-Western world. But if you read the correspondence of officials who were not absolutely in the heart of the storm at that exact moment, they sometimes just lumped Suez in with lots of other developments or felt it would be resolved in a more ordinary, expected way that was largely continuous with the prior history of empire. It was only five or ten years out that everyone could see clearly that Suez was the moment where the changed relation between Western Europe and the U.S. was finalized, where the terms of the Cold War in relation to the Non-Aligned Movement were really defined, where the end of empire became inevitable, where the blighted politics and military doctrines of Middle East rivalries congealed. It was quickly clear within the UK that Anthony Eden had been destroyed politically by the resolution of the crisis, but the intensity of the stench of failure and miscalculation around his decision-making has become far more pronounced in the decades since.

I’m raising this point because right now it is becoming clear that the post-imperial moment for the United States is not in some relatively imminent future but has already come and gone. It’s becoming clear that the Iraq War, contrary to the dearest wishes of its most lunatic devotees, was the Suez of the Pax Americana, that moment that comes in the life of most empires, however they’re configured, where they are goaded into a florid, expensive attempt to secure a distant frontier and end up proving only that the core no longer has and never will again have the resources or reputation to succeed in such attempts.

The brash neoconservative ideologues who planned and enacted the war never believed in “soft power” and so never realized the importance of reputation and sincerity, never realized the subtle benefits of international legitimacy, never realized that it actually mattered that the United States at least try to define and adhere to a more moral standard of conduct than its rivals and adversaries. Never realized that if you pull a Reichstag fire with your intelligence and send the guy who people trust the most out to tell lies and exaggerations, you will never be trusted again even when you’re telling the truth.

So they threw all of that soft power overboard in a way that makes it impossible to accumulate more of it, not that the current Administration is making much effort even after receiving, from the Nobel Prize Committee back in 2009, an overearnest plea to go back to being the America the world used to know.

Equally, the Iraq War’s planners managed to demonstrate what the upper limit of American military and economic resources was and where the political tolerance of a population much more eager to appear supportive of its military would end. Much as Suez made it clear that the UK and France would never again be able to control their empires if any territory or people refused to bow to “hegemony on a shoestring”. In both moments, all the old diplomatic and military magic tricks were exposed for the shabby, threadbare things that they had become. The nitty-gritty revelations of the Wikileaks cables about how American officials struggled to gain even minute advantages in a hundred countries around the globe were already visible for anyone who cared to see in the straining inability of the United States to even decide what it wanted in Iraq, let alone achieve its aims.

The principal difference is that the American public and its punditry don’t know that the end of American dominance has already happened, let alone acknowledge who is to blame for it. Eden’s disgrace began immediately, whereas only the people who opposed the Iraq War in the first place so far seem to hold its planners fully accountable for the permanent damage it caused. So our talking heads continue to talk about what we should do about Egypt, or how we should dictate terms to Syria, and so on. Really that kind of talk was always a mistake, both morally and empirically, but now it’s a delusion.

The sad part of all this is that when the people of Aleppo cry out to the world and ask why the world has failed them, they are really crying out to the imaginary “world” of the Pax Americana, which even if it never lived up to its billing, was at least a world where it seemed somehow possible that something would be done and something was doable. The “world” to which the burned and dying of Aleppo now cry out is a world that has never cared for them. It’s a world that is ruled by political elites who either look at such an atrocity and calculate when they themselves might have need to order such actions or who avert their gaze and hope that somehow there’s a ten-point policy framework dreamed up by a gang of think-tank eunuchs that clearly spells out what the next six years of multilateral summits should talk about in order to determine what the following six years of summits should talk about and oh dear it’s terribly complex.

They’re not wrong: it is terribly complex, and yet it is also simple. Somebody flew a plane, someone dropped an awful weapon that should never exist except in nightmares, and the flesh of people who just want to go about their business melted. It’s terribly complex in the sense that there are people with political power in Syria who don’t seem to care much about Syrians, who have no vision about power except to keep it. They don’t pretend any longer to have ideology or purpose. It almost seems beside the point to call yourself the Democratic Republic of Bullshitistan and stage photo opportunities to prove you are the Beloved Leader. North Korea never fails to amuse for this reason: it’s like the Renaissance Faire of postwar politics, re-enacting an era when regimes cared enough to create elaborate lies about their benevolence. Now you just go ahead and kill people or promise that if your rebellion wins you’ll kill the other kind of people or at least cut a few hands off here and there. Or you go from being the regime that shelters dissidents to angrily chasing down your own, from being the regime that opposes torture to being the regime that normalizes it.

There isn’t even a fairy tale any more to believe in, nor a sovereign to petition. If you were with the dying in Aleppo and you cried out in anger that the world has failed you, you’re only wrong about one thing: there’s no ‘world’ left to have failed. The only hope is in you yourselves, and you’ve already given everything you had to give and more.

Maybe there will be some world that we can imagine doing something, once again starting a halting walk towards global progress, some future day after the United States gets over being over. You can be over and come out the other side truer and better to the deep histories and values of your people and nation. “I pass the test. I will diminish, and go into the West, and remain Galadriel.” But empires that don’t know they’re not empires any more have often managed to thrash their phantom limbs hard enough to either cause enormous suffering (the humiliation of Suez was avenged in part on the people of Kenya during the Mau Mau Rebellion and on the people of Algeria as that war ground to a close) or lead to farces like the Falklands War.

Posted in Politics | 12 Comments

The Method

Obama’s new education policy neatly showcases the spectrum of choice we now have in our political system: to be ground down a bit at a time by technocrats who either won’t admit to or do not understand the ultimate consequences of the policy infrastructures they so busily construct or to be demolished by fundamentalists who want to dissolve the modern nation-state into a panoptic enforcer of their privileged morality, a massive security and military colossus and an enfeebled social actor that occasionally says nice things about how it would be nice if no one died from tainted food and everyone had a chance to get an education but hey, that’s why you have lawyers and businesses.

A lot of observers have been quick to point out that the Obama Administration’s announced policy on higher education is a Race to the Bottom that will make rich institutions richer and starve out the rest. There’s really no other plausible outcome when you say, “The institutions with the best results get access to financial aid for students, and the ones with the worst results can go suck a bag of dicks”, at least if you aren’t also providing some kind of aid or resource targeted specifically at weak institutions that’s intended to make them stronger or more successful. I also think it’s clearly intended to kill online for-profit education without the Administration having to say so directly by denying them access to the Pell Grants that are so crucial to their bottom line.

There are other collateral consequences to the policy that will probably come to pass even if it’s softened or modified in many respects (which, at least as the for-profits go, is almost certainly going to happen given their numerous powerful protectors in Washington). For example, the policy creates a whole new category of data that every college and university is going to have to track, that they literally can’t afford to not track, and that’s employment outcomes for all future graduates. I’m sure administrations across the country are sighing and beginning to think about how they would track such data systematically rather than anecdotally, and the answer has to involve more personnel or personnel with new and more expensive skill sets, because that’s a hard kind of data to come by without a major continuing effort to collect it. So in an initiative that’s supposedly about making institutions more cost-conscious, there are plenty of new costs being generated.

The deeper question really is, “Why do technocrats insist on indirect, stealthy approaches to objectives they refuse to declare directly? Why not talk more clearly about both the philosophy and the goals behind a major public policy initiative?” And deeper still is, “Why do technocrats still not understand after all this time that it is precisely their indirection that continuously fuels American popular skepticism about ‘government’ and robs them of the political goodwill of many people who actually support the values or objectives that can be dimly discerned inside of the Rube Goldberg machine of some overcomplex policy apparatus that is being dragged through the tortuous rounds of our highly dysfunctional bureaucracies and legislature?”

Why, for one example, struggle so hard to craft the ACA and protect it from political backlash, why make legislation which could so easily be painted as a labyrinthine mess of contradictions and confusion because it is a labyrinthine mess of contradictions and confusion, when there was ample evidence that a solid majority of American voters would support a simple strong regime of mandatory cost controls and something rather like a single-payer system?

Why, for the present example, not just say, “For-profit education should make its profits off of the services it provides being valued sufficiently by its customers, not off of public monies intended to help needy students get access to non-profit education.” Why not just say, “There are colleges and universities out there which do not meet minimal standards of quality (graduation rates or any other metric) and we believe they should close their doors.” Or if you actually feel like being a visionary more than an executioner, why not say, “Look, here’s a better idea for what mass higher education should be–it shouldn’t be in a race to compete with elite private universities–and the federal government is going to put a lot of money behind building mass higher education 2.0 as an alternative that serves the needs of the majority of Americans seeking education beyond K-12; this is going to be the new G.I. Bill for the 21st Century.”

As always, Matthew Yglesias is a great portrait of what happens when a well-meaning kid with a good education settles down to become a technocratic barnacle on some encrusted rock. You get a clear picture from reading him of what technocrats think they’re doing, and why the concept of the incentive, disembedded out of economics, has become the technocrat’s version of the Nicene Creed. Incentive, in their world, is a compressed way of saying, “I am smarter than you are, but you unfortunately have just enough power to get in my way if I try to do what I think is best for you, so I’m going to try to trick you into doing what I think you should do.” Incentive is also the sound of a dumb chortle from someone who thinks he’s just gotten a free lunch. Not only does the magic of incentive get the people who didn’t marinate in the think-tank juices to do what their betters deem they ought, it’s a way to make them pay for doing it. Incentive is also a promise to the powerful and the interested that there will be a way to let them out of anything they really don’t like, as long as they’re willing to pay a modest premium to opt out of obligations that others can’t escape.

So the idea with higher education is, “I can get rid of for-profits and I can get rid of shitty fifth-tier colleges and universities and I don’t have to take the political heat for doing either. And I don’t have to actually say what I think mass higher education should be if not an expensive imitation of what elite selective education should be, because ‘wisdom of crowds’ and all that, if we set the incentives right, that will emerge.” Technocrats live in the wonderland of the question marks in the Underpants Gnomes business model, endlessly fussing over the exact terms of Point #1 and certain that the Profit! of #3 will follow.

The rest of us also end up in the question marks, but in a rather different way. We end up with a question of just how long the slow decay of existing systems (many of them admittedly dysfunctional) will go on without anyone, technocrat or otherwise, having to deal with the fact that the needs that created those systems remain as acute as ever while the ability of our society to satisfy those needs is more and more deficient. Defunding ‘bad’ K-12 schools via a testing regime and ‘accountability’ has led either to organized cheating that makes everything worse or to the creation of a hodgepodge of semi-private charter schools which on balance are no better than the “bad” systems that they are meant to reform. Meaning we now know less about what the situation on the ground really is and we have even less clarity about what education is for and what the public stake in education might be. About all we’ve achieved is the dispersal of agency over bad outcomes: now no one is really responsible for, or accountable for, the direction of the system as a whole. The heads of little people get to roll with depressing regularity, while the Big Men and Women get to move on after a few years with a nice sparkling new item on their résumés. Which suits the technocrat just fine, because those are the clients they really want to provide service to.

None of what has happened in K-12 ‘reform’ even remotely attempts to struggle with the fundamental question: why do we need education for our children, and why is there a pressing public interest in supplying that education? Defund the schools in Philadelphia and the children of Philadelphia who can’t flee to private institutions or move with families to the suburbs are still there. Incentives can only push so much dust under the rug before the rug itself is mounded high to the ceiling of the room.

The same goes for higher education, or health care. I’d support the Administration with a whole heart if they said, “Look, if you want to make money educating people, that’s your goddamn problem: don’t build Pell Grants into your bottom line. We’re cutting you off.” And if they said, “Citizens shouldn’t have to pay a lot to go to a low-quality university that takes nine years to graduate from and provides almost nothing in return, any more than they should have to worry that the sausage they just bought in the market has fecal matter and the fingers of a factory worker in it.” Put some teeth into accreditation if you like at the boundary between the minimally acceptable and the unacceptable. There are institutions out there that should be out of business. There are institutions out there that the colleges and universities that do care should work harder to put out of business or that we should invest directly in improving, because their failures affect all of us.

But if the federal government wants that outcome or we want that outcome, then figure out an answer to the question, “So why do students and their families actually pay so much for a bad service?” The answer is a little bit, “Because there’s not enough information out there to make a good choice” and a lot, “Because they have to.” Why do they have to? Because there are too many people chasing too few jobs, and because employers are using credentials as a proxy for, “People who want the jobs bad enough that they might do the job well”. The more people who get the credentials, the more credentials employers demand, because the proxy is what makes an increasingly arbitrary process of selecting a candidate manageable. Which turns higher education into a kind of death march of debt and dysfunctionally pegs its content to whatever concretized credentials desperate middle managers think they need for jobs that really just take common sense, critical thought, energy and the ability to communicate.

You are not addressing that problem when you just lop off the bottom of the education marketplace with policy mechanisms that clumsily attempt to hide that as their intent. You are not speaking to the real issues or the real failures and you’re not providing either moral guidance or technical insight into how those issues might be solved or those failures made into successes. You are not helping at all when you officially define higher education as being a jobs training program but refuse to cop to that as a statement about values.

The technocrat imagines himself the captain of the S.S. Creative Destruction and in so doing argues in advance that if he happens to strike an iceberg, then he meant to do so all along. Really all he is doing is being a punk who breaks into the vacant home down the block and spray-paints a few walls. It may still be vacant but hey, it looks a bit better, right?

Posted in Academia, Politics | 11 Comments

Denial Is a River In Egypt

While I’m broadly in agreement with Adam Frank’s op-ed about the grave political and social costs of the current state of scientific literacy in America, there is something about the way that he comes at the issue that feels like it’s a part of the problem rather than the solution.

Frank follows a path through thinking about the state of science in American life that I encounter a lot at Swarthmore and institutions like it, especially but not exclusively among our students. The story is told as a tale of decline: once Americans trusted and valued science, and now increasingly they don’t, putting not just individual issues at risk but the entire practice of scientific inquiry.

In some measure that story is incontestably true. For me the most visceral gut-punch truth of it is to watch some of the old animated and documentary presentations that used to appear on Disney’s television showcase, which were unreservedly committed to the proposition that American modernity and prosperity were synonymous with science, and that was about as close to a consensus artifact as you could ask for.

Or was it? That’s the problem with the story that Frank tells. What he doesn’t know–and maybe none of us can be sure about–is whether all that’s changed is that the Americans who do not trust, value or practice scientific inquiry and knowledge are now politically or socially empowered in a way that they weren’t in the heyday of mid-20th Century high modernism. Did people listen to scientists in the 1950s because they had to, because science had a strong kind of authority within civic and political institutions that were themselves far less inhibited about imposing their general authority on the population? Did people who never really accepted or trusted scientific perspectives just decide to shut up and knuckle under in order to get their professional credentials or to be accepted in a much more conformist middle-class culture?

I’m inclined to think that in some sense denial and opposition to science is not a new social movement enabled by new political ideologies and forces but a sensibility with a much more continuous underground history that traces all the way back to the first third of the 20th Century, at least in the United States. Which, if true, has huge implications for saving what Frank calls “the tradition”: it means that this is less a matter of returning to a venerable and venerated way of living and more a question of doing a kind of work that was never done in the first place, which is rolling up some collective sleeves and making the case for science (and maybe other kinds of knowledge, academic and otherwise) within the terms of the everyday culture and social lives of Americans. It doesn’t mean conceding to those terms but it does require understanding them and sorting out the reasons and roots that have given rise to them. That’s a kind of work that I think many contemporary scientists and academics are profoundly unprepared to do–but if the tradition is to be a living one (in a way that perhaps it never was), that’s what is required.

Posted in Academia, Oh Not Again He's Going to Tell Us It's a Complex System, Politics, Popular Culture, Swarthmore | 7 Comments

A Clean Room for Colleges?

Collaboration is one of those things that everyone in higher education claims to want more of, in more ways, and yet almost no one ever gets around to pushing beyond calls for more. I keep feeling in particular that there’s a collaborative space that is largely empty at institutions like Swarthmore that seemingly have the resources to support new initiatives.

For my teaching and scholarship, I’m largely free to do as I please, including to pursue ad hoc partnerships with other authors or teachers. In the sciences, scholarship is almost definitionally collaborative at the level of a single lab or a partnership between labs; in the humanities it is typically individual but that is perhaps changing slowly. Co-teaching has a pretty significant institutional price tag so if I were to do it regularly that would likely raise some issues, but on an occasional basis it is available to many of the faculty. My department has a very open curriculum so I only need to do some basic diligence when I’m considering a new course regarding whether it overlaps with someone else’s existing course and whether it meets the broadly-constituted needs of my department. Other departments have more tightly designed majors and that can act as a constraint on the curricular creativity of individual faculty in those departments. In terms of course structure and pedagogical technique, I’m even more free to do as I see fit.

Collaboration between whole institutions is, as perhaps it should be, a complicated matter that requires a lot of high-level agreement and a good deal of legal and procedural formality, even though the need for such collaboration is increasingly clear to many presidents and senior administrators, say, around issues of publication and open-access.

Collaboration at the level of whole disciplines between scholars mostly doesn’t involve formally defined institutional mechanisms or governance structures, but it nevertheless shapes our professional lives a great deal–journals, peer reviews, grant committees, setting funding priorities for research.

At Swarthmore, with a very strong tradition of faculty governance, there is a culture of protracted deliberation around matters that involve the entire curriculum or around policies that directly impact the faculty, such as deciding which departments will be given permission to search and hire into a tenure line or setting general education requirements. This is also, for the most part, as it should be.

The empty space that bugs me lies between my autonomy over my own teaching and scholarship and the deliberative work of committees and the faculty as a whole. What bothers me is that there is almost no opportunity to collaborate in a relatively spontaneous way with other faculty as well as administrators or even students on projects or initiatives that involve institutional processes or structures that are smaller and more consciously transitional or temporary than “the entire curriculum”, “divisions”, “departments”.

I came across an article recently about a company where the CEO limits meetings to no more than three people unless there is some overwhelming reason for the meeting to be bigger. More people, he argued, just adds more reasons to say “no” to a good idea or useful innovation. I gather this is a bit of a mantra in the tech industry and maybe thus a bit of a fiction. But I like the concept, much as I like at least the imaginative vision behind the supposed “employee manual” of the software company Valve.

Now I readily understand that visions of small innovative groups and flat hierarchies can hide all sorts of inequalities and problems if adopted, as some of the follow-up reporting on Valve’s work culture has suggested. Changes that will affect everyone, or that will have a major financial impact on an institution and its faculty, should have to go through a more grinding and inclusive deliberative process: there is no alternative to that except direct top-down hierarchical control, which I think is the worst of all the options, at least for higher education.

I feel though as if there are ideas and projects that could operate in the ‘in-between’. Shorter-term curricular collaborations that aren’t about discipline or subject but about some pedagogical concept or technique, small groups of faculty collecting data or information about trends in higher education or their own institutions with the aim of producing a white paper, small ‘fairs’ where faculty teach each other a specific technique or skill, microbusiness ventures of various kinds, pop-up instances of instruction dealing with current events, a two-year study project on an issue, and so on. All of which could be considered “labor” for the institution, none of which need the heavy hand of a full process of faculty deliberation. A kind of space for institutional experiments, a sort of “venture capital”.

I know there are some universities that have something like this in place, and some small liberal-arts colleges have a few projects or programs out there that might fit this description as well. (Say, Bryn Mawr’s 360 initiative.) But I do feel that small institutions, despite having what seems like an intimate and informal scale that ought to allow for flexibility and invention, are paradoxically often more restrictive, more laborious in their deliberative processes, more prone to trip over small bureaucratic artifacts and the temperamental conservatism that hangs over committee work like a shroud. The ambition to create an institute for the liberal arts is attractive to me partly because I see it as potentially creating this sort of architectural breathing room in the structure of the college–a sort of ‘clean room’ where small groups of faculty, staff and students can tinker with their practices and aspirations without requiring a big budgetary investment up front or requiring extensive consultative efforts with the entirety of the community. But my feeling from talking to colleagues is that this kind of space is needed throughout higher education, and that this is the kind of initiative that really should come from faculty rather than be done to them in the name of some outsider’s vision of what constitutes innovation.

Posted in Academia, Swarthmore | Comments Off on A Clean Room for Colleges?

It’s a Confidential Thing, You Wouldn’t Understand

I had a heated conversation a few years back with someone I know whose work for the U.S. government routinely involves classified information. We weren’t talking about the content of that work, because this person takes classification very very seriously. We were talking about WikiLeaks and the documents they had acquired from Bradley Manning.

I was arguing that there is legitimate public interest in much of what Manning’s archive revealed. I said I was more interested in the bulk of the files than videos of combat in Iraq. In fact, I claimed, the much more fine-grained information should have been accessible to public review all along, that the work of the U.S. government should be comprehensively more transparent. After all, I observed, the net effect if you read through those files is actually reassuring: the competency and expertise of officials at State and Defense and other offices is generally evident, though there were also some depressing exceptions. Even more importantly, I suggested, these files make what our diplomats and attachés and observers know available to a wider global public who often have few other reliable sources of information about what’s happening in the world, or in their own countries. Much as I find that U.S. documents about events in Africa in the 1950s and 1960s are often surprisingly insightful or useful sources despite some prevailing misconceptions or limitations.

At this point in my little speech, my conversation partner heatedly interrupted me and said that most of what I’d said was hypocritical bullshit of one kind or another. “Would you,” I was asked, “be willing to have all your emails for the last ten years and all your memos be posted indiscriminately on the Internet just so people could gain a better understanding of what it is professors do, or how higher education works? Don’t people deserve to know more about that? Could they really feel confident they were seeing the truth if you just selectively published the emails that didn’t involve ‘confidential’ questions? Why should you get to hide all your conversations about students and grades and tenure and peer review and curriculum policy and strategic planning and the mistakes you made in your work while every embassy officer and State Department analyst gets everything they ever said about the countries and regions they’ve served in spewed all over the web so that every asshole who knows how to read a Wikipedia entry can rant about them on some bulletin board?”

Which I have to say shut me up for a bit, hard as that might be to believe.

Would I be willing? What is confidentiality or classification for? Do I and my colleagues misuse or overuse it as much as I think my government does? Do most organizations overuse it similarly? Do I think that at the least someday the emails of myself and all my colleagues should be available for some kind of historical scrutiny if not immediate transparent review?

——————

I’ve been thinking about this again while doing some archival work this summer. Reading back into the 1950s and 1960s, I feel confident that the labels “Confidential”, “Secret” and “Top Secret” were and still are seriously misused by the U.S. government and likely by most states that seriously aspire to be liberal democracies or republics of some kind or another. Quite aside from any philosophical argument about the relationship between openness and an informed citizenry, there are practical problems following from the overuse of classification and confidentiality that are often evident in older documents (and in the Manning archive): closed informational loops often leave out a participant with vital information to add to the discussion, participants in a decision-making process often themselves remark about how confused they are about what is and is not restricted or secret information and therefore often shut down a generative conversation just as it is producing useful insights, information piles up because there are far too few people with appropriate levels of access to handle it. And of course the legacy of such overuse creates weird processes and procedures after the fact, as anyone who has been forced to go get a special handwritten label for every box of documents that they want to take photographs from at the U.S. National Archives in College Park knows very well.

But the philosophical problem is important too. It’s common to trace the political cynicism and disengagement of the U.S. public to Watergate or more generally to a longer series of similar kinds of revelations about the gap between what government says and what government does. When you read through postwar documents once classed as Confidential or Secret, you tend to see that this gap was opened by a constant, incessant series of disjunctures between what officials said to each other, what they said to privileged clients and audiences, and what they said to a series of concentrically larger and wider publics. If you were an academic or a business leader or a local public official or really even just a generally educated or literate person, you had by the early 1960s already been witness to such disjunctures, potentially many of them, though most probably seemed inconsequential or even positive.

Positive in part because to be aware that there was a gap between what was said to the widest publics and what you knew was being said by those “in the know” was a sign of your advantage, that you were a beneficiary of information asymmetry.

Which, in the end, is the purpose of all confidentiality: to use institutional or structured power to create information inequality. That’s what privacy is: unequal access to information. Modern societies have defended privacy in various forms or ways out of a belief that such inequality is either the generative precondition of forms of individuality or sociality or that such inequality is a necessary constraint on state and civic power.

Which brings me back to my own uses of confidentiality. What is confidential in my professional world?

All of academia treats grading and other forms of assessment of students as highly confidential, which is reinforced by legal requirements. Though there are many contexts (employment, competitions, even candidacy for office) in which a student is expected to permit a review of a transcript.

Letters of recommendation are commonly regarded as highly confidential, though there are some subcultures in academia that follow practices in some other institutions where recommendees may be asked to write part or all of their own letters, and there are other contexts in which recommenders are cautioned that a recommendee may have rights to view all or part of a dossier that contains letters.

Similarly, assessments of candidates for hiring, tenure and promotion are almost always viewed as confidential, though with the same caveat that in some cases a candidate may have rights to review such assessments.

Judiciary hearings tend to be treated roughly as if they were assessment: both the conversations within such hearings and the outcomes are treated as confidential unless there is a circumstance in which someone is expected or required to disclose such outcomes as part of being reviewed or examined.

Past that point things get more variable.

At Swarthmore, many committees treat their deliberations as at least partially confidential in order to encourage members to discuss the issues at hand openly and to prevent the community at large from confusing proposals and drafts with final recommendations. I think that’s fairly typical, but this is a different kind of confidentiality than the protection of private information tied to individuals.

Swarthmore in its recent history has tended to treat institution-level enrollment data (numbers of majors or students taking courses in various departments, for example) and other curricular information as fairly confidential, disclosing it only in protected settings. I know this varies quite a bit: there are some universities and colleges where the faculty or even outsiders have much more access to this kind of information.

Most other institutional data (budget, admissions, investments, salaries etc.) is disclosed largely in highly standardized forms used by various ratings systems or consumers of information about higher education (including prospective students and their parents); the more complete or raw forms of such data are generally available only inside protected or privileged environments.

——–

Let’s work through this distinction. Would it be better if more institutional data were available in a less mediated, prepared or standardized form and minimally restricted by confidentiality or privileged access? I’m on the record as saying unambiguously yes: it would be better for us, better for higher education, and better for wider publics. It’s true that some of that data requires a lot of knowledge if you want to interpret it well. In a college the size of Swarthmore, there are so many quirks and variables involved in enrollment patterns, and such small populations as well, that it would be hard to say much of anything confidently if you were just let loose with the raw numbers of the last thirty years. Or so it seems on first thought. I don’t know that we’ve really put such a proposition to the test. One of the good things about the era of “big data” and open-source approaches is that there’s often some outsider who looks at a body of data and sees some tool or interpretation that wouldn’t ever occur to the people responsible for generating the data.

But what about assessments of individuals? Would I want everything I’ve ever said about students and colleagues to be public, much as Bradley Manning’s files made fairly personal assessments by some American officials public?

On one hand, yes, there are things I’ve said that I think needed to be said about the suitability of a person for a task or job or the qualifications of a student for a grant or a course that I would not like to say with the same candor or clarity to that person’s face. (In any professional context, there are things you say that no one should quote or repeat because they’re not said as purposeful contributions to a deliberation or process. Emergency medical technicians and doctors should have their gallows humor; professors should be able to vent about a student who is just not getting it.) But even when I do say such things to meaningfully influence what the institution is going to choose to do, I feel the need to be able to say them vividly, urgently, clearly rather than the softer, kinder, more indirect ways I’d express my reasoning if I had to tell the individual directly.

In a deliberation, in fact, I feel that obscured and indirect ways of speaking about judgments often help to disguise or enable discriminatory or unfair thinking. If you can’t say clearly what your criteria for assessment are and can’t speak with some urgency and clarity about how you make your judgments, then you also can’t call someone else out for their assessment even if you think it goes against the larger values of the institution.

Just as American diplomats and officials would lose clarity and decisiveness if they had to speak to each other privately in the polite or evasive diplomatic or political languages they are forced to use in public statements.

However.

What strikes me in both kinds of judgment is that this kind of confidentiality is also a system for the preservation of cultural capital. It makes decisive, vividly observed assessments of individuals possible by creating a space that is set aside from everyday rituals of social respect and consideration. But if you’re a person who has never been inside such a space and has no intuition or knowledge about how they operate, about what gets said or how decisions get made, then you’re almost certainly also a person who is at a big disadvantage in such assessments.

Which, in something of a vicious cycle, is another thing that makes confidentiality that protects individual-level assessments so appealing.

If those assessments are about competition or selection, the people who know the least about the process of selection and the characteristic rhetoric of judgment are the quickest or easiest people to eliminate from consideration. If those assessments are about seeking manipulable clients in foreign and military affairs, the people who know the least about the ways they’re being judged are often marking themselves as targets and pawns.

Any time I’ve participated in a grant process, a hiring decision, a comparative assessment of student performance, I think to myself, “There is so much I could tell the people who were totally uncompetitive in this process if only I were allowed to share some really frank or direct observations about what they did wrong”. But you can’t ever do that for all sorts of reasons: the people applying or being assessed didn’t sign up for advice and might not welcome it (frank or otherwise) and the time involved would be staggering. Moreover, the more that the issues involve social capital and basic competencies (an applicant who doesn’t understand the nature of a job or grant, a student with fundamental issues at the level of basic skills, a person who doesn’t know anything about the institution or people to whom he or she is speaking), the more you can’t just settle for a couple of sentences or a quick comment if you want to open a dialogue. You’re talking about almost everything at that point. And you have to grapple with the Henry Higgins problem: developing the cultural capital that lets someone work towards the inside of a system of selection or merit isn’t a value-neutral act of empowerment.

So at best we settle for generalized, abstracted advice: make eye contact! Run a spell-check! Don’t use an ugly font on your cover letter! Do a bit of research about what you’re applying for! Which rarely addresses the meaningful, decisive, systematic weaknesses that quickly separate uncompetitive students or applicants from the competitive in-group, because those problems are much more intricate and yet much more comprehensive than these simple kinds of mechanical rules.

In international policy, what would happen if every official or leader knew how they were really viewed by foreign diplomats, spies, and colleagues? The least savvy or self-aware officials would either wise up (leading to a loss of cannon fodder, pawns and patsies) or they’d drop deeper into delusional egotism (which might make them more dangerous or harder to manipulate). Nobody wants the latter but losing the former is just about one group of people losing their perceived edge or advantage. What if every leader or official knew what was said in private assessments about why someone succeeded in their goals or was given what they were asking for?

What would happen if every student and applicant knew exactly how they were being assessed, in the actual language of assessment? What if they saw the detailed language used to assess highly successful applicants or candidates? They might either acquire the cultural capital to speak effectively “in” to the social and professional world of their aspirations or they might suffer enormous humiliation to no good or productive end, or discover that their exclusion is beyond plausible remedy. The former is something that most people in education claim they’d like to see, and the latter something that no one seems to want. But if we succeeded in cutting down on the numbers of people who could easily be excluded from meritocratic processes of selection, we might also have to confront the absurdity of highly pyramidal, exclusionary structures of selection in the first place.

The way I process this whole issue is that transparency and disclosure can’t be an ideology in and of themselves. But neither can privacy. Both ends of that characteristically modern spectrum demand to be re-examined for the instrumental and imaginative propositions embedded inside of them.

Maybe we could find ways to more effectively translate or reveal the interior of processes and conversations that we reflexively see as confidential or classified–or more imaginatively describe the subjectivities and ways of speaking that we think need the kind of intimacy and nurture that confidentiality offers.

Posted in Academia, Politics, Swarthmore | 6 Comments

Teleology and the Fermi Paradox

I sometimes joke to my students that “teleology” is one of those things like “functionalism” that humanist intellectuals now instinctively recoil from or hiss at without even bothering to explain any longer to a witness who is less in-the-know what the problem is.

But if you want a sense of how there is a problem with teleology that is a meaningful impediment to thoughtful exploration and explanation of a wide range of existing intellectual problems, take a look at io9’s entry today that reports on a recent study showing that self-replicating probes from extraterrestrial intelligences could theoretically reach every solar system in the galaxy within 10 million years of an initial launch from a point of origin.
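(A crude back-of-the-envelope calculation, using illustrative numbers of my own rather than the study’s, suggests why a figure on that order is at least plausible: the galactic disk is roughly 100,000 light-years across, so a wavefront of probes expanding at an effective 5 percent of light speed would cross it in about

\[ \frac{100{,}000\ \text{light-years}}{0.05\,c} \approx 2 \times 10^{6}\ \text{years}, \]

which leaves ample room inside 10 million years for stopovers to replicate and relaunch.)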

I’ve suggested before that exobiology is one of the quintessential fields of research that could benefit from keeping an eclectic range of disciplinary specialists in the room for exploratory conversations, and not just from within the sciences. To make sure that you’re not making assumptions about what life is, where or how it might be found or recognized, and so on, you really need some intellectuals who have no vested interest in existing biological science and whose own practices could open up unexpected avenues and insights into the problem, whether that’s raising philosophical and definitional questions, challenging assumptions about whether we actually could even recognize life that’s not as we know it (or whether we should want to), or offering unexpected technical or artistic strategies for seeing patterns and phenomena.

As an extension of this point, look at the Fermi Paradox. Since it was first laid out in greater detail in 1975 by Michael Hart, there’s been a lot of good speculative thinking about the problem, and some of it has tread in the direction I’m about to explore. But you can also see how for much of the time, responses to the concept remain limited by certain assumptions that are especially prevalent among scientists and technologists.

At least one of those limits is an assumption about the teleology of intelligence, an assumption that intelligent life will commonly or inevitably trend towards social and technological complexity in a pattern that strongly resembles some dominant modern and Western readings of human history. While evolutionary biology has long since moved away from the assumption that life trends towards intelligence, or that human beings are the culmination of the evolution of life on Earth, some parallel speculative thinking about the larger ends or directionality of intelligent life still comes pretty easily for many, and is also common to certain kinds of sociobiological thought.

This teleology assumes that agriculture and settlement follow intelligence and tool usage, that settlement leads to larger scales of complex political and social organization, that larger scales of complex political and social organization lead to technological advancement, and that this all culminates in something like modernity as we now live it. In the context of speculative responses to the Fermi Paradox (or other attempts to imagine extraterrestrial intelligence) this produces the common view that if life is very common and intelligent life somewhat common that some intelligent life must lead to “technologically advanced civilizations” which more or less conform to our contemporary imagination of what “technological advancement” forward from our present circumstances would look like. When you add to this the observation that in some cases, this pattern must have occurred many millions of years ago in solar systems whose existence predates our own, you have Fermi’s question: where is everybody?

But this is where you really have to unpack something like the second-to-last term in the Drake Equation, which was an attempt to structure contemplation of Fermi’s question. The second-to-last term is “the fraction of civilizations that develop a technology that releases detectable signs of their existence into space”. For the purposes of the Drake Equation, the fraction of civilizations that do not develop that technology is not an interesting line of thought in its own right, except inasmuch as speculation about that fraction leads you to set the value of that term low or high. All we want to know in this sense is, “how many signals are there out there to hear?”
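For reference, the standard statement of the Drake Equation (parameterizations vary a bit from source to source) is

\[ N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L , \]

where \(N\) is the number of civilizations in the galaxy whose signals we might detect, \(R_{*}\) the rate of star formation, \(f_{p}\) the fraction of stars with planets, \(n_{e}\) the number of potentially habitable planets per planetary system, \(f_{l}\) the fraction of those that develop life, \(f_{i}\) the fraction of those that develop intelligent life, \(f_{c}\) the second-to-last term just quoted, and \(L\) the length of time over which such civilizations release detectable signals.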

But if you back up and think about these questions without being driven by teleological assumptions, if you don’t just want to shortcut to the probability that there is something for SETI to hear–or to the question of why there aren’t self-replicating probes in our solar system already–you might begin to see just how much messier (but more interesting) the possibilities really are. Granted, if the number that the Drake Equation produces is very, very large right up until the last two terms (up to “the fraction of planets with life that develop intelligence”), then somewhere out there almost any possibility will exist, including a species that thinks very substantially the way we do and has had a history similar to ours. But teleology (and its inherent narcissism) can inflate that probability wildly in our imaginations and blind us to that inflation.

For example:

In the two centuries since the Industrial Revolution really took hold, we’ve been notoriously poor at predicting the forward development of technological change. At the end of the 19th Century, the common move was to extrapolate from the rapid development of transportation infrastructure and assume that “advancement” would always mean travel growing steadily faster, cheaper and more ubiquitous. In the mid-20th Century it was common to assume that travel and residence in space would soon be ordinary and would massively transform human societies. Virtually no one saw the personal computer or the Internet coming. And so on. The reality of 2013 should be enough to derail any assumptions about our own technological future, let alone an assumption that there will be common pathways for the technological development of other sentient life. To date, futurists have been spectacularly wrong again and again about technology in fundamental ways, often because of the reigning teleologies of the moment.

It isn’t just that we tend to foolishly extrapolate from our technological present to imagine the future. We also have very impoverished ways of imagining the causal relationship between other possible biologies of intelligent life and technosocial formations, even in speculative fiction. What technologies would an underwater intelligence develop? An intelligence that communicated complex social thoughts through touch or scent? An intelligence that commonly communicated to other members of its species with biological signals that carried over many miles rather than at close range? And so on. How much of our technological histories, plural (because humanity has many more than one technological history), are premised on our particular biological history, the particular contingencies of our physical and cultural environments, and so on? Lots, I think. Even within human history, there is plenty of evidence that fundamental ideas like the wheel may not be at all inevitable. Why should we assume that there is any momentum towards the technological capabilities involved in sending self-replicating probes to other star systems, or any momentum towards signalling (accidentally or purposefully)?

Equally: why should we assume that any other species would want to, or would ever even think of the idea? Some scientists engaging the Fermi Paradox have suggested that signalling or sending probes might prove to be dangerous and that this is why no one seems to be out there. That is, they’ve assumed that a common sort of species-independent rationality would or could guide civilizational decision-making, so either everyone else has the common sense to be quiet or everyone who wasn’t quiet is dead because of it. But more fundamentally, it seems hard for a lot of the people who engage in this sort of speculation to see something like sending self-replicating probes for what it might really be: a gigantic art project. It’s no more inevitable than Christo draping canyons in fabric or the pharaohs building pyramids. It’s as much about aesthetics and meaning as it is about technology or progress. There is no reason at all to assume that self-replicating probes are a natural or inevitable idea. We might want to at least consider the alternative: that it is a fucking strange idea, one that another post-industrial, post-scarcity culture of intelligences with a lot of biological similarity to us might never consider or might reject as stupid or pointless even if it occurred to them.

Anthropocentrism has died slowly by a thousand cuts rather than a single decisive strike, for all that our hagiographies of Copernicus and Galileo sometimes suggest otherwise. Modern Western people commonly accept heliocentrism, and can dutifully recite just how small we are in the universe. Until we began getting data about other solar systems, it was still fairly common to assume that our own, with its distribution of small rocky planets and gas giants, was the “normal” solar system, which is increasingly obviously not the case. That too is not so hard to take on board. But contemporary history and anthropology give us plenty of reason to suspect that our anthropocentric (specifically modern and Eurocentric) understandings of how intelligence and technology are likely to interrelate are almost certainly equally inadequate to the reality out there.

The more speculative the conversation, the more it will benefit from a much more intellectually and methodologically diverse set of participants. Demonstrating that it’s possible to blanket the galaxy with self-replicating probes within ten million years is interesting (a rough version of that arithmetic is sketched below), but if you want to know why that apparently hasn’t happened yet, you’re going to need some philosophers, artists, historians, writers, information scientists and a bunch of other folks plugged into the discussion, and you’re going to need to work hard to avoid (or at least make transparent) any assumptions you have about the answers.
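For concreteness, here is the kind of back-of-envelope arithmetic that the ten-million-year claim rests on, sketched in Python. Every parameter here–probe speed, hop distance, replication pause–is an illustrative assumption of mine rather than a figure from any particular paper; the only point is that plausible choices land in that range.

```python
# Rough back-of-envelope for the "probes blanket the galaxy in ~10 million
# years" claim. All parameters are illustrative assumptions, not figures
# taken from any specific study.

GALAXY_DIAMETER_LY = 100_000     # rough diameter of the Milky Way disk, in light years
PROBE_SPEED_FRACTION_C = 0.1     # assume probes cruise at 10% of light speed
HOP_DISTANCE_LY = 10             # assumed distance between stops at star systems
REPLICATION_PAUSE_YEARS = 1_000  # assumed time to build copies at each stop

# Time to cross the galaxy if a probe simply cruised end to end.
cruise_years = GALAXY_DIAMETER_LY / PROBE_SPEED_FRACTION_C

# Number of replication stops along one radial expansion front.
hops = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY

# Total time for the expanding front: cruising plus all the pauses.
total_years = cruise_years + hops * REPLICATION_PAUSE_YEARS

print(f"Cruise time:      {cruise_years:,.0f} years")
print(f"Replication time: {hops * REPLICATION_PAUSE_YEARS:,.0f} years")
print(f"Total:            {total_years:,.0f} years")  # roughly 11 million years
```

Under these assumptions the total comes out around eleven million years, which is why the “ten million years” figure gets cited so casually–and which is exactly why the interesting question is the one the arithmetic can’t touch.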

Posted in Defining "Liberal Arts", Generalist's Work, Sheer Raw Geekery | 5 Comments

Historians Don’t Have to Live in the Past

In what way is the American Historical Association’s notion of a six-year embargo on digital open-access distribution of dissertations even remotely sustainable in the current publishing and media environment surrounding academia?

On one side, you have disciplinary associations like the Modern Language Association and the American Anthropological Association, which have somewhat similar traditions of tying assessment and promotion to the publication of a monograph, and which are to varying degrees embracing open-access publishing and digital dissemination and trying to work out new practices and standards.

On the other side, you have disciplines that have no particular obsession with the idea of the published monograph as the standard.

Whether or not the published monograph is or ever was a good standard for judging the worth of a historian’s scholarship, how long does the AHA think that historians can stand alone in academia as a special case? “Oh, we don’t do open-access or digital distribution until we’ve got a real book in hand and are fully tenured, those few of us remaining who are in tenure-track positions, because that’s a fundamental part of history’s particular disciplinary structure.”

Um, why?

“Because history dissertations take a long time to write and thus need protection?” Right, unlike anthropology or literary criticism or other fields in the humanities. FAIL.

“Because many publishers won’t publish an open-access dissertation?” Right, so this assumes: a) that the dissertation will be so little revised that the two texts would be essentially identical and b) that the magic fairy-dust of a book nevertheless makes it the real benchmark of a properly tenurable person. E.g., “Oh noes, we couldn’t decide if someone’s scholarship was tenurable from a dissertation that is nearly identical to a book”. Here’s where the real fail comes in, because it reveals how much the disciplinary association is accepting the clotted, antiquated attachment of a small handful of tenured historians to their established practices even when those practices have had any semblance of reason or accommodation to reality stripped from them.

Let’s suppose that university presses do stop publishing essentially unrevised dissertations. I can’t blame them: either they need to publish manuscripts that have some hope of course adoption and wider readership, sold at a reasonable price (call that option #A), or they need to price up library editions high enough that the remaining handful of “buy ’em all” libraries will make up for the loss of libraries that buy in a more discretionary fashion (option #B).

You can understand why the publishers who are largely following option #B would not want to publish monographs that were marginally revised versions of open-access dissertations, because even the richest libraries might well decide that buying a $150 physical copy is unnecessary. But by the same token, again, why should a tenure and promotion process value the physical copy over the digital one if they’re the same? Because the physical copy has been peer-reviewed? Meaning, if two scholars who do not work for the same institution as the candidate have reviewed the manuscript and deemed it publishable, that alone makes a candidate tenurable? Why not just send out the URL of a digital copy to three or four reviewers for the tenure and promotions process to get the same result? Or rely more heavily upon the careful, sophisticated reading of the manuscript (in whatever form) by the faculty of the tenuring department and institution?

What the AHA’s embargo embarrassingly underscores is the extent to which many tenured faculty have long since outsourced the critical evaluation of their junior colleagues’ scholarship to those two or three anonymous peer reviewers of a manuscript, essentially creating small closed-shop pools of specialists who authenticated each other with little risk of interruption or intervention from specialists in other fields within history.

Thirty years ago, when university presses would publish most dissertations, you could plausibly argue that the dissertation which persistently failed review and was not published by anyone had some sort of issue. Today you can’t assume the same. Maybe we never should have given over the work of sensitive, careful engagement with the entire range of work in the discipline as it is embodied in our own departments, but whether or not that was ever a good idea, it isn’t one now and can’t be kept going regardless.

Suppose we’re talking about option #A instead, the publishers who are being more selective and only doing a print run of manuscripts with potential for course adoptions or wider readership. Suppose you use that as the gold standard for tenurability?

That’s not the way that graduate students are being trained, and not the way that their dissertations are being shaped, advised and evaluated. So you would be expecting, with no real guidance and few sources of mentorship, that junior faculty would have the clock ticking from their first day of work on adapting their dissertations for wider readability and usefulness. That’s a dramatic moving of the goalposts in an already sadistic process. You could of course change the way that dissertations are advised and evaluated and therefore change the basic nature of disciplinary scholarship, which might be a good thing in many ways.

But this would also widen the gap between the elite institutions and every other university and college in even more dramatic fashion: writing scholarship that had market value would qualify you for an elite tenure-track position, while writing scholarship that made an important if highly specialized contribution to knowledge in a particular field of historical study would qualify you for more casualized positions or for tenure-track employment in underfunded institutions that would in every other respect be unable and unwilling to value highly specialized scholarship. (E.g., institutions with libraries that could not acquire such materials, curricula where courses based on more specialized fields and questions could not be offered, and little ability to train graduate students in fields requiring the research skills necessary for such inquiry.) In terms of the resources and needs of institutions of higher learning, it arguably ought to be the reverse: the richest research universities should be the institutions which most strongly support and privilege the most specialized fields, and which therefore use tenure and promotion standards that are indifferent to whether or not a scholar’s work has been published in physical form.

Yes, it’s not easy to move individual departments, disciplines or entire institutions towards these kinds of resolutions. But it is not the job of a professional association to advocate for clumsy Rube Goldberg attempts to defend the status quo of thirty years ago. If individual faculty or whole departments want to stick their heads in the sand, let that be on them. An organization that aspires to speak for an entire discipline’s future has to do better than that. The AHA’s position should be as follows:

1) Open-access, digitally distributed dissertations and revised manuscripts should be regarded as a perfectly suitable standard by which to judge the scholarly abilities of a job candidate and a candidate for tenure in the discipline of history. A hiring or tenuring committee of historians is expected to do the work of sensitive and critical reading and assessment of such manuscripts instead of relying largely on the judgment of outside specialists. The peer assessment of outside specialists should be added to such evaluation as a normal part of the tenure and promotion process within any university or college.

2) The ability of a historian to reach wider audiences and larger markets through publication should not become the de facto criterion for hiring and tenure unless the department and institution in question comprehensively embraces an expectation that all its faculty in all its disciplines should move in the course of their careers towards more public, generalized and accessible modes of producing and disseminating knowledge. If so, that institution should also adopt a far wider and more imaginative vision of what constitutes engagement and accessibility than simply the physical publication of a manuscript.

Posted in Digital Humanities, Information Technology and Information Literacy, Intellectual Property, Production of History | 19 Comments

The Codes of the Political Class

One of Benedict Anderson’s most famous insights into modern nationalism is the role he assigns to national newspapers in creating the sense of “simultaneity” that gave people across the national territory a sense that they were experiencing events together at the same moment as their fellow citizens.

There’s a different kind of simultaneity visible today in the relationship between the US press and the US government. I mention Anderson’s analysis rather than something like Chomsky’s accusation of a deliberate, instrumental collusion between the owners of the press and other capitalist elites because I don’t think the simultaneity of the New York Times editorial staff and the top ranks of the federal government is self-aware or even necessarily self-interested. But when you see it happening, you do get a sense of how tightly bound together the social worlds of the two groups are, how much they look at and speak about the world in the same terms, how tightly the discourse of one institutional world inflects the other.

The best example is to watch the speed and breadth with which the circumlocutions of governmental speech disseminate through the old-guard mainstream media, and how using that speech becomes a self-fetishized sign of reliable “objectivity”. During the Bush Administration, for example, “enhanced interrogation” was quickly adopted into the common language of press reporting. “Torture”, the simpler and clearer word that would be preferred by both E.B. White and George Orwell, became instead the sign of partisan and polemical writing (within the culture of mainstream editorial).

This is not new: I recall day after day of bleak rage reading Christopher S. Wren’s reporting on South Africa in the late 1980s, which took the distinctive discursive language of official reports by the apartheid government as ‘fact’ and treated information from anyone else as rumor or opinion until confirmed by “reliable sources”. But Wren and other reporters for the dominant US papers in the 1970s and 1980s were often directed to fall into line with the Cold War proclivities and loyalties of the old-guard publishers, in a manner that now seems almost charmingly quaint in its crude directness. Today it’s become a more automatic, unconscious and sociological kind of simultaneity, a synchronicity.

Case in point: it’s taken only a few days for the Grey Lady to almost autonomically adopt circumlocutions like “the military intervention that deposed Mr. Morsi” rather than “coup” (thanks to Robert Wright for spotting this).

I recall, while living in Zimbabwe in the 1980s and 1990s, that the Zimbabwe Herald, published by the government, was worth reading less for an account of news-worthy events in Zimbabwe and the world and more as a window into what the dominant factions within the government wanted people to believe was happening. While that’s always been true to some extent about the major press in any modern society, it has lately become true in the United States in a more specific sociological way than it has been for some time.

This is not ideology we are seeing: it is consciousness, a shared imaginary that is both the literal product of overlapping social worlds and evidence of an increasing isolation of the American political class from wider social and cultural networks (both national and otherwise). In that narrower world of the political class, it is more important to operate within elaborated pseudo-statutory discursive frameworks whose etiquette makes a Tokugawa-era tea ceremony look like grabbing a quick bite at the mall food court than it is to make any kind of common-sense connection with either constituents or global publics. Within that sensibility, it is less important for reportage to clearly narrate and describe what is happening than it is to communicate sympathy between press bureaucrats and government bureaucrats. It’s like witnessing two fireflies signalling to each other across the darkness so that they can come together to mate.

Posted in Politics | 11 Comments

There Are More Things in Heaven and Earth Than Dreamt of in Your Critique

Just back from some research work that took up my energy for writing and thinking, I spent some time catching up on blogs and social media. I followed one link out from a Facebook friend to Paul Mullins’ excellent Archaeology and Material Culture blog, which often has content that I bookmark and mean to respond to but never quite get around to tackling. The linked essay was actually an older one, on “ruin porn”. This caught my eye because I like ruin photography quite a bit, as well as the general practice of “urban exploration”.

Mullins has a typically careful, considered, densely hyperlinked appreciation of the topic that exemplifies the best kinds of curation that digital culture has to offer. But as I followed some of his links to the critics of “ruin porn”, even the more subtle and careful critiques like John Patrick Leary’s response to ruin photography centered on Detroit, I found myself thinking about some tendencies in humanistic writing that I think first took shape in the 1980s but continue to be a limitation of some humanistic intellectuals both inside and outside of the academy.

I’m aware that some of what I’m about to say commits some of the sins I’m identifying, partly because I want to abstract some of these observed problems away from any single one of the links that Mullins offers rather than turn this into “my blog vs. that blog” (especially when some of the linked blog entries in Mullins’ essay are two or three years old: the presentism of digital culture sometimes means that people are deeply puzzled when you start a debate about entries that the authors wrote years ago). None of the critiques that Mullins includes in his overview perfectly exemplifies the issues I’m about to describe, and in many cases they simply reminded me of overall frustrations I have with large strains and tendencies in work that I want to like more than I do, including issues I sometimes see in the writing of my students.

So here are six tendencies that I have a problem with:

1) The complaint against omission and the demand for an impossible culture. Reading through Mullins’ links, some of the critics of ruin photography complain that many ruin images leave out people, excise the history of how a building or place came to be ruined, or omit the political economy of abandonment and neglect. I see some similar, if less experienced and articulate, arguments in some student writing: asked to critique, many students opt to accuse a scholar or writer of leaving out, neglecting, forgetting, rather than to directly disagree with or argue against a claim or analysis. At the least, this often amounts to a weak, evasive or highly selective form of intertextuality: it lets the critic pose one text that they happen to know against the text they’re critiquing, without having to have anything like a systematic knowledge of entire genres, tropes or bodies of scholarly thought, and without having to know whether or how the two texts being contrasted might plausibly be expected to be in relation to one another. (In the case of my students, I sometimes see them holding one author accountable for having neglected or forgotten another work which was produced at a much later date.) So looking at ruin photography in isolation without asking what histories of visual culture it draws from or is talking to, or whether the critic has the same expectation of ruin photography of other sites and places or even of non-ruin landscape and architecture photography (which often also leaves out people and political economies and processes), is a problem.

Two photographs that leave out people, histories and political economies, for example, and are almost entirely “aestheticized”:

[Two photographs, titled “spray” and “was”]

It’s a bigger problem when this strategy demands an impossible culture–or when it holds all culture accountable for not being a single idealized type of work, which is (surprise) frequently either the kind of work that the critic does or the kind of work that the critic professionally identifies with. This is when criticism starts to look like the worst Christmas list from the most overprivileged child, an endless list of unprioritized and urgent demands. There is no photography, no performance, no representational work that can include totality, and even the desire that they should is a terrible misfire.

This has always struck me at the least as the kind of thing that a dullard senior professor does when called upon to act as a discussant or critic, to compile a long list of omissions from the work on offer without any reading of the meaning of those omissions and the possibilities of their inclusion. Taken seriously, this kind of demand actually encourages the production of expressive and interpretative work that looks like a horribly literalist editorial cartoon with labels on all its images, less an analysis and more a catalogue. What does a photograph of a ruin look like if it includes all the people, all the processes, all the political economies, all the histories, that went into making the ruin? It looks like an archive or an exhibition, and an endless and imaginary one at that.

One of the wellsprings of this kind of wish for an impossible culture, I think, is the way that a sort of backdoor empiricism infiltrated humanistic practice in the academy via historicism. Much as the ‘hard’ social sciences have come to rest on an almost parodistically exaggerated distortion of the positivism that they (wrongly) attribute to the natural sciences, some humanistic work trying to do useful sociopolitical work slowly but surely came to take on board the dullest kind of empiricism of the most literalist historian or sociologist, to the point that this mode of criticism can appreciate or admire no text for what it is or does, only complain of what it is not.

2) The assertion of ownership. The critique of ruin photography and urban exploration often comes down to this: that its practitioners are carpet-baggers, cosmopolitan passers-by, tourists and short-termers, inauthentic. At best, these formulations are true but banal. Places, communities and people give up different stories at different time scales as well as in response to different styles of knowing. There are things you can’t know about a place after a day, a week, a year, a decade, a life–and things you can’t know about a place if your decade there was from age 10 to 20 or from age 60 to 70. But this proposition is always reversible. The longer you’re in a place, the less able you are to see other things. Banality dulls the ability to see beauty and horror. Watching a building crumble slowly over the years, knowing intimately the processes of its abandonment, may make it hard to see how startling or interesting the results of that history might be to fresh eyes. At its worst, playing the authenticity game puts a very deadly weapon in the hands of repressive actors. It’s always something that can be turned back on the critic: there will always be an experience or subjectivity beyond the critic’s own boundary, a point at which they will also have to borrow from, rework, or rely upon accounts of experience or embodiment that the critic doesn’t have in themselves and didn’t live in, or they will have to distract mightily from the presumption of their own assumed authority. (Say, the arrogance of anyone speaking for what “Detroiters” in general think of Detroit’s history and landscape: I don’t recall there being a plebiscite or poll that documented what most of ‘them’ think, and I don’t think that Detroiters as a whole think the same things or have lived the same lives in relationship to their landscapes.) Coupling the right to represent to a privileged subject position is a bad move, and it’s easy to fall into that from the simpler and often useful assertion that a particular representation has assumed its own privileged relationship to the truth of what it shows. But in fact ruin photography often makes its outsider status quite clear, and manages to see something in ruins that locality and rootedness do not see.

3) Starting with accusation rather than curiosity. I think this tendency is especially deadly to humanistic study and writing, a point that is also made by the recent Harvard College report on the humanities. As the report puts it, “among the ways we sometimes alienate students from the Humanities is the impression they get that some ideas are unspeakable in our classrooms.” Or in expressive culture and critical interpretation at large. What is striking about some of the critiques of ruin photography is that they do not start with the question, “So why are these images being produced? Where are they being produced? What do their creators say about them? Who views them? Who likes them?” in a time when all of these questions are more richly answerable on a vast sociocultural scale than ever before. And I do not mean to suggest they be answered in the dullest or most literally sociological ways. Instead, ask them as if the answers might be a surprise, because when asked in that spirit, they often are. We’re not blank slates: what we see or know forms out of what we already see and know, and we’re not discovering an already-made ontological reality that only waits for us to ping it with the right gadget. Curiosity is a spirit, an attitude, a starting posture. It’s a way of tuning your instrument before the performance, or seeking ongoing inspiration for inquiry. Critique that follows something like curiosity, something like an ethnographic understanding of the thing we want to criticize, is more powerful both because it is truly earned and because it is more precisely targeted. When you have to fling around tropes like “hipster” to hit your target, the humanities that results is about one step above a Buzzfeed listicle.

4) The lost opportunities of anti-curation. Again, in a time when it’s possible to bring together large bodies of text and representation and work the resulting aggregations of “medium data” as a way to think with and invent new possibilities of seeing, it seems depressing to see some kinds of humanistic critique involved in the disembedding of expressive culture, in the assembly of cherrypicked galleries of grotesques, of being stuck in the flow of digital attention without ever making strategic decisions to go beyond or below the flow of the picture or meme that floods past our doors. That flood is a marvelous thing, and there are incredibly fruitful ways of knowing and interpreting that are growing out of it. But we ought to be able to take whatever detritus washes up in front of us and then dive deeper into the wreck or look out at all the flotsam and jetsam around it. Sure, suddenly it seems that here are all these photographs of Detroit in ruins. But go wider and suddenly there are all these photographs of ruins in general, not just in post-industrial North American cities. Or even wider and suddenly there are all these photographs, period. Suddenly the critic’s perception of a boundary around one trope or expressive moment requires a much stronger defense. Go deeper and suddenly a fascination with ruins seems either more historically interesting as an affect of modernity or much more banal as a consequence of the movement of people and capital. None of the widening and deepening that curatorial practice entails forbids the critic to criticize, but it does place important burdens and challenges upon critique.

5) The mistrust of beauty and pleasure. It’s really striking in some criticisms of ruin photography to see the critics resist or deflect the appeal of the images themselves, often via the term “ruin porn”. Like “food porn”, that can be a lightly ironic way for people who actually produce and appreciate the images to label their own desire, but some critics of the form seem to use it much more seriously as an indictment of aestheticization itself, as if any image or performance or text which produces desire and pleasure is prurient and unsavory. This is again a kind of backdoor positivism sneaking into the picture, as if a more real and less aestheticized image would be both always possible and inevitably preferable. It’s not like that: street photography, for example, with its typical emphasis on naturalism and embodiment and documentary realism, is another aesthetic, whether it stages itself among decline or in the midst of wealth. The argument that we should have a critical or systematic preference for one aesthetic over another is the hardest one in the world to make in the humanities, and that difficult labor is something that polemicists typically bypass, whether they’re self-declared conservatives insisting on the “Western tradition” or progressives complaining about hipsters aestheticizing ruined buildings in Detroit. It’s not that “this ruin is beautiful” is a sufficient justification for an image in its own right for anyone but the producer of the image; it’s that it is a possible justification. It’s not just positivism that sneaks into the picture here but productivism, the proposition that culture has work to do, and that the work of producing culture should always somehow justify the investment of time and resources not just of the artist but of the viewers.

6) Which I think leads to my last complaint: that this kind of critique doesn’t look to recuperate, reimagine, reinterpret but to forbid. In some sense there should be no image, no expressive work, no text, no performance whose existence we regret to the point of wishing it had never happened. This is where there is often a disjuncture between humanistic work on the past (which accepts the inevitability of the texts and performances of interest and is therefore often capable of interpreting them in fresh and novel ways rather than just wishing they had never been) and work in the present, which much more often attempts to instruct or set boundaries around the creation of culture and interpretation in the near-term future.

All of this, by the way, isn’t important just because of how it affects acceptance of work by scholars and public intellectuals, or how it affects the institutional status of the humanities. Much of this is also a good explanation for why contemporary progressive intellectuals struggle so hard to make headway in the politics of culture, as these tendencies both hobble any dialogue between critics and practicing artists, performers and producers of expressive culture and inhibit humanistic thinkers from producing their own persuasive cultural artifacts outside of the institutional networks that provide secure guarantees of value and praise for their work. If a person read nothing but a diet of the strongest and most dogmatic critiques of ruin photography and was then handed a camera and told to go take a picture of a ruin, that person would need an extraordinary bulwark between their creative impulses and their critical training to ever press the shutter button. Or, more likely, they would find themselves refusing to take the picture, knowing in advance the inadequacy of the gesture.

Posted in Academia, Blogging, Popular Culture, Production of History | 8 Comments