Eeyore and the Unintended Consequence

I don’t expect much to come of John Holbo’s careful breakdown of the non-philosophy underlying Megan McArdle’s blanket antagonism to all health care proposals, but there’s one point buried in there that’s worth pulling out for its general usefulness.

Namely, that loosely libertarian (e.g., McArdle/Brooksian surface-level libertarianism used mostly to defend fixed programmatic commitments) fears of the unintended consequences of action by the state are empirically and philosophically messed up. They’re not really based on a comprehensive history of the consequences of state action, and they’re not really based on any kind of consistent view of structure-agency interaction. They seem to me to take really very specific kinds of analyses of how 20th Century states engaged in projects, especially high modernist projects, which had many undesirable and perverse effects, and generalize them to a universal law of history. Jane Jacobs and James Scott, to cite two examples of those specific analyses, are talking about concrete episodes and perhaps modestly generalizing them to certain movements, ideologies or specific bureaucratic formations.

I think you can insist on the importance of unintended consequences as a part of a generalized theory of the relationship between structure and agency. I think you can also look at discrete intellectual movements or episodes where important actors believed that they could eliminate or master unintended consequences. Certainly that’s what a lot of rationalist high modernism implied, that good planning inputs based on proper ideological premises could reliably produce exactly the intended systematic changes.

The thing of it is, as John points out, that if you’re going to argue that unintended consequences follow on any major change or disruption to an existing system, then there are two additional points which have to be made. First, there’s not much of a difference in this sense between the state and other major institutional actors which have some degree of agency over a system or structured practice. If three or four large pharmaceutical firms decide to change the way that they interact with the existing health care system, all sorts of unintended or unanticipated changes may follow. I can’t see any reason why someone like Megan McArdle would be intensely anxious about government and yet be relatively sanguine about industry, civic organizations, and so on, if the issue is that it is impossible to account for or anticipate unintended consequences.

Unless at the end of the day, this is about a near-religious belief that institutions of the market somehow always produce a good unintended consequence. That’s a bit hard to work out if you’re that radical and simplistic a Manichean, since either the market is the first condition of human history and therefore produced non-market institutions (whoops! not a good unintended result from that perspective!) or the polity is the first condition and therefore produced the market (whoops! the polity produced a good unintended consequence!). Of course, it’s not a good description of the contemporary American health care system, either, which has been produced by complex interactions between market, state and non-state institutions.

Second, if you’re really interested in the unintended consequences of structure-agency loops on complex systems, you have to allow that many of them may be positive, neutral or be spandrels of some kind or another, as well as being negative or destructive. If you believe that all or even the strong majority of unintended consequences are negative, then you don’t really believe in contingent or unintended outcomes at all. You believe in a kind of declensionist, entropic worldview that holds that everything will in time inevitably go wrong. You’re basically Eeyore. Which is ok, but it is hard to make an Eeyore-style defense of anything, whether it’s the current health care system or anything else. You’re sure it’s all going to go wrong, and in fact, must be sure that the current system is less good than something in the past. If you think the current system is acceptable or worth defending, then you must be sure that unintended consequences sometimes lead to good results, because nobody set out to build exactly the health care system that Americans presently have.

Like John, I’m all for accepting that the gap between intent and practice will inevitably be quite wide, and that in that gap, all sorts of devils can find room to dance. It’s just that those kinds of gaps also have thermals upon which angels fly. The mere existence of such gaps is not a reason to simply squat dully upon the status quo, or unchain the magic market mechanism to come along and sweep us to the promised land, nor is it reason to simply avoid discussion of what we ideally would like to see happen on the grounds that whatever we would like, unintended consequences will ensure that we never get it.

Posted in Politics | 3 Comments

The Limits to Shill

I continue to feel pretty diffident about the controversy among anthropologists about the Human Terrain Team and other uses of qualitative social science by the U.S. military over the past decade or so. The issue for me is not whether this is an intrinsic misuse of anthropological or qualitative research, blurring the lines between legitimate fieldwork and other uses of ethnographic methodology. That’s partly because I feel that anthropologists have a tendency to draw overly stark lines between their own disciplinary traditions and all other forms of fieldwork, usually in the process implying that all other kinds and styles of qualitative fieldwork are both ethically and methodologically suspect. Ethnography is spying of a kind. Or to flip it around, there are plenty of kinds of intelligence-gathering that take place in plain sight.

The issue for me is not whether institutions like the police or the military can legitimately use anthropology. It’s whether those institutions are prepared to accept what they’d learn by doing that kind of research honestly, or whether the use of anthropologists or other researchers is just a way to put sugar on a brute-force shit sandwich. At least in some cases, what our military or other militaries might have learned in the past by doing genuine, sophisticated anthropological research is that there is no way to achieve the declared objectives of a military mission, and that you either alter the objectives or end the mission. In Afghanistan, for example, I’m not sure that it would lead to successful counter-insurgency if all ISAF forces became profoundly culturally sensitive. It would help, as would ending the casual overuse of aerial bombardment and brutal intrusions into communities. But culturally sensitive or not, ISAF forces don’t live there in the truly long-term sense. They’re not a part of villages where the residents have to make choices about whether to collaborate with or tolerate Taliban forces who have an intimate knowledge of the social networks in that village and work with people who do live there.

So it’s fine to study something using any method or specialists you like. Just be ready to hear what that method is going to tell you.

——-

A very different example of the same issue. The New York Times today has a piece on how market researchers and others are using computer analysis to try to identify embedded sentiments within online discourse. You don’t need a computer to do that: just use the hermeneutical engine located inside your own head and do some reading.

Use computers if you like to spot trends, I suppose, but to really know what a surge in expressed feelings in any given Internet forum means, you really have to be a long-term follower of the flow of conversation and communication at that forum or others like it. Saying you’ll do “sentiment analysis” with computers on online forums is like saying you’ll do “sentiment analysis” on a literary work. I don’t doubt that something roughly along those lines is possible, but to really make any sense of it is going to take a human being doing interpretation the old, slow and human way. An algorithm that spots a change in word usage in a long-running forum is only alerting its minders to an event: making sense of that event is another matter.
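To make the point concrete, here is a minimal sketch of the kind of alert such an algorithm can raise. (This is an illustration, not a real product: the function name and thresholds are invented for the example.) It flags words whose relative frequency jumps between two windows of forum posts, and that's all it does. Everything after the alert, deciding what the shift actually means, is interpretive work.

```python
from collections import Counter

def frequency_shift_alerts(old_posts, new_posts, min_count=5, ratio=3.0):
    """Flag words whose relative frequency jumps between two windows
    of forum posts. The alert says nothing about *why* usage changed;
    that interpretation is left to a human reader."""
    def word_stats(posts):
        words = [w.lower() for post in posts for w in post.split()]
        return Counter(words), max(len(words), 1)

    old_counts, old_total = word_stats(old_posts)
    new_counts, new_total = word_stats(new_posts)

    alerts = []
    for word, count in new_counts.items():
        if count < min_count:
            continue  # ignore words too rare to signal anything
        old_rate = old_counts.get(word, 0) / old_total
        new_rate = count / new_total
        # Floor the old rate at one occurrence so a previously
        # unseen word doesn't cause a division by zero.
        if new_rate / max(old_rate, 1.0 / old_total) >= ratio:
            alerts.append(word)
    return alerts
```

Run against a forum where the word "strike" suddenly surges, this would dutifully flag "strike"; whether that surge is a union drive, a bowling league, or sarcasm is exactly the question the algorithm cannot answer.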

The issue is not at all that market researchers are trying to analyze sentiment at any rate, or even the methods that they’re choosing to analyze it with. It’s whether they’re prepared to understand what they find out, and to act upon it. Just to take an example from the article, when Wal-Mart looks at “Labor Force and Unions”, it finds that there is a lot of negative sentiment. The Times suggests a public-relations strategy as a response to that finding. Wrong answer. If you’re really doing sentiment analysis, you’d understand that this strategy is a great way to make more negative sentiment. Because the reason there is negative sentiment is that Wal-Mart’s labor policies are pretty bad.

That’s like suggesting that you send a shill poster into a fan forum to promote a bad new movie or video game, to counter negative sentiment you’ve detected. If your shills are new posters, everyone almost immediately detects the shill and all you’ve done is increase the negative sentiment. If it’s an old, established poster, you’d have to pay that person off a great deal to get them to shill, unless they routinely shill for things, at which point see, “increase the negative sentiment” above. A non-shill poster who starts sounding like a shill is obvious; a non-shill poster who somehow manages to promote something in an appropriately cynical, somewhat denigrating way is not really helping to counter negative sentiment in the first place.

What you really learn by doing sentiment analysis (by computer or the better human-brain-reading-things way) is, “Don’t do whatever the hell it is that is pissing people off if you really can’t afford to piss them off”. If you’re Wal-Mart and you’re seeing some impact from your bad labor-management, do better by your employees. If you’re sitting on top of a $100 million cinematic turd, try to avoid shitting out celluloid crap in the future. If you can’t or won’t change, don’t think that there’s a magic trick buried in “sentiment analysis”.

What happens when there’s a new raft of consultants selling some new thing like sentiment analysis is that as the new specialized service develops, the people selling it are increasingly pressured by clients to make the information come out in a way that is soothing and tranquilizing, that either says that the problem doesn’t exist or that there’s some superficial cosmetic strategy that lets the organization go on doing something flawed or self-interested while somehow getting rid of all the criticism and negativity that behavior has incited. The most avid buyers of that kind of information tend to be the mid-rank bureaucrats and managers who are less concerned with long-term missions and more concerned with their own short-term prospects, and they can dance a beautiful duet with consultants or pet researchers who are willing to whisper them sweet nothings as long as the salaries or payments keep on coming.

For institutions that really want to know something that they don’t already know, and think it’s important for the long-term mission, there has to be some willingness and preparation to hear information that’s not palatable and to act on it in that form, up to and including ending the practices or projects involved outright. Because the issue is really not the use of novel methods: it’s what’s true in the world around you, and what keeps some organizations from being able to hear or understand those truths. The world itself doesn’t conform to what middle-managers and bureaucrats sometimes need it to be. Sometimes it really is murky, or ambivalent, or confusing. Sometimes it’s exactly the opposite of the way that the CEOs and generals and political leaders pretend it is. Sometimes if you want to know how people feel, you’ll just have to join them down in the fogginess of daily life and grope your way around the hermeneutical mist like everyone else.

Posted in Information Technology and Information Literacy | 5 Comments

Ghosts in the Machine

Every once in a while, I’ll catch a colleague from the natural sciences playfully tweaking a humanist for the volume of prose they lavish on a routine task like writing an email or a committee report. (Or, yeah, a blog post. I am guilty, guilty, guilty.)

Humanistic scholars tend to cling to their words because that’s the alpha and omega of our scholarly work. I don’t enjoy listening to a full paper being read in a monotone during a conference panel, but good humanistic research often can’t be boiled down to a poster or a few slides. Persuasion, rhetoric, and description are the research, or at least they can’t be disentangled from it. This is also why humanists don’t collaborate or co-author very often, why their work tends to be private and individualistic even though it circulates through their disciplinary and topical communities, because voice and perspective are so much a part of carrying out and disseminating any scholarly study.

I think this is why I feel such discomfort every time I read about a case of academic plagiarism that ends up revealing that a scholar was making extensive use of research assistants to acquire archival materials or data, digest or summarize that data, and prepare drafts of work about that data. I’ve made some use of interviews conducted on my behalf by assistants (or by a co-author in the case of my work on children’s television), but that doesn’t seem radically different to me from quoting or citing interview transcripts deposited in archives by other researchers. I’d feel personally uncomfortable with telling an assistant to go into an archive or library and get me everything on a particular subject or topic, but as long as I read what they brought back myself rather than rely on their summary representation of it, I guess that could work okay in some cases. I know I wouldn’t be comfortable at all with drafts prepared by an assistant: if my name is on it, I wrote it or co-wrote it with a peer author based on extensive collaboration and conversation between us.

There are all sorts of acceptable variations and shadings of these practices, but I think a humanist or social scientist who is churning out assembly-line monographs, articles or reviews where all the work is largely done by unnamed research assistants has crossed an important line and isn’t really doing scholarship. It’s more being a brand-name than a scholar at that point. I’d really like to see most universities start to rein in cases of faculty who produce a large volume of work in this fashion. Even when it doesn’t lead to plagiarism or sloppiness, it breaks an important connection between research and the individual who produced it. That connection is why we trust research in the way that we do, because we know that a scholar has personal reputation capital on the line, has skin in the game. The more we accept excuses like, “I didn’t check the work of my assistants sufficiently”, the more that the work of all scholars becomes suspect. If you don’t clearly describe what is and is not ok when it comes to the use of research assistance, you end up making it possible for some research-factories to claim that they’re just doing what everyone else does.

The norms of co-authoring differ in the sciences (and some of the social sciences) because their work is genuinely and necessarily collaborative in a way that fundamentally differs from most humanistic scholarship. But here too it has become increasingly clear that the system is breaking down. I suspect it’s long been abused by predatory powerful figures who stick their names on work to which they are only nominally connected. The ghostwriting scandals of the last few months, which Margaret Soltan has tracked so well, are corrosive on a whole new scale.

Soltan’s most recent post on the subject is about the case of a McGill professor who put her name on an article written by a pharmaceutical-supported organization. The important thing about the details of this case is that they show how much the routine practice of scientific co-authorship has long since slippery-sloped its way to allowing serious abuses. Look at the official statement that the professor in question has issued. She says, “I wrote a portion of the article, but not all of it, although only my name was listed as its author”. She continues, “Other parts of that article were written with the assistance of DesignWrite, a firm which, it turns out, was employed by a pharmaceutical manufacturer to assist in the development of academic articles. I made an error in agreeing to have my name attached to that article without having it made clear that others contributed to it.”

The statement aims to exonerate her by representing the practice as a routine act of co-authorship that went awry because the professor wasn’t diligent enough about looking at all of the particulars involved. She notes that she wasn’t paid for the article and that it underwent successful peer review.

Come on! Pull the other one! Did an article already partially written by DesignWrite fall from the sky onto her desk like the Coke bottle in The Gods Must Be Crazy, and she said, “Why, how lovely, this just happens to be a wonderful description of research I’ve already completed?” Who did she think employed DesignWrite? Santa Claus?

“It turns out” is a fantastic little phrase in that statement, and really needs to become a standard part of celebrity apologies. “A gun was located near my person, and it turns out that the gun had a bullet in it, which it turns out hit someone. I made an error in having my finger near to the trigger and allowing some of my nerve impulses to cause a muscular contraction in that approximate location at the same time that it turned out that the gun was near to that finger.”

Without getting in the way of legitimate practices, I think every university and college in this country can make a clear policy statement: if your name is on it, it’s your work. You did the research, you created a report, statement, abstract, essay or book that disseminated that research. We can work with other researchers or writers, we can get assistance of various kinds. But there shouldn’t be any ambiguity about working with a firm of ghostwriters paid by a business that has a financial interest in the outcomes of the research. That’s not scholarship. If a pharmaceutical company wants to publish that kind of writing, let them put it out under their own name, let them hire in-house researchers if they like, but don’t pretend it’s independent scholarly research with the credibility of an academic institution behind it.

Posted in Academia | 2 Comments

How to Read a Curriculum

We already know that higher education applicants and their families have some difficulty decoding the information available about colleges and universities to find the institutions that they want to apply to. That’s the major reason that higher tuition prices in the past have sometimes produced a more selective applicant pool. If I’m trying to hold a dinner party made from the very best ingredients, but I’m making something I’ve never made before, I may choose the highest-priced ingredients, figuring that cost is a signal of quality. If I have experience with those ingredients, I may know better.

This is why college rankings flourish, even though most higher education insiders know that they’re highly manipulable, and in any event, don’t really tell most applicants the more fine-grained things that they’d really like to know. Various descriptive “insider’s guides” do a better job at describing the subjective feel of an institutional culture for an applicant who wants to know what day-to-day life is like, but even those are often based on whimsical stereotypes of a given campus culture.

I sometimes join a faculty panel to talk to prospective Swarthmore applicants, and one of the first things that I say is that a college applicant and family can only have strong control over a few really basic dimensions of the choice in front of them. You can control the cost of tuition and board by choosing between public and private, near your family or far-away. You can choose between large and small. You can choose between institutions with unusual curricular designs (St. John’s, Hampshire, Bob Jones, the U.S. Military Academy) or institutions that are more or less variations on a common approach. An applicant and family can make some rough judgments about selectivity, quality, and resources using rankings systems. An applicant can decide if there’s a region or area of the country they really like or dislike.

Beyond that, if (for example) an applicant had decided that they wanted small colleges in the Northeast with a fairly standard curricular philosophy near the top of the selectivity hierarchy, there’s a good argument that they should just write out all the names on slips of paper, put them in a Hogwarts sorting hat, and choose six to apply to. The features that will really change your life or matter to you once those major decisions are made are almost impossible to predict: the friends you’ll make or lose, the people you’ll love or break up with, the professors you’ll connect to or be frustrated by, the courses that will excite or bore you, the majors that will grab or repel you, the professional connections you’ll make or wish you had, the institutional culture that will satisfy or disgust you.

It’s very hard to do more, much as it is hard for a first-time home buyer to really understand and examine the things that will make the key differences in their life with a comparative understanding of all the possible choices. I don’t think I could actually have ever seen or accurately weighed the stuff that’s been good and bad about the home I bought five years ago even if I’d had the best counsel in the world. I don’t know even now if the other houses I looked at had better-functioning windows or well-installed doorknobs or fewer horrifically stupid DIY jobs lurking under the surface. I don’t know if I would have had good neighbors at other houses like I do where we are now, or if my tomatoes would have grown well in other gardens.

It’s the same with higher education.

——–

Still, applicants and families understandably want to be diligent. One area where they often kick the tires a lot is the curriculum of possible institutions. There aren’t so many services and rankings that really look closely at curricula. You could turn to something as goofy as the current ACTA rankings, but if you’ve got that much of an obsession with a rigid vision of certain highly particular ideas about “traditional” general education, don’t worry about any of the other kinds of issues involved in choosing a college or university. (For example, if you follow that report, only one highly particular approach to teaching writing is legitimate, and only if it’s a general education requirement: everything else is an illegitimate failure within that ranking system, with nothing further to debate or discuss. If that evaluation satisfies you, you have enough fixed ideas about higher education already to drive any decision you might need to make.)

If you want a more interesting, open conversation about how to interpret what a curriculum has to offer a prospective student, this conversation at EphBlog is kind of interesting if you go down to comment #11 from Williams professor James McAllister. (The initial post is a pretty conventional bit of concern trolling of a kind I’ve talked about a lot at this blog, and it gets pretty well debunked by McAllister by the time comment #11 rolls around.)

So leaving aside the hand-waving about whither history departments and all that, let’s just say that you’re a prospective undergraduate who wants to study one subject more than any other. McAllister is talking about the study of U.S. foreign relations and comparing Williams to Amherst and Swarthmore, with an argument that Williams is the best place to be of the three for a student who has that subject as a primary interest.

I don’t really want to get too deep into the highly granular kinds of comparisons involved in that judgment, though I think McAllister’s assessment of relative strengths seems basically right to me. Instead, here’s how I think a prospective who self-identifies as highly interested in one topic or subject ought to work through the questions involved.

First, are you sure that you’re really that interested in a single topic or issue, so sure that you want to make that a primary axis of your decision about where to go to college? Why are you that sure? Do you just like the topic or are you thinking already of a profession narrowly based on it? Are you sure based on an understanding of what a likely undergraduate-level curriculum around that topic looks like, or based on what you know about it from your high school experience? Are you making that choice with a wider awareness of the subjects that even a small college will offer to you that virtually no high school curriculum can focus on?

Second, are you SURE? Really? Then you’re a really unusual applicant. Most of what prospectives think they’re interested in is not the same as what those subjects turn out to be, and most of their interests are based on a very incomplete understanding of the range of academic subjects even within a particular discipline.

Third, if you’re really that kind of unusual person, absolutely certain that your first, second and last priority is to comprehensively study a single subject area while you’re an undergraduate and that this priority is unlikely to change, then: a) don’t apply to any small undergraduate institution; b) pick a place with as few general education requirements as possible; c) find a program in your preferred subject at a large institution that is stuffed to the gills with faculty and courses and make sure undergraduates with a dedicated interest get access to the most prestigious or high-powered faculty in your area of subject interest. The relative difference between one small college and the next doesn’t really matter to you if you’re that driven, because in either case, they’re going to have a relative paucity of resources in comparison to a large institution. You don’t really care about any of the other resources at an institution if you’re that focused: just your area of study and whatever direct supporting skill areas you need (say, language or quantitative training). An undergraduate applicant who is this specifically focused is really more like a proto-graduate student, and should use selection rules much closer to what a graduate student might employ.

————-

Let’s assume you’re not so focused, just that you’d like to know whether certain areas of potential interest are well-represented in a given catalog. How can you tell? Some rough rules of thumb if you’re reading a catalog for this kind of information.

1) Look at four or five-year course cycles if you really want to know how many courses there are in a particular area. A single year can be highly misleading because of faculty leaves, short-term fluctuations in offerings, and so on.
2) Look over a range of possible departments: many subject areas are taught in multiple disciplines.
3) Look carefully at course descriptions. You may have one idea of what a subject area is about, and not recognize a course title as associated with that subject area. A description may give you a better idea. But be aware that there may be courses on your subject area that you don’t recognize as such at all because you don’t yet really know that subject area as it is commonly taught to undergraduates. Or a subject area you recognize may lead to a subject area you don’t recognize in the course of natural progressions of study. Be prepared to be surprised.
4) Look at the names of faculty who seem to teach a course in the subject areas that interest you. Search out the rest of their offerings and see whether this is a subject area that they commonly teach in, or whether it’s just an occasional or one-off course. I’ve taught courses on new media and interactive media, but I can’t regularly fit such courses into my schedule. I’ve taught courses that interested me on occasion but that I decided for various reasons not to repeat, or I’ll suddenly decide to retire a perennial offering and relaunch it with a new coat of paint on top.
5) Also look to see if the faculty teaching in a given area are tenure-track faculty or visiting faculty. The content offered by visiting faculty shifts a great deal at both small and large institutions. Count how many faculty are teaching in that area, of both types.
6) Read the fine print about who can get into the courses that interest you, and how often they’re taught. There’s a big difference in availability between a course taught every year that’s open to max enrollments and a course taught once every four years that’s only open to 12 hand-picked students.
7) Watch out for “paper programs”: a lot of institutions create named programs of study that are more or less ad hoc collections of courses being taught in service to other disciplines or degrees. There’s a difference between programs that have dedicated faculty lines and resources and those that express a general interest by faculty who are primarily associated with other departments. Look to see where faculty positions are actually invested. (Usually this is clear if you look at a program or department’s listing of faculty: there will be a distinction between tenure-lines in that department and affiliated or associated faculty.)

If you do a bit of that, you may get a rough sense of where a given institution has invested more or less curricular energy. Even that isn’t necessarily a good guide to the viability of study in a given area: one professor that really satisfies your interest in a subject is worth more than four who don’t. Whatever you do, though, don’t put too much weight on these kinds of judgments. You or your family may have one idea about what you need to study, but in a great many cases, you’re likely to be wrong in a variety of ways.

Posted in Academia, Swarthmore | 15 Comments

The Right Firewood

Back from an extended camping trip where I was blissfully out of range of digital technology for most of the time. (Though I weep now looking at my email inbox.)

[Photo: tidepool hike]

A good time was had by all.

One interesting experience I had involved our campfires. I didn’t bring a camp stove, preferring to cook off of the fires instead. I would have told you that I was a pretty good hand at starting up a fire under a variety of circumstances, but for the first two days, I was going nuts.

I bought what looked like okay campwood from outside the National Park-system campground we were at. Some small pieces suitable for kindling, some bigger logs. I cut into one log to check and it seemed dry throughout. The grain looked a bit odd, perhaps, and I couldn’t really tell what the source tree of the wood was. Conditions were slightly damp at our wooded site but nothing out of the ordinary. And yet, keeping a fire going was close to impossible: the logs would blacken, a few hot coals would form, and then the whole thing would fizzle. I got wood from the next household down on the main road: same issue.

I set up the next fire differently and bought a little bundle of softwood kindling: same results. I was at that point where you conclude that you’re the problem, that you don’t know what you think you know, that you’ve got to go back and get genuinely educated.

And then the folks at the neighboring campsite, who had great fires each night, packed up and donated their wood to us, perhaps out of pity. It was a mix of beautifully seasoned birch and Norway pine firewood with some birch bark for kindling. And it went up like the Human Torch: a single match lighting the kindling, with a normal stacking on my part. It burnt beautifully if somewhat quickly, providing perfect coals for cooking as well as for gathering around after nightfall.

This experience is a classic kind of problem in education (and lots of other repeated practices like competitive sports). Students (and faculty) sometimes hit a patch in the process of doing something familiar where nothing works the way that it should, but for some reason it’s hard to tell whether there’s a problem with the tools or the context. So you start to doubt yourself, and maybe you make significant changes in the way you do your work, whereupon things get even worse, and the situation spirals quickly to frustration. You don’t want to get into the habit of blaming your tools or the situation, because sometimes you really do need to adjust your own practices. But if something’s worked well before, the Occam’s Razor answer would suggest a closer look at your tools and your context first before you start fiddling with your own basic strategies or assuming that you really don’t know what you thought you knew.

Posted in Academia, Domestic Life | 1 Comment

Putting Syllabi Online

I kept meaning to get around to this last week, but I was snowed under with two other things that needed to get done before the academic year starts up again. Adam Kotsko asks an eminently sensible question: why don’t faculty routinely put draft syllabi online, looking for comments as they shape the syllabus? (He distinguishes this from putting drafts of scholarly work online, and I completely agree with him on this point, as I don’t think that’s usually a good idea.)

I’ve written some about this question before. Since I often put up both drafts of syllabi and completed syllabi for comments, I obviously think it’s a good practice. It’s been nothing but beneficial for me: I’ve gotten great suggestions, interesting critiques, a good feeling for how the syllabus plays with different intellectual communities. So why wouldn’t everyone do this? In fact, why shouldn’t everyone more or less be officially pushed to do it by colleagues or administrations? It’s not just a good thing for the person posting the syllabus, but for students who want an early view of what a course might entail and for larger publics who would like to get a sense of how much work and thought goes into an average course design. Since one of the handicaps academics have in the public sphere at the moment is that there are a number of people who think the work of college teaching consists of walking into a room, letting knowledge spill out of your head, and leaving, it might help if we gave a demonstration of what’s actually involved.

I thought maybe I’d try to collect all of the reasons I can think of that faculty don’t do this, some of which I’ve talked about before in this space.

1) Unfamiliarity with or serious objection to the technologies or media involved in posting syllabi online.

There’s not much you can do if this is the reason. If someone doesn’t want to know how to do it, or believes that they have a principled objection to everything online, that’s it. That kind of blanket rejection strikes me as bordering on professional irresponsibility for an academic at this point, but ok.

2) Anxiety about specific kinds of public or political hostility directed narrowly at a particular field or subject preventing useful feedback or requiring too much attention from the poster.

There are a few cases where I’m inclined to grant the legitimacy of this position. If you’re a historian who teaches about the Israel-Palestine conflict, you may honestly just want to opt out of any situation where you summon down the buzz of angry partisans on both sides into a comment thread, and that could happen even if you are working on a syllabus that aims for the blandest, most even-handed presentation of that subject. But most faculty vastly overestimate the extent to which their specific choices on a syllabus are likely to provoke specific negative reactions that are in some sense knowledgeable, even if angry or unreasonable. There are plenty of totally ignorant kinds of vaguely political responses to academic courses, on the other hand, but you’re exposed to loony Horowitzian hacks who just skim course titles looking for targets whether or not you post your syllabi online. In my case, in fact, having a syllabus online helped underscore the stupidity of a lot of that skimming.

3) Anxiety from junior faculty about enabling the hostile scrutiny of senior faculty by giving them detailed looks at a process of course design, and especially anxiety about exposing an early draft for fear that it will make the designer look ignorant of key texts.

Junior faculty have every reason to worry that information can be twisted or misused by a hostile senior colleague. But look at it this way: when you’re looking for an academic position, you’re going to be sharing some draft syllabi (you should, at least) and you’re going to have to talk some about the way you look at your courses. Once you have a post, senior colleagues are going to be able to get information about what you’re teaching if they want to get it. Privacy can’t protect you from someone who is determined to make your choices in teaching into an issue. Transparency can’t hurt you, but it might help you. One of the things that a bad senior colleague who knows a little about your field can try to do when your dossier is being examined is to misrepresent the canonical content of your field to others. An active commentariat who responds positively to your choices in online posting is a shield against that misrepresentation. Give your friends and supporters as many tools as you can to help if you think there are any problems.

4) One of the two key reasons why most faculty don’t post and don’t want to post. A lot of faculty suffer from some degree of the “imposter syndrome”, maybe to an increasing degree as time goes on. They feel they don’t really know their fields or subjects as well as they ought to, or imagine that everyone else in the field knows more. They worry that people in more high-powered institutions, or people with the top graduate students, know the true or proper canons. So many faculty don’t want to post syllabi because they’re afraid it will expose some lack or absence in their knowledge.

I get this, I really do. I feel it myself some days, and all the more with each passing year. I’ve been sprucing up my upper-level seminar in African history and feeling inadequate in the face of the piles and piles of published work that has appeared in just the past four years. I keep turning up a new book or article that I feel I should have read. Putting a syllabus up seems like painting a bull’s eye on yourself. But some things to consider: 1) almost everyone else you know in your field feels that way, too, and the few who don’t, who are certain that they know everything in the field, are generally jerks; 2) you know a lot more than you think you know; 3) putting up syllabi is the solution to the problem, not a further aggravation of it; and 4) someone who really does think you’re ridiculous for missing something isn’t going to say it to you in comments and already thinks that about you anyway whether or not they see your syllabus (see: “are generally jerks”).

5) The other key reason. Many specialized fields are a closed shop: they maintain a sense of expertise through limiting their circulation and exposure to other fields of specialized knowledge, other disciplines and wider publics. Posting syllabi online (or anywhere) interrupts the circuits through which these kinds of fields maintain their sense of authority.

This dovetails into some wider domains, such as the indifference of many practicing scholars to open-access publishing or the hauteur of some academics when they find themselves accidentally caught up in a public debate or controversy. I don’t think a lot of folks are overt in their belief that their syllabi should derive from a closed conversation, but it’s not hard to find a lot of cases where this is more or less driving the reluctance to expose the content of teaching to colleagues, let alone a wider public. You don’t have to push very far to find plenty of professors who think that there is almost no one qualified to judge the adequacy of their choices of material in a course, and that there is no public defense of those choices which could be comprehensible outside of a narrowly specialized discourse. I think this is really where the battleground forms, not just about syllabi but about the basic architecture of knowledge in the 21st Century. I think if we can’t make a strong case for the value of expertise in relation to the opening up of knowledge through online media, we’re screwed. So putting your syllabi up online is a good way to learn how to make that case, to demonstrate that there is some kind of authority that goes beyond Wikipedia and Google. Refusing to do so is a good way to accelerate the obsolescence of scholarship.

Posted in Academia | 14 Comments

Arresting Power

Some years ago, I got mugged late at night at an urban train station. I was pretty stupid: I had a friend drop me off at a nearly deserted train station, on its most deserted side. Two guys started towards me the moment they saw me, and I remember thinking very clearly that they were going to rob me. I should have turned and headed the other way fast, but I was paralyzed. So I walked toward them, they whipped out a knife, poked me in the stomach, and demanded my wallet. I took it out, they took out the money and handed it back. Though I was basically feeling like it was an out-of-body experience, I asked if I could keep a buck for a phone call. They laughed and gave me a dollar.

I got a good look at both of them, but a second after they ran off, I literally could not see either one in my mind, at all. Not their faces, not their clothes. I could tell you they were young and they were black and that’s it. I had no image of them at all. I could remember very clearly the knife poking in my stomach, a small dot of blood under its point. I could see my wallet, and the dollar I got back. I could remember the ground, the lights on the ceiling, the fare machine with its specific blemishes. Not them.

So I went out the other side of the station and there was a transit cop there. I told him I’d just been mugged. He took off and called for backup, telling me to stay put. About fifteen minutes later, he came to get me. The police had apprehended two guys and wanted me to ID them. They were the right age and build, and they’d been running from the cops through nearby backyards and over fences and so on, from what I could hear. I looked at them and looked at them and tried to tell myself that these had to be the guys. But I couldn’t do it, because I wasn’t sure. I told the cop that I couldn’t be sure, I couldn’t remember the faces or the clothes. He looked at me, plainly annoyed: they’d obviously had quite a time chasing these guys down. “You sure?” he said. “These guys were running from the back of the parking lot at the other station the moment we got over there.” I hesitated. I wanted the guys caught, and I didn’t want to put out the cop for nothing. But I couldn’t do it: I knew I couldn’t swear that I was right, and I wanted to be able to swear. So they let them go.

———–

I’m surprised that the arrest of Henry Louis Gates Jr. has had the legs that it’s had as a national news story. I actually think that the conversation about the case in a lot of places has been sophisticated and complex, talking not just about race but about social class and about policing and authority.

Broadly speaking, I come down where quite a few people have, which is to say that of course this was about race: the person who made the call probably wouldn’t have called for two white men doing the same, and probably would have known who a white man on the street was anyway. Or she would have asked someone else in the neighborhood: recently a neighbor who lives several blocks away stopped at our door to ask about something odd she’d seen two doors up, just to make sure that we thought it was ok. (It was ok, we recognized the car and the people she was seeing.) The cop probably would have accepted much more quickly that he was dealing with the actual homeowner and backed off. On the flip side, Gates’ anger at the cop’s presence would have been much more humdrum if he hadn’t felt like a black man who was being harassed in his own home.

But like many other observers, I think this is as much about policing and authority as it is about race. I’m not the only one to have seen a connection between the Gates arrest and the case of Philadelphia police officer Alberto Lopez Senior last week. Though the cases are very different in scope and scale, they underscore the extent to which police power in many instances is arbitrary and the extent to which police unions with tremendous political influence will successfully shield their own employees from oversight.

This remarkable thread at Crooked Timber built around the comments of NYPD police captain Brandon de Pozo on the case triangulates really well on the problem of police authority and police discretion. De Pozo’s argument is that the officer arresting Gates made a mistake, but an understandable one, that police need to have good judgment but that the public also needs to show respect for their authority. I’m ok with that thought only if I think that when a cop shows bad judgment about making an arrest, there will be consequences and there will be public clarity about there having been bad judgment. Maybe minor discipline for the officer in the Gates arrest, but in the case of Lopez in Philadelphia, that should have been the fastest firing on record. He not only attacked and arrested someone without provocation, but tried to tamper with evidence and obstruct justice. Instead, he’s back on the job.

The reason I think that even the Gates arrest is a serious case of bad judgment goes back to the story I opened my post with. See, for me, an arrest is a serious, serious thing. The power to make an arrest is the singular place where a free society lives and dies. When a cop gets someone to come out of their own home in order to arrest them for disorderly conduct, with the apparent motive to avenge insults and get even, that’s serious business, even if the cop and the justice system know it’s a nuisance charge that’s going to be dismissed. The New York Times survey of police on the question of disorderly conduct makes it pretty clear that a lot of cops feel pretty free to make an arrest whenever they feel annoyed by a member of the public.

I get it: police work is hard, and in many respects unrewarding. I get also that it’s important and that I rely on it. I respect the men and women who do it well. I don’t think it’s right to yell at police or be an asshole. But saying after the fact that Gates was an asshole is one kind of judgment about the civic conduct of another citizen of this country. Arresting someone, charging them with a crime, depriving them even briefly of their liberty, is another thing altogether.

I couldn’t say yes to identifying those guys that night because I didn’t know for sure that they were the ones, even with a lot of circumstantial evidence that they might be. I couldn’t say yes because to arrest and charge ought to be something that has an almost sacred weight. I couldn’t be a party to it unless I was prepared to swear to its justice and necessity.

That’s what worries me most around this incident and similar incidents: that some people approve of the use of police power as routine, as tactical, to make a point. De Pozo in the Crooked Timber thread suggests that police ought to enforce social norms. He’s clear that he means just through their presence and their persuasive words, but I think maybe other police aren’t so clear. The power to arrest mustn’t be used just to tell an asshole he’s being an asshole, or to dictate the proper attitude. The misuse of the power to arrest ought to be seen as an extraordinary violation, a matter of the utmost gravity.

Posted in Politics | 4 Comments

Red Herrings Overboard

Many of the criticisms directed at information technology in the classroom get hung up on a misattribution issue. Eric Rauchway makes this point very effectively: the problem with bad PowerPoint presentations is often not the software, but the presenter.

The professors who get up and drone their way through slides would get up and drone their way through written notes if you took away the technology. There’s some truth to the point raised by Kid Bitzer in the comments to the Rauchway thread, that PowerPoint exacerbates or aggravates some of the underlying issues that a mediocre or poor lecturer carries into the classroom. Still, dealing with the technology is just a case of treating a symptom, not the disease.

I’ll also leave aside the other contributing problems that make bad lectures even worse than they have to be, such as classrooms with two or three hundred people in them.

I think there are a few really basic things that professors who make significant use of lectures in their teaching can do to improve them, whether or not they use any kind of presentation technology.

1) A lecture is not the right vehicle for problematizing a subject, exploring the finer points of scholarly debate on a particular issue, or indulging in the kinds of qualifiers and asides that are a part of scholarly writing in many fields. There might be a moment in a lecture where a student’s question or comment catalyzes a useful digression along these lines. You might build a lecture around describing two major competing schools of thought. But a lecture is a compressed format that requires clear, declarative statements. I know this was the first lesson I struggled to learn coming out of graduate school, because I felt like boiling issues down to some core principles was committing some kind of intellectual sin. Discussion is a good format for muddying the waters, lecture is not.

2) Don’t use a lecture to repeat or duplicate an assigned reading. This is a mistake in several respects. It wastes the time of those who did the reading, it incentivizes others to skip the reading, and it raises the question of why the classroom exists in the first place. A lecture that addresses a reading should be a dynamic response to that reading: putting it in perspective, going beyond its terms, comparing it to material that was not read. Lectures should be used more often to explain and explore material for which there is no useful single reading. Readings and lectures have to be complementary, not redundant.

3) A lecture needs to have a theme, a central idea, and you need to come back to it repeatedly. You can do that elegantly through what pedagogical specialists call “spiraling”, where each return to the theme is slightly different, or builds up. But a lecture that’s just a grab-bag of all the stuff you can think of about a topic makes for a lousy performance and makes for lousy pedagogy. A clear theme, repeated elegantly, is more engaging to listen to and it’s far easier to retain some useful knowledge from listening.

4) Performance counts. Maybe that comes down to vivid language, maybe it comes down to a great anecdote, maybe it involves humor, maybe it’s about great use of props or technology. Don’t be ashamed of a bit of schtick. At the very least, you’ve got to be excited and engaged by the material you’re covering. If you’re bored by it, or sound bored by it, it’s inevitable that the students will be as well. Performance is generally not spontaneous, either: it should be as much a part of preparing a lecture as anything else.

5) Personalization is important. This is one reason I find Horowitzian demands for a completely neutral, affectless classroom so completely clueless. I’m not saying that a good lecturer gets up and rants at students about his pet political beliefs, but if you’re not putting your own distinctive views of the material out there, you’re not giving a good lecture. If you’re just reading off a generic, lowest-common-denominator description of knowledge on a particular subject, you might as well just distribute your outline to the students and save them the trouble of coming to class to hear you read that aloud. If there isn’t some benefit to hearing you, a particular individual scholar, interpret the subject matter, then there isn’t much point to having a classroom in the first place.

6) A good lecturer has got to learn to pay attention to body language in the audience without being over-attentive to it. At first I can remember being incredibly anxious at the least sign of a student appearing bored. Over time what you learn is that there are some people who appear bored when they’re sitting and listening even if they’re actually highly engaged, and there are people who look perky and attentive who are actually busy daydreaming about going to the beach. After a couple of weeks with a given class, you should learn to watch the students who have responsive body language, who can tell you something about whether the lecture is going the right way or not. Use those students as your “standard candle” to decide whether or not to stop or change course in a lecture. If you’re not prepared to try some other approach to a topic or to take another tack altogether, once again, you’re undercutting the whole point of having a real physical classroom with you, a real physical person, in front of a real physical audience.

Posted in Academia | 7 Comments

A Tale of Two Game Movies

I’m pretty surprised that Sam Raimi has agreed to make a film based on World of Warcraft. I still enjoy World of Warcraft as well as find it intellectually interesting, but the idea that its mashed-up, derivative, internally contradictory, heavily baroque game fiction could serve as a platform for an interesting film strikes me as unlikely. On the other hand, I like a lot of Raimi’s films, and he’s got a good sense of how to compress baroque pop culture properties into punchy narratives. So maybe he sees something I don’t in the treatment he’s looking at: maybe some Xena-like fantasy cheese or maybe some metatextual thing that plays with the idea of Warcraft-as-game. I can’t imagine a straight-up epic treatment like Jackson’s Lord of the Rings films would be anything but a Uwe Bollesque stinkfest.

On the other hand, a World of Warcraft-based film makes a ton more sense than a film based on the game Asteroids. The announcement of that signing deal, which apparently followed on a four-studio bidding war, raised a lot of eyebrows among pop culture observers.

As it should: this is one of those stories where the surface p.r. explanations just don’t cut it. Let’s say you’re a mid-level studio executive at Universal and you say to yourself, “I bet we could make a totally cool movie about a lone spaceship doing some asteroid mining”. Only the most feral, predatory intellectual property lawyer is going to tell you to pay off the people holding the rights to the video game Asteroids if you want to make that movie.

You could even say to yourself, “I bet we could make a totally cool movie about how kids playing videogames here on Earth are actually controlling spaceships that are doing asteroid mining and other jobs.” You might want to lawyer up about infringing on The Last Starfighter and Ender’s Game, I suppose, but not Asteroids.

So what’s going on here? I think again this is something less about business and profit and more about organizational sociology of contemporary cultural, economic and civic institutions. Most of them tend to have a big, amorphous layer of middle managers who make all the serious concrete decisions about resource allocation. All of those actors have strong incentives to claim sole credit for successful resource allocations and to obscure their involvement in unsuccessful ones. All of those actors need to provide a constantly renewed account of their own accelerating productivity: it’s never enough to be maintaining or supervising existing activities. And in a lot of these institutions, middling figures frequently arrange (implicitly or explicitly) to collaborate with a counterpart at another institution to mutually enhance their prospects along these lines, to manage their institutional capital and engage in quid-pro-quo dealings that make the dealers appear productive.

Hence in many cases an interest in paying out money for intellectual properties that are completely unnecessary to making a new cultural work. If you buy my mothballed intellectual property out of the attic of my megacorporation today, I’ll buy yours tomorrow, old chap. If you pay off the lawyer-troll under the bridge today in order to clip-clop across, then we’ll pay off yours too. Licensed properties are also a great alibi for failures (the source property is the problem! the adaptation is the problem!) and a great way for a studio executive to claim a successful adaptation (it’s not the film itself, it’s that I recognized the value of the property!).

In a lot of institutions, those middle-rank incentives drive some actions that people accountable for the total institution find frustrating or perverse, and end up constraining the generative actions of people who actually have to enact what the middle layer decides upon. Not to mention that the hidden incentives that drive institutional action sometimes produce results that outsiders find completely laughable or baffling, like a film based on the game Asteroids.

Posted in Games and Gaming, Popular Culture | 7 Comments

Mine!

I didn’t catch the complaint of some philosophers against a small NEH course-development program called “Enduring Questions” the first time around, but picked up on it via Savage Minds.

The NEH program offers a small amount of financial support for faculty developing “predisciplinary” courses that deal with questions like, “What is the good life?”, “What are good and evil?”, “Is there a human nature?” and so on.

The term predisciplinary is evidently the first provocation to philosophers critical of the program, as they (I think accurately) read it to say that courses taught by philosophers in philosophy departments which are part of the standard curriculum of disciplinary philosophy are not really what the grant aims to support. There’s nothing excluding philosophers from participation, but the clear implication is that to qualify for the grant, they’d need to teach the normal substance of their discipline in some kind of extended or extradisciplinary framework. Understandably, this is a puzzling request from their perspective, since they’re quite right that they really do address these questions day-in and day-out in their existing courses.

I’m sympathetic to the initial reaction of irritation. If the NEH set up a course development grant called “Time and the Past” aimed at supporting interdisciplinary courses that examined change over time but framed the grant so that ordinary history courses didn’t qualify, my first impulse would be to object. Why exclude the discipline that makes that question its central concern?

But hold on a moment. What might a grant solicitation written that way incentivize? Maybe attention to how thinking about change over time is a real problem for some disciplines: some forms of economics, for example. Maybe a course (by philosophers, even!) on whether history matters or is knowable, which history departments don’t tend to offer. Maybe a course in a natural science that asks how and when old science matters to contemporary science. The more I think about it, the more I can think of really interesting courses that respond to the prompt and aren’t likely to be taught by most historians. I can think of courses which might respond to the prompt that could be taught by a historian, but they’d be good additions to a curriculum in other departments as well.

The more I think about it, the more I also recognize that historicism, or the study of the past, plays roles in other disciplines, sometimes roles that intermingle with the way historians work and sometimes not. What skin off my nose is it if a literary critic is also an intellectual or cultural historian? That’s a good thing, not a bad thing. I might have some friendly nudging about methodology to make to a non-historian doing historical work, but no more.

Broadly speaking, I think anyone can study history, and that the methodology and theory of historical study are not highly technical. Academic historians certainly benefit from having studied history, e.g., from knowledge of particular places and time periods, knowledge of historiography, and actual experience of historical research. It does not require a lengthy apprenticeship to begin to think usefully about the past, and scholarly works of history can and should be read and savored and made use of by broad audiences with no special training.

The philosophers who object to the NEH grant largely argue that philosophy is a highly technical discipline with a history of progressively greater and more precise knowledge about its subject which cannot be understood without dedicated training. They also argue that the “enduring questions” the NEH proposes are by nature philosophical in this sense. They perceive no need to incentivize courses in “enduring questions” because they believe such courses already exist and are called “a philosophy major”.

Part of this reaction is just the enduring struggle within and around disciplinary philosophy over Rorty-like complaints that the discipline is too insular and technical. My long-term arguments against fastidious forms of disciplinarity, at least at small colleges, apply just as surely to philosophy as they do to my own discipline and most others. All disciplines, even the natural sciences, need some capacity to translate and disseminate their particular forms of knowledge outside of their discipline, without insisting that an outsider undergo some prior training in that discipline. However, I do think there’s something especially wrong-headed with saying, “Well, we have a pretty precise, well-resolved technical answer to the question, ‘What is the good life?’, but if you’re not trained in philosophy, I can’t explain it to you.” The question “What is the good life?” is really not the same thing as a difficult question in mathematics.

I also wonder, however, if the philosophical critics believe that there is no intellectually useful or legitimate response to many of the “enduring questions” outside of academic philosophy. For example, surely literature or art poses those questions and sometimes struggles to answer them in a manner quite distinctive from philosophy. This is not to say that philosophy, like literary criticism, cannot try to encompass or understand the way in which art or literature ask those questions: philosophers have done wonderful work along those lines. But interpreting how art asks enduring questions is not the same as the first-order posing of those questions by artists within their art.

Similarly, the empirical study of how human beings actually lived in past societies or presently live within contemporary societies tends intrinsically to generate some of those enduring questions. A comparative historian with an interest in political systems may not set out to answer the question, “What’s a good government?” as a political philosopher might, but I’d be surprised if by the end such a historian wouldn’t have some pretty interesting and wholly intellectual answers to that question which come from an accumulation of empirical cases.

If the NEH grant ended up drawing out some of those different ways of coming at big, deeply human questions and forging them into provocative courses, that’s a great outcome. Humanistic inquiry has a pressing obligation to legibility with wider publics, a need to pose its distinctive questions in broadly relevant terms. I can’t see why the federal government (or anyone, really) ought to subsidize humanistic study which is in some enduring or permanent fashion conceived as incommensurable with general intellectual discourse and incommunicable to anyone lacking highly specific training.

Don’t get me wrong: any study of “enduring questions” needs to enshrine philosophy, to acknowledge its central importance, both in its broadest and most specifically disciplinary forms. Anyone aspiring to teach to those questions who recognizes that importance and yet is not a philosopher is well-advised to seek the counsel of philosophers with some degree of humility. This is where I understand and endorse with reservations some of the irritation expressed by philosophers at this grant program, insofar as it can be taken to endorse the idea that “enduring questions” can be asked wholly innocent of the long history of philosophers asking them.

Humility ought to work both ways, though: contemporary scholarly philosophy doesn’t own a comprehensive patent on those questions.

Posted in Academia | 9 Comments