Big Pharma, Big Wonkery

In the current issue, Discover has an interesting article on pharmaceutical testing. On one level, it’s consistent with many other critiques of the pharmaceutical industry and of academic and medical researchers who do its bidding.

However, the article also raises a question about how literate even medical experts and researchers are, let alone the press and public, when it comes to reading or judging published research. The article details a number of the ways that pharmaceutical testing and research are often much less than they appear to be: small study populations that conceal the potential magnitude and frequency of dangerous or fatal side effects, benefits that are marginal or at the edge of statistical significance but are implied to be far greater in size, reviews of existing research that are cursory or cherry-picked for supporting data, and so on.

What I find interesting is that I think a significant proportion of all quantitatively based social science research also has these characteristics. These kinds of practices are far more consequential and dangerous when they involve medical issues, but more than a few social scientists know how to use sleight-of-hand to get media attention, argue urgently for a new policy or piece of legislation, or redirect institutional efforts. More importantly, many researchers do so without any intent to defraud or consciously manipulate their results. It’s simply the standard for professional work: any pattern or finding that clears the bar of statistical significance quickly comes to be treated as urgently important. This is precisely what Deirdre McCloskey called the “secret sin” of economics, but it afflicts more than just economics as a discipline: social psychology, political science, sociology, sociolinguistics, population science, any quantitative work that deals with human society and human individuals tends to have the same problem.
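
To make that point concrete, here’s a toy illustration of my own (nothing from the Discover article or from McCloskey, just the standard arithmetic of significance testing): with a big enough sample, an effect far too small to matter to anyone will still sail past the conventional p < 0.05 threshold.

```python
# A toy simulation (my own sketch, not from the article): a negligible effect
# of 0.01 standard deviations is still "statistically significant" at scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500_000  # per group, the kind of n you get from registries or big surveys

control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.01, scale=1.0, size=n)  # trivially small true effect

t_stat, p_value = stats.ttest_ind(treated, control)
cohens_d = (treated.mean() - control.mean()) / np.sqrt(
    (treated.var(ddof=1) + control.var(ddof=1)) / 2
)

print(f"p = {p_value:.2e}")           # almost certainly well below 0.05
print(f"Cohen's d = {cohens_d:.3f}")  # about 0.01: practically meaningless
```

The p-value only tells you the pattern probably isn’t pure noise; it says nothing about whether the effect is large enough to justify a headline, a policy or a prescription, which is exactly the conflation at issue here.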

My own understanding of this pattern came through reading a lot of the work that has been done on the effects of mass media, most particularly on the effects of violent images and representations on children. Many researchers active in relevant fields of study will tell you that the negative effects of those images have been demonstrated beyond a shadow of a doubt, that there is overwhelming scientific consensus. Look again and you’ll find a far more complicated picture. Antiquated studies with transparently bad research design that were conducted fifty years ago but are still cited in literature reviews as supporting evidence. (Thanks to several readers here who recommended I look at Richard Hamilton’s The Social Misconstruction of Reality on this point in particular: it’s a good description of how this comes to pass.) Literature reviews that blithely cherry-pick and ignore any studies which contradict their stated premise. Laboratory studies and controlled experiments that are simply assumed to predict and describe large-scale social patterns without any discussion of whether they actually do or not. Correlations confused happily with causations. And most importantly, teeny-tiny effect sizes magnified to monumental proportions.

Again, this is mostly not done with conscious intent. It’s how professionals work, it’s how careers are advanced, it’s how scholars avoid slumping into a grey morass where all causations, effects and patterns are indistinguishable from one another, and nothing can be said about anything. Quantitative or qualitative, whatever the methodology, all scholarly work on human societies has to reduce and manage tremendous complexity. Any contemplated action or intervention into human problems requires simplifications.

Whether we’re talking pharmaceutical studies or social policy, however, we need a professional process that pushes back on this tendency. Peer review in its conventional form simply isn’t good enough, for a variety of reasons. Instead what we need is a circuit-breaker of some kind between research and action, representation and intervention. Statistical significance might be adequate for reporting a research finding, but some other test of significance entirely should apply when you want to counsel action, recommend policy, implement concrete changes or disseminate new treatments. At the same time, we need to stop making the delivery of policy or treatment or intervention the gold standard for “research which matters” within academic institutions, and to enshrine a new respect for negative or neutral results.

Posted in Academia | 11 Comments

The People Are The Enemy

There’s not a lot to say about Zimbabwe that I have not already said. Things are bad, they don’t look to get better, they have the potential to get even worse, hard as that is to imagine. It’s not about Mugabe the person as much as the military, police and top leadership of ZANU-PF. The named political institutions visible on the surface of the Zimbabwean government are now completely hollowed out by the steady violence of the party elite and military-police leadership against any civic institutions, against anyone who actually tries to exercise meaningfully constructive administrative power, against anything but their own power. Under the circumstances, I think Morgan Tsvangirai had no choice but to withdraw from the run-off, though he and his party also never seem to me to have anticipated or thought through what they were doing.

There remains little that most outside interests can do. Even most sanctions don’t strike me as likely to be effective. I had to really stifle a thunderbolt of rage at one posting on a scholarly listserv that I read, in which one scholar proffered the argument that although Mugabe is a tyrant, it’s really the fault of the United States and Great Britain, and that the real political challenge is to keep them from interfering. That’s a tragic case of stupid addiction to old dogma, dogma that was analytically wrong-headed in the first place. If I could think of a way for the US and UK to usefully interfere beyond what they’re doing already, I’d encourage them to do it. Western intellectuals and scholars concerned with Africa often still treat sovereignty as an obsessive and magical political objective, as if its mere fact ensures a better world.

Or, more dubiously, they treat some African states today as if they have yet to achieve sovereignty. I think it’s perfectly fair to say that there are postcolonial states in Africa that have never had a functioning government, nor ever achieved any kind of central control over the territory marked for them on the map. Zimbabwe is not one of those states. The people in power now, who have been in power for twenty-eight years, have long had a great measure of control over their territory. Zimbabwe is the opposite of the conventional “failed state”: its rulers have very significant capacity for violence and political control across most of their national territory, even with the economy in tatters. It demonstrates perfectly that the mere achievement of sovereign power and strong governmental authority guarantees nothing, improves nothing. When some contemporary Zimbabweans mutter that the last twenty years or so of Rhodesian power were preferable to the last decade of independence, it’s hard to disagree. That this statement alone is more likely to horrify concerned Western liberals than any number of ghastly utterances by Zimbabwean authorities in the last decade says a lot about the limited perspectives of those liberals. It’s not that we should have to choose between Smith’s Rhodesia and Mugabe’s Zimbabwe: the former was forever stunted, the latter an unending disaster. The problem is with those who believed and sometimes continue to believe that the mere fact of succession by Mugabe over Smith was progress in its own right.

South Africa’s leaders, and to a lesser extent other southern African governments, do have meaningful leverage. As I’ve written in a number of places, with some important exceptions they’re not likely to exercise it, because they define their achievement of sovereignty in negative terms against the West: they are only sovereign as long as they don’t appear to be doing the West’s bidding. Moreover, some, very much including Mbeki and probably Jacob Zuma, don’t want to condemn some of what the Zimbabwean authorities have done because they want to notionally reserve the right to do the same things at some future date. The Zimbabwean government violently cleared out urban populations that it saw as a political danger and a visible sign of disorder; other postcolonial states have done and may anticipate doing similar things. The Zimbabwean government has used and is using violence to manage or curtail ostensibly democratic processes, to seize property, to crush the press. Thabo Mbeki made it clear in his time in power that he sees independent or critical forces within civil society as a temporary encumbrance.

The Zimbabwean state is not alone in the world in its undisguised loathing for its own population, as we’ve seen in the last year. One of the interesting problems for the 21st Century is, “How can such a state survive?” The tragic answer so far seems to be, “Rather easily”. The only states which seem in danger of serious, rapid political transformation in the present (as in the past) are those in which the rising expectations of social classes with some independence from governmental power and some measure of independent access to global circulations of money and information push back hard against authoritarian overreach. Zimbabwe and Myanmar are not in that kind of circumstance, and for the near term they’re not likely to be.

Posted in Africa | 34 Comments

When Wertham Comes A-Calling

I’m working through David Hajdu’s excellent The Ten-Cent Plague: The Great Comic-Book Scare and How It Changed America. While it’s a story that I already knew well, Hajdu has collected a lot of interesting reminiscences from comic-book creators of the 1940s, 50s and 60s and from some of those involved in the public attack on comic-books in those years.

Hajdu does a good job of capturing how the public attack on comic-books was something that the artists, writers and publishers didn’t really see coming or didn’t take seriously until Wertham’s Seduction of the Innocent pushed a steady mid-level chorus of attacks past the tipping point into a middle-American consensus understanding, which the Hendrickson hearings then drove home. Hajdu also does a good job of distinguishing the leading edge of the attack on comic-books from the leading edge of McCarthyism, reminding us that McCarthyism was fueled by anti-elitism, while the comic-book critics were largely high-culture snobs or experts who were first and foremost offended by what they saw as the vulgarity and luridness of the comics.

When I finished the chapter in which Hajdu recounts Bill Gaines’ testimony in front of the Hendrickson committee, I was struck by a depressing sense of déjà vu, the feeling of helplessness I get when I consider the recurrence of panics about mass media and civic institutions. In Hajdu’s account (as well as some other histories of Wertham and the comic-book panic), Gaines’ testimony is a kind of tragic denouement. He starts with a strong prepared statement, and then follows with an impromptu response to one of Wertham’s many stupid, deceptive and ham-fisted misreadings of comic-book text which incidentally undercuts the premise of Gaines’ initial statement. Gaines is then drawn deeper and deeper into a swamp of contradictory defenses of comic-books, ending with a lame, half-hearted response to an explicit image of gory horror brandished to the mainstream audience, a response which pretty well surrenders the field.

But Hajdu makes clear that in many other ways, the game was already up by the time Gaines appeared before the committee. The political and social fix was in, the common sense was already manufactured. There were really only three politically plausible counter-narratives ever available to the comic-book publishers, all of which were before their time in the early 1950s. The only one even in circulation at all was the one that Gaines and others (including comic-book readers) fitfully offered: a defense of free speech against any censorship, even the censorship of civil society. The other two would be a defense of the popular or middlebrow against high culture snobbery, and an endorsement of the importance of fantasy, horror, and speculative narrative for the development of children’s imaginative and creative abilities. It’s hard to imagine how any of these could have been mobilized successfully in 1947 or 1955: it took decades of cultural conflict to make these credible propositions to any degree, and all three are still very much wobbly foundations to stand upon. I know that any time I’m talking to the parents in my own child’s peer group, the odds are high that only a small fraction of them would agree that fantastic or speculative culture is good for children. Many of them would be perfectly happy to sign on to a renewed campaign for censorship or civic authority over some aspect of the cultural consumption of children.

So when you’re looking out at that hearing through Gaines’ eyes, it’s not really clear what he or any other comic-book publisher or creator should have done in the early 1950s. Plenty of respectable experts pointed out that Wertham was an intellectual fraud, or that his conclusions were dubious. Many of the politicians who agreed to give the complaint against comic-books some public airing were appropriately diligent and moderate and pushed back on the wilder claims made by Wertham, Sterling North and other critics. The judiciary was largely moving in the opposite direction from the pro-censorship activists in case law on indecent or obscene content. Lots of the readers of the comic-books developed able, literate defenses of their own tastes and preferences.

Moreover, though Wertham was a fanatic and demagogue, his particular exegesis of many comic-books was more or less on the money. The problem was with his arguments about media effects and with his latter-day Comstockian mania for control of cultural production, not with what he said was actually in the comic-books. William Moulton Marston’s version of Wonder Woman was loaded with bondage and fetishism, very much intentionally so. There was a kind of homosexual vibe between the Golden Age Batman and Robin. Superman did raise some weird, interesting questions about power fantasies. The EC horror comics were astonishingly vivid and grotesque, and many of their stories were critiques of the nuclear family, adult authority and civic institutions. Mad Magazine was a broad assault on conventional wisdom and middle-class respectability. To feel outrage today about Wertham’s crusade is not to say that none of this was true; it is to say that all of this was in some fashion good and wonderful and interesting. We don’t want to retreat into the proposition that it was harmless or had no effect whatsoever on readers then (or later); we need to say that if some fifteen-year-old in 1952 had pervy, unclean thoughts after seeing Wonder Woman tied up for the umpteenth time, good for him.

I think about the choices facing Gaines and his colleagues, and I think about what any of us might or should do when the fix is in and our own cultural or civic practice is on the firing line. It’s a given that should that happen to us, at least some of the people attacking us will be malicious, marginal, on-the-make characters who couldn’t hack it in their own professional worlds, allied with unscrupulous politicians looking for scapegoats and diversions from the real problems of the day. You can’t have a reasoned, fair-minded debate with that convergence of interests. You don’t dare ignore them. You can’t just mock or sneer at them: that plays into their hands just as much as sitting down politely with them. You can’t call upon an opposing argument or moral consensus that does not yet exist. It’s hard to avoid the despairing sensation that should it happen to you, you are just screwed, that the best you can hope for is to go down periscope, head into the cultural fallout shelter, and hope that you’ll be able to poke your head back up above ground at some point. But if we’re in a better situation now than in the 1950s, it’s only because a lot of people fought hard and long to write, teach, speak and think as people in a free society should be able to.

Posted in Academia, Books, Popular Culture | 18 Comments

Mirror Mirror

I think if you did a search-and-replace on this David Brooks column, substituting the columnist’s own name every time he mentions Obama, it would be a pretty apt description of Brooks’ calculatedly dishonest approach to commentary.

Posted in Politics | 6 Comments

M is for Mandarin

Via 11D, here’s an article by William Deresiewicz that I like considerably more than his recent standard-issue world-we-have-lost complaint about contemporary English Departments.

Deresiewicz is concerned about the particular character of meritocratic elitism in contemporary higher education, and about the relationship between how higher education sees tomorrow’s elites and how it sees tomorrow’s middle managers.

As in his earlier piece, I think he overstresses the extent to which some of the problems he’s talking about are novel or contemporary. Take for example his opening paragraph:

It didn’t dawn on me that there might be a few holes in my education until I was about 35. I’d just bought a house, the pipes needed fixing, and the plumber was standing in my kitchen. There he was, a short, beefy guy with a goatee and a Red Sox cap and a thick Boston accent, and I suddenly learned that I didn’t have the slightest idea what to say to someone like him. So alien was his experience to me, so unguessable his values, so mysterious his very language, that I couldn’t succeed in engaging him in a few minutes of small talk before he got down to work. Fourteen years of higher education and a handful of Ivy League degrees, and there I was, stiff and stupid, struck dumb by my own dumbness. “Ivy retardation,” a friend of mine calls this. I could carry on conversations with people from other countries, in other languages, but I couldn’t talk to the man who was standing in my own house.

I don’t think the best and the brightest of the Ivy League in 1920 were just hanging out with the working man over brats and brews. But little of their elite status was attributed to the transformative impact of their education, nor were there any pretenses that a university education was or should be available on an egalitarian basis. The Ivy League was part of a system of class and social distinction but it was not generating that distinction. It was a finishing process that gave an elite its manners and distinctions. Then (and now) there were both students and scholars who pursued learning and erudition with extraordinary seriousness and commitment, but they were the exception rather than the rule.

It was only after 1945 that higher education was considered both something that a democratic society should make available to most of its citizens and something which had profoundly transformative and aspirational implications. Earlier pressures towards practical or applied knowledge, whether technical or otherwise, also reached a full flowering in American and European higher education after 1945. The role of the university in producing a higher standard of living in the United States after 1945 is pretty well-documented: this is not just a story we imagine. Small wonder that the university is still expected to secure the economic prospects of its graduates, whatever else it might do for their minds, character or cultural outlook, because that’s what the university has done.

I think Deresiewicz is right about some of the cultural outlook and competencies of elite college and university students, and right to suggest that at least some of that has to do with the competitive pressures that get them into those institutions and the curricular and extra-curricular experiences they have when they get there.

Leave aside some of his transposition of his own idealized preferences onto a universalized scale for a moment. (He worries, for example, that his current students can’t imagine solitude as Emerson did. I don’t see that, and frankly, even if I did, so what? There’s nothing universal or necessary about Emersonian brooding off in the wilderness somewhere, nor is that a prerequisite for living an introspective life.)

Where I think he’s right on the mark is that the template for the highly competitive, aspirational student produces a contradictory sort of performance: an egalitarian concern for social transformation coupled, in many cases, with an exquisitely elitist affect, so that many bright students become less socially and culturally literate rather than more so in their four years in college. Deresiewicz worries that the commitment to social transformation he sees in past generations of American intellectuals has disappeared, and that few students see themselves as being on an intellectual journey or pilgrimage. They think for themselves, he remarks, “only because they know we want them to”. (So too do many speak of social transformation: because that is what a potential holder of power in the future is supposed to do.)

This is one reason I really like Swarthmore’s War News Radio. People involved in the program have heard my schtick on this project many times. What I usually say is this: I don’t care so much that the program helps students to understand the war in Iraq or Afghanistan, or to know more about the Middle East. I’m somewhat indifferent about whether it makes them better journalists or whether the shows are of a consistently high quality. What I like about the program is that the students who seriously participate in it gain confidence and experience in speaking conversationally to a wide variety of strangers, and they get a much better understanding of the usefulness and limitations of academic knowledge. This is a kind of education that Swarthmore and most institutions like it provide in limited (or nonexistent) ways, that some of our students may take a very long time to acquire (or may never acquire at all, particularly if they become academics themselves.) I stress that in some cases, participants may find out that academic knowledge is immensely useful or enlightening. The point is not to denigrate all that happens within the ivory tower and exalt all things beyond its boundaries as salt-of-the-earth goodness.

The process works for War News Radio because the students are operating beyond the protective screen we otherwise provide to them, and get a kind of feedback about what comes off well and what comes off badly that we might not be able or willing to provide in other circumstances. Over the years, I’ve sometimes been struck that students involved in political or administrative business inside the college take stances or positions that are just weirdly disconnected from common sense, or have a totally tin ear for the inadvertent insults they’ve dished out, and the problem is that nobody is going to tell them so.

Deresiewicz flirts with a declensionist answer to his problem, with a hint that once upon a time, there were thoughtful intellectuals rather than 21st Century mandarins. As usual, I don’t buy it. I do think there’s a problem here, though. The answer might be trying to get students to see education more as a journey, though the more that gets put into the official “mission statement” of higher education, the more that will just be one more thing that tomorrow’s applicants learn how to game and perform for our approval. I’d rather concentrate less on the difficulty of transforming the subtle attitudes and predispositions of our students and more on concrete pedagogies and programs. One thing I get from War News Radio is that students can get a much richer sense of how to speak and self-present (and how to listen and understand) if we ask them to practice more modes of communication, to work with more than one kind of literacy or in more than one register. The problem with that insight, unfortunately, is our own limitations. It’s hard to ask your students to translate academic or intellectual insight into other forms and genres of public communication if you neither appreciate nor understand those genres yourself.

Posted in Academia, Swarthmore | 17 Comments

It’s a Mystery

Kind of on a nostalgic impulse, I decided to pick up the 4th Edition of Dungeons & Dragons, which came out on June 6th. I haven’t played so-called tabletop games, including D&D, for a long time, but this release caught my eye in part for its promised integration with an ambitious suite of online tools and the debate that plan had inspired among the game’s devotees.

Some of the design changes from previous editions of D&D were familiar to me through their implementation in digital games like Neverwinter Nights and Baldur’s Gate, so at least some of the distance between the old-style version of the game that I knew and the 4E ruleset wasn’t totally bewildering to me. One thing that was clear to me right away was that the 4E ruleset had been written with online functionality in mind. It’s much more precisely spatial and tactically focused. A lot of the world-building aspects of the previous rules have been set aside, at least for now.

So late Sunday afternoon, I thought, “Well, let’s go take a look at these digital tools that they’ve been selling as a part of this product”. The ads for D&D Insider, as it is called, are even in the back of the rulebooks. I knew that the producer of D&D, Wizards of the Coast, had planned to make these tools a subscription-based service with a price comparable to games like World of Warcraft and Everquest 2. This seemed ambitious to me, considering the scale, polish and service levels that those games provide. The tools had better be really revolutionary, I thought, or I doubt many people will pay that much.

It turns out that it’s hard to say whether they’re revolutionary or not, or whether the cost is justified or not. Because the tools don’t exist.

Not that you could tell that by looking at the company’s website (at least as of the morning of June 10th: I suspect this will change fairly soon), which seems to be describing a finished, available product. All of the marketing for the 4th Edition has been touting the digital toolkit for months. I wandered in confusion around the website for a while, wondering where the secret rabbit hole was that would take me to where the D&D Insider tools actually lived. Eventually I went looking for a FAQ on the official forums, and there I found out the truth, or at least a piece of it. It’s been “delayed”. If you read between the lines, I think the delay will be long. Maybe forever, I suspect. There was clearly a big disconnect between the project team and the marketing team. Also between the people who wrote the new rules for the game itself and the project team for the digital tools. And gosh, everyone was so busy that they sort of just, well, you know, forgot to tell anyone that the main selling point of the new edition was, you know, months and months from being available.

————

I’m torn between cynicism and fascination when I come across this kind of commercial car wreck. Fascination wins out, even when I’m irritated by the consequences of failure (as I am in this case). I think this kind of thing happens more in cultural industries, and there are complicated reasons why that’s so. (At least some of those reasons are laid out in an interesting book that Bill Benzon recommended to me, Arthur DeVany’s Hollywood Economics.)

The cynical part of me has heard a lot of this before. There’s a standard narrative with software and digital projects that delays are inevitable, that management of technological projects is hard, that it’s better to get something good when it’s ready than something bad when it’s not. No one disagrees with the latter, but if you’ve been around the block a few times on these things, you know that this is rarely the choice with software or digital projects. “Good when it’s ready, bad right now” is usually code for “We got nothing at all right now, and whatever we’re going to have later is going to be a kludgy disaster that only barely does what the client specified. We’d really like to just start over, but we figure you’d pull the contract and cancel the whole thing if we did that.” To some extent, when I hear “delays are inevitable” on a long-anticipated software project, I hear, “We’re under-resourced, got the contract because we low-balled the estimate, and took advantage of the buyer because they had no idea just how ambitious the product was that they had in mind”, or I hear, “Somebody screwed up really, really badly early in the development process and we’ve had to rip out everything they did and do it all over again”.

The fascinating part, however, is what this kind of episode reveals about how modern institutions work, whether they’re companies, universities, or bureaucracies, about the agency of institutional humanity. In a lot of cases, institutional actors don’t understand any more than outsiders do why a complex project or product develops the way that it does. In fact, they may understand less. When they offer up descriptions of their own innocence, when they seem confused or surprised at the reaction of outsiders to their misfires, I think institutional actors often are being entirely genuine and honest. This is especially true for cultural products, because they are made surrounded by a kind of hermeneutical fog, because no one really knows how or why they succeed or fail in any kind of final way. You can’t assemble a movie or a comic book or a game the same way you assemble an air conditioner or a jet airplane. Even a single author of a single literary work can be surprised by the gap between intention and product, process and result.

Posted in Games and Gaming, Popular Culture | 8 Comments

Neither Victims Nor Torturers

Alberto Mora was one of the speakers at Swarthmore’s commencement this spring. He gave a short, terse and, I thought, powerful speech about the decisions he had made as General Counsel of the U.S. Navy and about the consequences of the sanctioned use of cruelty and torture by the United States government.

Mora touched lightly but poignantly on one point that has been on my own mind a lot in the last year. As we look back on the last eight years with the hope of some different dispensation and leadership in the years to come, there is one development that haunts me more than any other.

If you had asked me in 1998 what I thought the consequences would be if the United States government were revealed in public to have officially sanctioned torture in its own secret and public detention facilities, as well as through the rendition of detainees to regimes employing torture, and that this sanction had started in the Oval Office and been methodically reinforced throughout the executive branch, I would have said that this revelation would be an enormous scandal with catastrophic political consequences for any sitting President.

In the past, U.S. officials have assumed the same thing. It’s clear that from time to time from the 1950s onward, both intelligence and military officials in the U.S. government have employed torture, illegal detention, and similar tactics against agents of other governments, armed insurgents or non-state actors. But there has always been considerable care taken to keep such actions secret, to not challenge their illegality, and to assume that if these actions were revealed, those who took them or sanctioned them would and probably should face legal and political consequences. Top levels of the executive branch were insulated from these activities, at times by cynical design, but mostly because there was a broad consensus that such actions should not be official, sanctioned or legal, should not be policy.

In 1998, I would have told you that there would be an enormous political uproar in the case of the sanctioning of torture because I assumed that the American people and most of their political leaders would not stand for such a policy. I would have assumed that any political support for such a policy would be relatively small, perhaps no more than a quarter of the voting population at best.

I’m very unhappy about a lot of the developments of the last eight years. Nothing makes me unhappier than the discovery of how very wrong I was in my assumption about the political consequences of the systematic authorization of torture. I should have sensed which way the wind was blowing when I came across clear “trial balloons” like Mark Bowden’s 2003 celebration of torture as a crisply professional activity. (Bowden’s later pathetic response to Abu Ghraib was that those guys were bumbling amateurs; as long as you professionalize cruel interrogations, you won’t get pyramids of naked detainees and the like.) Bowden is a good reporter and a pleasant guy: when he agreed to flack shamelessly for torture, it was a sign that the moral consensus I assumed was present was in fact absent, that many seemingly decent people were going to endorse torturing detainees in the war on terror, with indifference to the standards being used to determine whether or not those detainees even had important information or were guilty of anything in particular.

Every successive day since 2003 has revealed just how thorough and explicit the commitment of the American political and military leadership has been to torture and illegal detention, even if there are those like Mora who strenuously opposed this shift in policy. The Administration is responsible for a great deal, but when it comes to the systematic endorsement of cruelty and torture by our government, the guilt is a lot more distributed because the popular support for such a policy has been a lot more widespread. If you want a sign of how far the goalposts have been moved, look at how carefully John McCain has had to maneuver on the subject of torture so as to not offend his political base, despite his own alleged opposition to Administration policy.

Whatever comes next, however much of the damage can be repaired (by whomever the next national leadership might be), this change will make me sad for the rest of my life. There’s no easy retreat from knowing that a lot of the people around you are willing to sanction torture and are indifferent about the guilt or innocence of those subjected to it.

Posted in Politics | 6 Comments

Vindication

Must remember to take my medicine tonight.

Posted in Domestic Life, Food | 1 Comment

Pitstop

Let’s look past Hillary Clinton to think a bit about how this primary season unfolded. I think there’s something important about the votes that went her way.

First, yes, there were people who preferred Clinton for sound enough reasons: either they judged electability in her favor, or found some very specific aspect of her policy proposals preferable.

Her support from some professional and white-collar women doesn’t require much explanation. Clinton became a powerful surrogate for their own frustrations and a symbol of their achievements. At some point, all of us will support a candidate for this reason, because they’re someone with whom we personally identify. There’s also not much point in arguing with someone who strongly supports a candidate for this reason, any more than you can meaningfully disagree with someone’s sense of who they are.

What I think is important is Clinton’s support from white working-class voters. A lot has been made of the racial basis for this vote. That was certainly an aspect of it, and I’m still fairly disgusted with the way the Clinton campaign played knowingly to that logic at times. I think it’s a more complicated matter than race, however.

I don’t think it was about policy, that somehow Clinton was offering a set of ideas or proposals that white working-class Democrats found especially appealing as a response to their problems. Nor do I think it was anything about Clinton’s personal presentation or image that made her especially appealing. Without Obama as a foil, I think Clinton would carry about as much appeal for such voters as John Kerry, Al Gore or Walter Mondale did, which is to say some but not a heck of a lot. It took Obama to make her into something other than a liberal policy wonk.

What I think we saw was the real red state, blue state divide burbling up once again. One of the tragic stupidities of the way we’ve talked about that divide in the last decade is to see it as a divide between the parties or between the religious and non-religious. It’s not Republicans and Democrats, or liberals and conservatives. Nor is there as much or as stark a divide as the image implies. What I do think is present is the peculiar architecture of social distinction in early 21st Century American society, a matrix that links geography, size of community, local political economy, and habitus.

It’s not so much that Hillary did anything positive or affirmative to make her the “beer track” candidate, but that the clear enthusiasm of the “wine track” for Obama drew a social agonistics towards her like lightning towards a metal pole. Obama’s bitterness comment, for all that it was both badly timed and baldly articulated, spoke to what was going on underneath the surface.

I’ve written about this before here, and I’ve learned that there are people who get very angry about the basic proposition that ex-industrial and rural American communities are peopled at least in part by folks who couldn’t or wouldn’t leave when life and opportunity went sour, as well as by people who believe that there is something authentically preferable and beautiful about the life they’ve found in small towns and old industrial cities. Like it or not, depopulation and migration are facts. You can argue that the choice has been made for communities against their will, as a kind of slow economic or social violence, or that this is another form of creative destruction, inevitable and in the long run positive. But it has happened.

The cultural gap that magnifies the social fractures is also real, but it strikes me as being far more bridgeable. Much of the depth and width of that chasm comes from the behavior of economic and social elites, from a continuing lack of proportionality and perspective about our own hobbyhorses and preferences. We can have a lot of purple states, and a general booze track where wine and beer drinkers raise their glasses to one another in comity. The cost of that on both sides is putting aside much of the culture war as trivial and needlessly divisive, and recognizing that there may be a great many issues that fall under that heading, some of them substantive–for example, guns and gun control. Moreover, we shouldn’t think that putting aside the culture war solves anything about the real social cleavages that have given it so much energy.

That to me is part of what Obama means by the audacity of hope: that there should be another way to win office, represent Americans and lead the country than playing on the culture war fiddle while everything burns down. To me, the most distasteful thing about Clinton’s campaign is that I don’t think she started out with any intention to play that tune (indeed, as I see it, it once was anything but music to her campaign) but that when the opportunity came, she performed an enthusiastic jig and reel for the sake of one more day on the campaign trail.

Posted in Politics | 2 Comments

The Garden of Earthly Delights

June’s arrived, which usually means that I’ve accomplished about as much as I can hope to with my garden for the year, except maintenance and any non-planting work I want to do.

When we moved in to our current home, most of the yard was in pretty shabby condition. A lot of it still is, because I can only handle one major area of improvement each season, both in terms of the energy and labor-time I can spare and in terms of the cost of materials and plants.

My first target was an area on one side of the front lawn where there was a single lonely pine tree and some scruffy grass that eventually gave way to some pachysandra, English ivy and a chaotic jumble of forsythia underneath a big maple and a magnolia. All of our trees were in bad shape, with some dangerous limbs, so we had a big trimming right when we moved in. I had both of the big pines cut down: I’ve never liked them much as isolated trees mixed in with maple, oak and ash in Eastern woodlands.

Over three years, I’ve done a lot of planting where the smaller pine in the front had been. First off, right where the pine had been, I planted three dwarf peach trees and three butterfly bushes in a circle around a birdhouse on a pole, with a few container plants scattered around that area. Everything has grown in fairly well.

The next spring I built a raised bed on the north side of the peach trees and planted a lot of lavender with a bit of tickseed and sedum mixed in. On the south side of the circle of peaches and butterfly bushes, I planted a mix of ornamental grasses and dogwoods (yellow-twig and red-twig), with a cheap bench overlooking the area. All of this has grown in pretty well, with the exception of some Japanese bloodgrass.

Late last summer, I began to build a rock border around the raised bed of lavender, and that’s where I planted this spring: thyme, lemon balm, heliotrope, rosemary, several types of mint, blanket flowers, beardtongue. Mostly that’s doing OK, though I’m having some problems with drainage and weeds. If I can afford the stone, I’ll finish building the small border wall later this summer, which I mean to take all the way down the property line into the shaded area where the maples and magnolia are.

I also plant a vegetable garden each year. For once I managed to get some peas in: March is usually so busy, and often there isn’t a good day to plant on the weekends when I have the time and the energy.

The main point is to get tomatoes and beans, though. For once this year I also got some sunflowers to germinate, though some kind of insect destroyed about half of them after they’d popped up above ground.

I had to replant a lot of the lawn on the west side of the house, as some kind of grass-like weed pretty much destroyed that whole area late last summer. I don’t really like dealing with lawns. They’re a hassle. On the other hand, I like the open green space they provide.

I’ve got a dead dogwood to take down myself later this summer, and a lot of fallen limbs to break down into firewood at some point. Another long-term goal I have is to get a good chipper/shredder so I can make my own mulch each year (I have a huge pile of deadwood in the most neglected corner of the backyard).

Our sour cherries and high-bush blueberries are coming along nicely, though we usually only get about one picking of blueberries before the birds strip the plants bare. (Nets don’t help: they just get under the nets, eat their fill and then freak out and panic because they’ve forgotten how to get out.)

The ambitious goal, if I can finish the wall, is to prepare the area in heavy shade for ferns, hostas and some other shade plants, and to build a treehouse in the same area, on one of our stronger maples. Getting rid of the English ivy and forsythia in this area promises to be an ordeal, though.

Obviously, I enjoy gardening. I don’t have that romantic sense that it brings me closer to nature, or any of that kind of thing. In fact, it mostly makes me grateful to live in a late-industrial civilization, because it teaches me more potently than any scholarly study might about the hard limits faced by any preindustrial agrarian society. We have so much tree cover in our yard that there are only a few patches where I can grow vegetables. I think that next year I will have to leave the best area for our vegetable garden largely fallow, as I’ve seen declining yields in the current patch, even with some mineral amendments and a lot of the compost from my own piles tilled in before planting. If I had to live off my own land, I’d be lucky to achieve subsistence even if I cut down all my trees and converted all of my yard to food production.

I do like having herbs and vegetables close at hand all summer, though.

It’s sobering to see how capricious any vegetable is, and how difficult it is to get many to germinate. Moreover, I’ve largely settled for growing vegetables that taste distinctly better from a home garden (tomatoes, beans) but that also don’t seem too interesting to squirrels, woodchucks and deer. I learned the hard way that whatever they want, they get, no matter what you do to stop them.

Pretty much everything I do in the garden is done “organically”, save for whatever the nurseries I buy from might do to grow the plants, but again, that’s not because of some profound philosophical commitment on my part. I do it this way because it’s less money and makes good sense, and because I have a phobia about hiring people to mess around with my own stuff. Why not stockpile deadwood and have compost? It seems much weirder to me to haul all the stuff to the front yard and call someone to cart it away. I don’t want a lawn service because I’m cheap, because I don’t want strangers all over my lawn once a week, because I don’t like the look of heavily treated lawns. So yes, my lawn has more weeds, is often overgrown, and has patchy areas. I’m ok with that.

I’m going to run into some long-term problems with my most ambitious plans: I want to eventually build another rock garden with a water element in the backyard, but this time I want to use rocks that I couldn’t handle with my own muscles to build part of it. It should tell you something about my monomania that I’d almost rather rent a little bearcat to get the bigger rocks in place myself and dig the area for the liner.

Posted in Domestic Life, Miscellany | 14 Comments