Regarding violence, media and childhood.
My daughter’s comment about the first day of art camp, in which it was revealed that the theme for this summer’s creative work would be “peace”:
“Peace is boring”.
An interesting response to my comments on Schechter’s book at Withywindle’s blog makes me think a bit more about the representation of violence in mass culture.
Withywindle suggests that there is a difference between violence for violence’s sake and violence which serves some moral or dramatic purpose in a cultural work. I’m hard-pressed to think of a cultural work containing violent representations that couldn’t be plausibly argued to have such a purpose.
Violent entertainments almost always present themselves as making both moral and narrative use of violence. Films like Saw, Hostel, and so on can always claim they are delivering some specific message about a particular group of people, a particular way of acting in the world, or more generically, revealing something about “the human condition”. Action films like Transformers or Live Free or Die Hard claim violence as a necessary part of telling a story about conflict.
A critic can cynically ignore those arguments and insist that a viewer who goes to see Hostel is just exulting in the pure violence of the film. But that cuts both ways. When does violence become respectably imbued with message or purpose? When can you safely be said to be watching it for a reason, and not the thrill or satisfaction or emotional power of seeing violence represented? Name me something that a respectably genteel critic would feel to be the legitimate use of violence in cultural representation, and I’ll bet I could make a plausible claim that this is just a mask for illicit pleasure taken in seeing violence and gore. Much as I dislike the sleazy dime-store postmodernism of Paul Verhoeven’s use of violence in RoboCop and Starship Troopers, he’s pretty much got this problem pegged. As long as you give a respectable middlebrow viewer enough irony or moralizing disgust surrounding his uberviolence, he’ll furtively enjoy the Grand Guignol as well as anybody.
The contrast with pornography is instructive. Contemporary cinematic pornography may have started with a protective claim to be doing something other than what it appeared to be doing (e.g., the “educational” angle of I Am Curious (Yellow), or feigned moral disgust at the exposure of illicit behavior on film) but fairly quickly it assumed an alignment between the purpose of its viewers (arousal and sexual release) and the nature of its representation. Later on, other purposes may have started to appear (maintaining a sense of community among people with particular fetishes or interests, an argument about the ‘real’ nature of modern culture or individuals) but mostly it still doesn’t insist that the representation of sex is functional or instrumental to some “higher” purpose. In fact, in most contemporary culture, any time sex is represented as pleasurable and ordinary, viewers tend to take the purpose of such representation to be erotic. If sex is represented and the intent is not centrally erotic or pornographic, the sex almost has to be unpleasant, repellent, disturbing, empty.
We have very few real or experiential ideas about the pleasures of committing violence. Many of us have no experience of committing violence, much less taking pleasure in it. (Many people may have experience of being victims or targets of violence, on the other hand.) Joanna Bourke’s (heavily criticized) An Intimate History of Killing suggests that at least some soldiers in modern wars come to aestheticize and find pleasurable the experience of killing other men, without being aberrational psychopaths or monsters. There’s certainly a literature on boxing that argues not just for the pleasure of watching people fight but even for the pleasure of fighting. There’s a minor canon of cultural works that suggest that fighting or violence of a relatively non-consequential kind is a pleasurable component of manhood. Still, we can readily imagine that the pleasure of sex could be liberating and egalitarian for everyone involved in it, without victims. Violence is always asymmetrical: someone is always hurt. It seems as if we cannot take overt or unguilty pleasure in its representation unless it is purposive, so even the most low-culture work is going to claim somehow that its uses of violence are serving some purpose other than violence itself, that the violence is justified by its righteous or necessary ends.
I think that there’s a pleasure in seeing one comic-book character kick another in the face and I’d rather not churn out a tediously respectable domestication of that pleasure. Violence which is both spectacular and unreal produces pleasure because of its unreality, because it aestheticizes violence. When I was in fourth grade, I didn’t need any tediously righteous adult to tell me that violence was a bad thing: I was getting my face shoved into the dirt on a daily basis. But I also didn’t need a clueless adult telling me that Yosemite Sam getting blown up with a bomb was related to the real-world violence that I was experiencing. It was the lack of relation, its aesthetic purity, that made it funny and pleasurable to watch. Personally, I tend to get more and more squicked by violence in mass entertainment as its accompanying aesthetic makes stronger and stronger claims to realism, and I make more demands at that point for some accompanying purpose. But I also trust in the unarticulated critical intelligence of most viewers. What seems to me to be more distressingly mimetic in its intent or aesthetic may appear obviously hyperreal or exaggerated to some other viewer.
So apparently you don’t have to go to rural Vermont to see some interesting animals. In late May, we heard the absolutely blood-curdling vocalization of an animal of some kind prowling around in our front yard late at night. I had never heard anything quite like it. It wasn’t an owl: the sound was quite low to the ground. It was very loud. It was almost like an injured human in some respects, but in other ways very unlike any noise a person could make. I got a quick look at something darting across the neighbor’s yard that night: all I could see clearly was that it was a small mammal, a bit bigger than a cat, fast and low-slung.
So last night our mystery animal was back, right by our front door. I don’t think I’m easily rattled by such things, but this sound really does make the hair on the back of your neck rise. It goes straight to the primal part of your brain, like you’re a Cro-Magnon who hears some dangerous animal just beyond the periphery of the campfire. My wife opened the door and whatever it was actually growled at her and made a little intimidating semi-rush towards the door. I got a much better look at it this time from a window.
I’m pretty sure it was a fisher. It was far too big for a weasel. The fur was dark, it had a long tail, and it had the basic build of a mustelid. Now this seems a little unlikely, I know, as Pennsylvania reintroduced the fisher in 1994, and mostly at sites in north-central Pennsylvania. Moreover, a lot of the older literature on fishers suggests that their habitat is limited to coniferous old-growth forests. But poking around a bit, I see that elsewhere in the Northeast, fishers have been aggressively moving into suburban areas where there are mixed-wood forests nearby–and in some cases, making a good meal out of the local cats. I also see that there’s a population that was reintroduced in West Virginia in 1969 that is thought to have spread into southern Pennsylvania.
I can’t think of any other possibility. Definitely not a skunk or a raccoon: I’ve seen plenty of both in my life, and heard all the sounds that raccoons can make at night. From my sighting, I’d say it was definitely not a fox, though we do have a red fox in this neighborhood that we’ve seen from time to time. The animal runs, moves and is built in a way very different from a fox. Not a coyote: I’ve also seen and heard many, many coyotes. Not out of the question that it could be a bobcat, but the body was too elongated and low-slung for that, I think.
I notice that the wildlife specialists quoted in the NY Times article about possible fisher sightings in New Jersey are skeptical, and in a way that kind of annoys me. Partly because other species have turned out to have surprising adaptability to suburban conditions while supposed experts claimed that they couldn’t have until the evidence became too overwhelming. Partly because both of the people cited in the NY Times article say that they’ve never heard of fishers or martens vocalizing, but I’ve found a goodly number of sources just this morning that describe a wide range of vocalizations, including something that sounds rather like what we heard. I know people have a tendency to exaggerate, and so wildlife control specialists tend towards skepticism. Trust me on this one: it’s not anything I’ve encountered before. It might be that a fox could make a noise like this, but I got a very good look at this animal from the window, and it was not a fox.
When I started studying the history of debates over children’s television, I was struck by how the principal critics of kidvid in the 1970s and 1980s set the terms of their declension narrative. For them, Saturday morning television was destroying a blissfully innocent experience of childhood that was both recent and eternal, the way it had always been and the way that the anti-kidvid activists had chosen to remember their own childhoods of the 1940s and 1950s.
I can understand how a person who was 10 in 1953 might see the television programs of the late 1960s and early 1970s as different and in some respect more “violent”. But that perception requires a lot of forgetting as well, not just about some of the media available to children in 1953 but about the wider world that many American children were at least somewhat exposed to. Moreover, if you widen the historical frame even slightly, what becomes clear is that the 1950s were unusual rather than typical, that representations of extraordinary violence have been far more the norm in both popular and high culture in the West.
Harold Schechter’s book Savage Pastimes is a very compact, dense recounting of that typicality. It’s a very useful addition to a group of texts that I privately refer to as “media effects heretics” that poke holes in what remains a powerful and basically smug middle-class consensus about violence, media and mass audiences. Schechter goes right for the jugular in his first chapter: “If–as historical evidence suggests–people have always been entertained by torture, mutilation, horror and gore; and if daily life in the past was far more brutal than it is today, then an interesting question is raised…The current uproar over media sensationalism rests on two premises: that popular culture is significantly more vicious and depraved than it used to be, and that we live in uniquely violent times. Everyone seems to accept these propositions as the obvious, irrefutable truth. But what if everyone is wrong?” (p. 14)
As Schechter recounts it, everyone is indeed wrong. On the history of violence in art and entertainment, I can’t see how anyone could argue against the case he lays out. When we think that films like Saw or Hostel are somehow a unique sign of contemporary degradation, unprecedented representations, we’re forgetting a wide swath of European and American expressive culture in the last five centuries. We forget partly because we haven’t preserved or canonized penny dreadfuls or Grand Guignol in the same way as we do “literature”. But as Schechter observes, a lot of people are also editing their own memories of culture–forgetting how violent The Shadow was on radio, or the violence of Westerns (even those intended for children) in the 1950s. Schechter mentions the lyrics of “Tom Dooley”, an explicit song about murder that I remember listening to on my father’s Kingston Trio records. Sometimes we’re ignoring what’s in plain sight in the Western tradition. Mel Gibson was hardly the first person to visualize the bloody sufferings of Christ. The Iliad is full of gruesome scenes. Schechter spends a good deal of time reminding his readers of how popular executions were as a form of mass entertainment until very recently (and arguably, as he observes, they’ve never stopped being popular, but have simply moved into new media forms and genres).
The danger with a cultural history like Schechter’s is that such an insistent debunking of a declension narrative, an insistence that as it is now, it has ever been, can end up having a dehistoricizing effect. That is, it can end up seeming to argue that nothing ever changes, and that’s the big weakness of Schechter’s presentation. A lot of cultural history gets flattened out here and the result is a simplistic kind of cyclical account.
I do think there can be highly “local” shifts in the way violence is represented in various genres of media. Saw or Hostel do seem to me to be different than I Spit on Your Grave or the original Texas Chainsaw Massacre, for example. But that comparison illustrates part of the problem: you have to figure out what the appropriate “lineage” for a current work is in order to accurately identify movement in the mode or form of represented violence. A lot of media critics take a film like Hostel and compare it instead to Psycho, which is like claiming that someone’s third cousin once removed is actually his father.
Schechter suggests that the recurrence of strong, grotesque or vivid images of violence in mass entertainment underlines the extent to which violence both experienced and imagined is a crucial part of the human condition, that we could no more do without violence in our media than we could do without love or family or sadness or sex or faith.
He also does a good job of demolishing the supposed “scientific” consensus about how violent media cause violent behavior. For me, that was one of the big surprises from researching Saturday Morning Fever, to find just how weak the foundation for those claims is both theoretically and empirically. But given that, I’m more and more curious about something that Schechter doesn’t really explore. If violent entertainment is a recurrent part of the Western cultural tradition, then so too is the consensus among educated elites or within bourgeois society that such entertainment poses a social danger. When I talk with parents who are anxious about violence in video games, I’m well aware that their anxiety isn’t exactly based on detailed readings of the scholarly literature about the effects of violent media. They may repeat or parrot those findings, but only because they support a consensus whose power and ubiquity derives from somewhere else. If the experts who fulminate against violence in the media did not exist, middle-class culture would invent them. In many ways, I think that’s exactly what happened: the experts did not create the consensus but were created (or at least funded) by it.
That to me is the interesting question that follows on Schechter’s cultural history: what is generating the discourse of “respectable voices” about violent entertainment that consistently shadows or trails those texts? I’m uncomfortable with some of the easy instrumental answers that occur to me: that it’s about the modern state’s attempt to assert power through a censorship function, that it’s about bourgeois attempts to surveil and regulate mass culture and mass audiences, and so on. Some of the critique of violence is also about the attempt to come to grips with the mysteries of representation and causality. Because if violent representation doesn’t simplistically cause violent action, neither is it a pure mirror. Expressive culture is an inventory of our possibilities, our dreams, our fears: it changes us. Not from innocents to monsters, but something far messier, more interior, less about action and more about subjectivity, about what and how we feel, about our idealized models of selfhood. (And about what we regard as demonic or shameful ways of being and acting.)
There’s also a question of how we can learn to voice our desires for particular kinds of culture without building sociological dream castles or mobilizing experts. I mean, I don’t like Hostel or Saw myself. Not just as films I don’t want to see, or have a personal dislike for. In some ways, I wish they had never been made. I don’t enjoy the recent local trend towards graphic disembowelings and so on in many of the superhero comics I read–but that’s an intensely local and historicized sensation for me. I don’t object to the violence in a comic like Invincible because that was the mode of the character’s representation from his very beginning; I object when a character whose entire history has been about a kind of idealized innocence is suddenly ripping people’s heads off. But it seems very hard for people to argue about what such representations mean, or to muster a criticism of such images, without rolling out the media-effects apparatus and waving the declensionist flag about how our society is going to hell in a handbasket, and therefore walking right into the mythographies that Schechter rightfully exposes as delusions. Maybe we could learn to ask more specific, modest interpretative and aesthetic questions of particular images of violence? To ask not “What is that image doing to society?” but instead, “What is that image doing in this particular work of culture? What aesthetic, thematic, narrative work is it doing within this text?” Eventually from that question you can get to a question of reception and audience, but then you’re asking about a much more granular and specific kind of impact on a given audience at a given moment of reception rather than effects on the whole of “society”.
Back from a long stay in Vermont.
This is the first time as a family that we’ve rented a house for a long-term vacation. We’ve been thinking about trying to find a place to go in the summers for three weeks or a month, and northern New England has been high on our list of preferences. I don’t really like Mid-Atlantic beaches, in part because of the hassle involved (traffic there and back, plus crowds when you arrive). I’d love to spend three or four weeks every summer in the high mountains of the American West, but I don’t want to get on a plane more often than I have to at this point in my life.
So we thought we’d try the northeastern part of Vermont for our first go, and we picked a farmhouse along a quiet gravel road near the town of Craftsbury. The house was great, the result of about 15 years of steady work by the owner. He has a small herd of beef cattle on the 35 acres around the house, and while we were there he added two young goats, which my delighted daughter was happy to goatherd around. (Also some geese who took a few days to settle in and find the pond in the pasture.) Fantastic southern exposure and a view all the way down to Mount Mansfield, about 50 miles south. There was also a great barn that was set up as a workshop. (The house is for sale: if I had the money, I’d seriously consider it.)
At night, you couldn’t see any lights at all. If we turned off all the lights in the house, it was completely dark everywhere, in all directions. No planes overhead. During the day, there might be one car on the road outside about every 90 minutes or so. At dusk, we heard screech owls calling blood-curdlingly to each other at the tree line. Lots of local lakes with good swimming, and supposedly good fishing in the area, though my own experience with several highly recommended rivers was pretty disappointing.
There isn’t as much of an artisanal food scene in this part of Vermont as there is in southern Vermont and western Massachusetts. This is not to say that people aren’t producing great produce, meat and such for regional consumption, but it’s mostly flowing south and eastward of the area itself. (Reminded me a bit of how you couldn’t get really good coffee in some coffee-producing parts of Africa I’ve been in: it’s all packaged for export, because there’s hardly anyone nearby who will pay a comparable price for it.) The owner of our house was a really interesting, smart guy and we talked quite a bit about the local economics of farming. Upshot: not much, if any, profit in it unless you’re doing it at a large scale. (Though the profit on grass-fed organic cattle seemed a bit better.) If you’re not working for the government or for one of the few local businesses, you basically have to have a bunch of different small entrepreneurial ventures going at once.
It was also fun to take the dog along on the trip, another first for me. He particularly liked Stephen Huneck’s Dog Chapel. After reading the numerous moving eulogies of beloved dogs (and a few cats) put up on the wall by visitors, I thought of Chris Clarke for some reason–his dog Zeke belongs up on that wall, I think.
Our dog’s a little less happy now as he got a bad wound on his eyeball from a cat when we stopped overnight in Western Massachusetts on the way back. (He was just trying to have a friendly sniff of the cat, but the cat didn’t see it that way.) So he has to wear a cone around his head for a while. I’m cautiously optimistic but he may end up losing the eye.
It’s summer, so I’m trying to make a dent in a big pile of books sitting by my desk.
One of the first I’ve tackled is Chip Heath and Dan Heath, Made to Stick: Why Some Ideas Survive and Others Die. I found it an interesting and useful read, though I’m probably not the target audience, as a lot of it is aimed at corporate and institutional professionals responsible for communication or public relations. I see it as part of a mini-canon of books that have an evolutionary take on culture without being caught up in the problematic claims of sociobiology or evolutionary psychology, or without descending to the epistemological weakness of “strong” memetics. (Gary Taylor’s underappreciated book Cultural Selection is a good example.) Call it memetics-light, I guess.
Anyway, the Heaths identify six attributes of “sticky” narratives and ideas. The attributes are: simple, unexpected, concrete, credible, emotional and story-telling. The Heaths do a good job filling out each of these attributes with specific examples and memorable anecdotes. (They’re obviously hyper-aware that if you’re going to claim to have identified attributes that made communication memorable and powerful, you’re going to have to demonstrate those attributes in what you write.)
I have three issues with the overall approach, though. The first, I suppose, is a case of me trying to impose my own intellectual preferences. They treat their six attributes as more or less transhistorical and fundamentally cognitive even though plenty of the examples they discuss have some highly specific, local, particular and often deeply historical element to them. Take “credibility”. What’s credible has a lot to do with what people already think they know, with the local, historically shaped and thus highly contingent character of “common sense”. They use the example of an urban legend about necrotizing fasciitis allegedly being spread by bananas from Costa Rica. They properly point out that the rumor got credibility in part through its clever discursive use of various authoritative-sounding names and organizations (including necrotizing fasciitis, aka flesh-eating bacteria), but they don’t talk about the bananas-from-Costa-Rica part, which taps into all sorts of racial and geographical coding in American society. If I tried to start a rumor that cheddar cheese from Canada was spreading a disease, I doubt I’d get much traction even if I used all the other authoritative tricks and turns of the earlier rumor.
This does strike me as important if you want to follow the Heaths’ magic recipe and cook up some sticky ideas yourself. It’s not good enough to just run down the magic checklist: that only tells you about the attributes a sticky idea has to have. It tells you what the container looks like, but to fill it up with something, you’ve got to have a good ear for history, for popular culture, for the sound of language. What was credible to peasants in medieval France is different from what’s credible in a World Bank meeting today. In a sense, they can’t provide any more guidance to an aspiring message-crafter than I could provide to would-be novelists by writing a basic description of what a good novel is. The Heaths want to maintain that you don’t really have to have any talent to craft reproducible ideas and messages. Put me down as a skeptic.
The second issue is that they give zero attention to the political economy of media, rather like some of the people drawn to “memes”, “frames” and similar concepts. They don’t leave room for the possibility that many messages and ideas flourish because of a seventh attribute: incessant, forced repetition that is bought or commanded. I give a lot less credence to this factor than many “cultural leftists” or “cultural conservatives”, but there’s something to this point. Ideas spread sometimes because the powerful insist that they spread, or because wealthy interests purchase their dissemination. It may be true, as the Heaths conclude, that anyone with the right idea and the right hook can succeed in disseminating their message or their vision, but the Horatio Algerism gets a bit thick sometimes. Power matters.
Third, the strangest assumption they make is that everyone wants to communicate clearly and disseminate their ideas as widely as possible, and that most cases of bad or confusing communication are the consequence of ineptitude. They really don’t give any attention to something as simple as lying, which is a fundamental part of human communicative action, both interpersonally and institutionally. Sometimes human beings, particularly human beings in power or who speak for power, are socially required to communicate, but they have no interest in communicating forthrightly. For the Heaths, the most common reason that people fail to make their ideas sticky is that they know too much and thus overburden their communication in every respect (“the Curse of Knowledge”, as they put it). Sure, I agree, and there’s no institution more afflicted than academia. But I would say the most common reason that people fail to achieve stickiness is either instrumental or subconscious slipperiness. Take a look at these two recent discussions of quality failures in the manufacturing of the Xbox 360. There’s no way that the Microsoft representative in the second of those two links is just failing to practice good “stickiness”. The guy intends to say as little as he possibly can. It’s not particularly effective as communication, but I don’t think it’s intended to be. It’s intended to put up a smokescreen, probably primarily at the advice of Microsoft’s lawyers. Sometimes, you really don’t want your ideas or words to be sticky. I’m sure George Bush the Elder wishes he hadn’t said, “Read my lips: no new taxes”, and equally that Bill Clinton wishes he hadn’t said, “I did not have sexual relations with that woman”. Very memorable, both of those moments, but not in a good way for either of them.
There’s a very interesting entry by Bill Poser at Language Log on the issue of whether there is such a thing as citation plagiarism. (Poser argues no.) Inside Higher Education also links to a very interesting reply by Kerim Friedman at Savage Minds.
I agree with many of Kerim’s observations, but what I think he makes clear is that “plagiarism” is not a good description of the real issue. The real problem is two-fold. First, there is the rise of a mode of citation in many academic disciplines in which citations are used neither to identify the author of a particularly pithy, apt or powerful statement nor to point to material which provides substantive evidence for a claim made by an author. Instead, a lot of scholarly writing in the humanities and some social sciences uses citation as a marker of institutional sociology, as a performance of intellectual identity, as an affect of authority rather than the substance of it. So when these kinds of “marker citations” are simply copied from another text, they exaggerate a problem that is already present in an original usage of this style of citation. A disparate grab-bag of recent theoretical or especially au courant empirical works drifts like a raft on the ocean, cut-and-pasted into a thousand journeyman articles and conference papers. As Kerim observes, important concepts and ideas start to have a meaning that is simply about the trace of reproduction and replication, not about the original explanation of the concept by its initial author.
The second thing is that some citations are a mark of intellectual labor, that a historian went into this or that archive or that an anthropologist spent time in a particular field location. I had a case early in my career where I gave a paper to a prestigious working group that made use of some unique documents that I had read in the British Library and in southern Africa. Later on, another scholar reproduced the citations from that paper and cited the paper itself as well–just not each time that scholar cited the original documents I had looked at. I’m fairly sure that the scholar in question had not looked at those documents, and was using them to buttress an authoritative claim based on my use of them. (Partly because I think the other scholar misrepresented what several of the documents said.) That’s not plagiarism, not at all, but it is an attempt, I think, to appear to have done some work that one has not done. Considering that at least some of the embodied authority a historian has is still (properly) based in the assurance that we’ve worked our way through a particular body of documents or texts ourselves, that simulation of intellectual labor seems to me to be a legitimate issue, if not “plagiarism” per se.
On the other hand, however, I scarcely want to encourage even more use of citations, considering how many scholarly works are already too densely choked with footnotes. What I’d suggest instead is that for broad interpretative arguments, scholars should have enough confidence to make those arguments without the safety net of invoking legitimating theorists or disciplinary canons. Citations to secondary work should be either direct acknowledgements of specific intellectual debts or supporting specifically evidentiary claims.
I don’t know how it happened, but an article that is reasonably straight on the factual details somehow slipped past the vigilant demand for error at the Weekly Standard.
It’s an article by James Kirchick about the transition between Rhodesia and Zimbabwe and the role of the Carter Administration.
Kirchick focuses on the 1979 creation of “Zimbabwe-Rhodesia”, an attempted compromise by Ian Smith’s government designed to head off unrestrained majority rule. Kirchick’s argument is that the Carter Administration bears the lion’s share of the responsibility for ignoring the potential of this settlement and forcing a hasty, careless transition to the elections that put Robert Mugabe in power.
There is certainly a lot of shading of the facts and some hammering of square pegs into round holes going on in the article, but I think the main body of his argument has some validity to it.
Mostly, I’d just ask that people who demand hard-nosed assessments of one situation not turn around and demonstrate blinkered innocence about another comparable circumstance. This is what I think Kirchick does in evaluating the two versions of “majority rule” in the transition to Zimbabwe.
I think he’s right that the Carter Administration just wanted to get the Rhodesian situation resolved in the most expedient way possible, and that for them, resolution meant Smith’s capitulation and a basic majority-rule election. Kirchick finds some choice naive quotes from Andrew Young about Mugabe, though to be fair, most observers were naive about Mugabe at the time. (Even Smith and his allies briefly praised Mugabe in the first two or three years after Mugabe was elected.) I don’t think that a secure or constructive transition in Rhodesia was very high on the Carter agenda: the only goal was to make the Rhodesians disappear. Kirchick also overstates to some extent the Carter Administration’s leadership role in that transition: the Callaghan government in the UK was equally important in pushing for it before Margaret Thatcher came into power.
So Kirchick faults people at the time for not taking a harder look at Mugabe, not questioning the way that the elections were handled, not pushing for a more extensive set of constitutional guarantees, and so on. Fair enough. The problem is that Kirchick holds up the 1979 elections that created “Zimbabwe-Rhodesia” as a perfectly adequate alternative without cluing his readers in on some of the problems with them.
Kirchick does summarize some of the ways that “Zimbabwe-Rhodesia” was something less than a model majoritarian democracy. Whites were to retain effectively permanent control over most of the key capacities of the government, and have a large enough permanent plurality in the legislature to block most initiatives undertaken by African lawmakers. The new arrangement specified that the government was not permitted to address land reform or economic redistribution in any respect. To a very significant extent, the capacity to govern was kept a white privilege, while Africans were allowed to be titular and symbolic representatives of the state.
Let’s get real here: that’s not majority rule or democracy. It wasn’t a lasting formula for resolving the legitimate aspirations of Africans for self-determination. I don’t even think it was a particularly promising formula for negotiating the way to some more satisfying transition: it looked cynical and half-hearted, and it was cynical and half-hearted. I can’t really blame Andrew Young, Cyrus Vance or any other involved party for viewing “Zimbabwe-Rhodesia” as a stillborn chimera. Kirchick seems to think that there could have been a Muzorewa government with these kinds of restrictions that survived with the backing of the United States and that indefinitely kept Mugabe and Nkomo out of the picture. I can’t see it no matter how I twist the counterfactual. In purely realist terms, Muzorewa’s government had little more military or economic capacity than the Smith government that preceded it, and therefore no greater ability to defeat entrenched insurgents. Mozambique and Tanzania weren’t going to give up their support for Mugabe, nor Zambia its support for Nkomo, regardless of what the Carter Administration did. Even the South Africans were ready to cut Rhodesia loose, seeing it as an indefensible liability. Mugabe and Nkomo had genuine support from many Africans in the country: you couldn’t have a final settlement that didn’t include them, any more than the apartheid regime in South Africa could have had a transition that excluded Mandela and the ANC. Western support couldn’t have kept Zimbabwe-Rhodesia alive any more than the nearly blank-check endorsement of the South African government by the Reagan Administration could keep back the political and economic rot that was forming underneath apartheid during the 1980s.
Kirchick doesn’t even go into some of the other depressing details. Yes, many Africans voted in the 1979 election, but virtually as many did not. Kirchick claims that’s entirely due to violent suppression of the vote by guerillas in 1979, but let’s be fair here. A great many Africans chose not to vote because they saw “Zimbabwe-Rhodesia” as a phony settlement, with good reason. In 1979, African voters weren’t given a full choice of possible candidates, since the two major nationalist parties (and some minor ones) were not on the ballot. You can’t praise some version of a plebiscite when it produces Muzorewa and then ignore or dismiss the fact that a majority of African voters did choose Mugabe in the 1980 election.
In 1979, many of the voters in more secure rural areas were more or less commanded to vote in particular ways by chiefs who were controlled to a significant extent by the Smith government and backed by the Rhodesian military. (Yes, in 1980, a similar situation pertained in the opposite direction in that the guerillas were allowed to intimidate rural voters near their assembly areas. But that’s the point: Kirchick can’t use one fact to ignore the 1980 result and then overlook a very similar issue only a year earlier.)
I agree with Kirchick that Sithole and Muzorewa were not simply “stooges”. In what I’m writing now, I’m arguing that even Chief Chirau, another ally of the Smith government, was more complex in his views and loyalties than was commonly thought at the time. But if Kirchick wants us to be hard-nosed retrospectively about Mugabe, there’s no reason to regard either Sithole or Muzorewa as saintly liberal democrats, either. Muzorewa was driven by untrammeled personal ambition for power, and Sithole certainly made some deals with various devils both before and after 1979.
Neither transitional moment had much promise for making a better postcolonial future. No one involved in either government really seemed to have a clue about how to restructure the state, which is fundamentally what was needed. Smith was an autocrat, too, and in “Zimbabwe-Rhodesia” he sought to retain most of his autocratic prerogatives. Perhaps he might have allowed Abel Muzorewa some share of those capacities over time, but that’s hardly a liberal democracy in the making. The transition that didn’t happen, the one that might have made a difference, wouldn’t have been a question of who got to retain control over the military, or who was in charge of repressing civil liberties, which is really all that the difference between 1979 and 1980 amounts to. The transition that might have mattered would have been the one that abolished the Central Intelligence Organisation, that guaranteed personal and political rights, that strengthened an independent judiciary that both Smith and Mugabe governments treated with disdain, that vastly shrank and restrained the ranks of the military and police forces, and so on.
However, while I’d like Kirchick to apply his hard-nosed assessment more evenly, I don’t disagree with him that enormous mistakes were made in both 1979 and 1980, and that the Carter Administration made a goodly share of those mistakes. History is the art of hindsight, and when you practice it, you’ve got to be honest with yourself. Anyone who knows the full political history of Robert Mugabe knows that there was plenty of reason to regard him as a destructive autocrat before 1980. If you gave him a pass then for political reasons, you ought to own up now. (I know I was certainly very credulous about Zimbabwe and Mugabe when I began my own involvement with southern Africa as an undergraduate involved in the anti-apartheid movement in the early 1980s.) Certainly there isn’t much reason to think of him as a “good leader gone bad” when you look at the early 1980s, considering that he wasted almost no time sending troops into southern Zimbabwe to murder civilians.
Stirring myself now from my traditional post-graduation coma + catching up on deferred life maintenance. One thing that caught my eye today as I did my bloggy rounds was a very nasty blogspat involving Acephalous.
As one of the comments on a thread at Crooked Timber about the whole thing observes, this is the kind of episode that makes it hard to explain blogs to non-blog readers, and worse, makes you feel like they’re going to get the wrong impression. Only it’s not the wrong impression, not exactly. One of the basic issues with online communication from its outset is that people with a very big need for attention and almost zero ethics have been able to provoke numerous responses when previously they would have been casually ignored. In this case, the author of Acephalous, a graduate student whose real name is readily available at his blog, has been compelled to deal with a pseudonymous attacker who emailed numerous people at his institution with spurious charges of racism.
Whoever the emailer is, I don’t think there’s much question but that he/she is a grade-A tool. I hope everyone who got those emails is smart enough to delete them efficiently, but so many academics are basically innocent of what goes on in the wider online world that I fear that even if they don’t pay attention to the stupidity of the emails themselves, they’re going to feel that somehow the author of Acephalous is doing something wrong to attract this kind of crankery to himself.
Any author, academic or otherwise, who addresses a wider public is going to draw that kind of attention. It’s just that in the old world of publishing, you likely never knew about it–you might occasionally get a weird piece of mail, but that was to you, and you’d throw it away after a chuckle or two. Your publishers similarly discarded whatever they got. The old-media outlets had reputation gates: a pseudonymous jerk with a stupid complaint wasn’t going to get a letter published in the New York Review of Books or anywhere else that mattered. The online world only changes this in one critical respect: it allows people to magnify and multiply weak, mean, stupid or cruel voices. Not just because of technology, but because as readers, we’re all still learning how to build filters for ourselves that once upon a time we relied upon cultural brokers to build for us.
The messy question in all this is how much the main authors of blogs are responsible for the communities which form around those blogs. I can’t help but think that the author of Jesus’ General bears a bit of responsibility for this episode, because that’s the table he sets in what he writes–though he’s been good enough to read the riot act to his community. There are other political blogs where I think the hosts and authors pretty much egg on a lot of nastiness in their comments sections and then blink in surprise if they’re called to account on it. But at the same time, blogs aren’t reducible to their communities, and if you want healthy pluralism among the people who respond, you actually do not want to be in there constantly chiding or banning those who don’t speak or think as you do.
I’ve already had a couple of recommendations to look at Angie’s List and Checkbook Magazine since posting earlier today about my homeowner blues. There have been some interesting discussions about Angie’s List in the past six months, particularly at Greg Knaddison’s blog.
Part of what makes that discussion interesting is that it peels back a hidden layer of the service economy in the online era. It’s clear that both small and large businesses have tried, with varying success, to manipulate online flows of information, while others are uneasy, angry or clueless about the potential impact of the online world on their livelihoods. I think that probably was the case in a pre-Internet era as well: some contractors knew how to manipulate the Yellow Pages and various social networks, others didn’t. However, the flows of information are now more rapid, more powerful in their effect, and more heterogeneous in their composition.
It’s also clear to me reading the comments at Knaddison that something like Angie’s List is one of those places where very different kinds of online users intersect, where people who have four blogs and make all their calls using Skype are reading and contributing right alongside users who only know the Internet as a place to read email, see videos of cats playing pianos on YouTube and use Angie’s List.
That’s a good thing as far as creating a strong body of reviews and a strong community of users goes, though I share some of Greg Knaddison’s doubts about the basic model of the service. However, there are some problems that are common to all review-collation sites, no matter their revenue model, including the kind of trust-based social networks that Knaddison advocates.
I’ve talked about these issues before here with regard to Rate My Professors, for example.
Even the best review sites are usually very thin in the kinds of information they provide. (Of course, sometimes a huge accumulation of information is not helpful in making consumer decisions, but is instead the sign of a protracted struggle over the object or service being rated. Look at the books with the most reviews on Amazon, for example.) Any review site can run into trouble with the incentive structure it provides for people to rank their service. With eBay, for example, I’ve found that making a negative comment on a seller (in my case, for failing to send an item) leads to enormous pressure from the seller to withdraw the comment, including the threat of a reciprocal attack on your reputation as a buyer.
In particular, getting users who have detailed knowledge of the subject of a review to contribute when they do not have an axe to grind is a real challenge. Epinions, I’ve noticed, has a few star “expert reviewers” who pop up in some durable-good categories, but not in sufficient density in many cases to create anything like meaningful information for decision-making.
What you want, it seems to me, is a lot of people who have balanced or mixed experiences with services and goods to contribute to a review site, rather than just people who are highly aggrieved and people who simply say “A +++++ great seller!” or some such. I don’t think anyone has yet figured out how to reliably get that kind of information into an aggregated central location, either online or offline.
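To make that a bit more concrete, here’s a minimal sketch in Python of one way an aggregator might at least detect the problem: measuring how much of a rating pool sits at the extremes. Everything here is invented purely for illustration (the function names, the 1-to-5 scale, the 0.7 cutoff); it’s not a description of any actual review site’s method.

```python
from collections import Counter

def polarization(ratings):
    """Fraction of ratings at the extremes (1 or 5) on a 1-5 scale.

    A pool dominated by the extremes is the pattern described above:
    mostly aggrieved one-star reviews and reflexive "A+++++" five-star
    praise, with few of the balanced middle reports that carry the
    most useful information.
    """
    if not ratings:
        return 0.0
    counts = Counter(ratings)
    return (counts[1] + counts[5]) / len(ratings)

def looks_informative(ratings, threshold=0.7):
    """Crude heuristic: treat a pool as informative only if fewer than
    `threshold` of its ratings sit at the extremes. The cutoff is
    arbitrary, chosen here only for illustration."""
    return polarization(ratings) < threshold

# Hypothetical examples: a polarized pool versus a balanced one.
polarized = [1, 1, 5, 5, 5, 5, 1, 5]
balanced = [3, 4, 2, 4, 3, 5, 3, 4]
print(polarization(polarized), looks_informative(polarized))  # 1.0 False
print(polarization(balanced), looks_informative(balanced))    # 0.125 True
```

Of course, even a crude signal like this only tells you that the balanced middle is missing from a given pool; it can’t conjure up the balanced reviewers whose absence is the real problem.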
Moreover, it seems to me that this kind of information source wouldn’t just be a guide for consumers, but also a way for a national economy that has become centered on service to think about how to improve the quality and productivity of service goods without looking to expensive consultancies and middlemen firms. Right now, there is almost no way for ordinary, non-aggrieved, constructively critical information to pass from customers to smaller service-oriented businesses. The first pest-control contractor we saw, for example, was from a fairly local company. The estimate was well-priced, but the exterminator was so brusque and hostile in his manner and so unwilling to answer questions that I couldn’t really form any opinion about the potential reliability of his service. It would be good for him and me if that information could pass between us anonymously, without me demanding anything from him or him feeling that the assessment could damage his commercial reputation.
I think the first group or company to figure out how to combine the best of bottom-up content-creation and some kind of authority-driven or editorial practice to create dense and high quality information about service providers and consumer goods is going to make a lot of money. In fact, that’s the kind of thing that newspapers should be looking at as a replacement for the revenues they used to make from classified ads. The kind of reputation capital they could lend to a really well-designed system might make a big difference. I don’t think what’s out there now has achieved the necessary mix of features, usability and informational critical mass, however: it won’t be enough to just partner up with Craig’s List, Angie’s List, Epinions or any existing service.