Trope Trove, or Colonial Fairy Tales

I’m teaching my “Image of Africa” course again next semester, and I’ve decided to work some more on a pedagogical strategy that has had pretty good results in several of my cultural history classes. Basically, I take a catalogue of tropes that are recognizable, persistent presences in post-1950s American or global popular culture and ask the students to do some detective work leading to a long research essay in which they track down the genealogies and, if possible, the origins of these discrete images: in what kinds of texts did they travel, how did they change over time, what sorts of audiences consumed these images at different moments, and so on.

For one example, in my “History of the Future” course, I had students look at the history of tropes like “the flying car” or “the total elimination of disease”. In those cases, I often had very specific works in mind that I could direct them to–for the flying car, “Metropolis” and “The Jetsons”, to cite two examples. Nevertheless, the students often managed to pleasantly surprise me by turning up all sorts of information that was unexpected, describing the diffusion or transmission of the trope in question in more detail than I would have thought possible, or showing that it was a much messier or more intricate image than I suspected.

For “The Image of Africa” class, most of the tropes I’ve thought of so far for inclusion on a suggested list of possible assignments worry me a little, as I suspect their deeper histories are going to be harder to track. I’ve got some good guesses about most of them, but fewer highly specific texts to direct the students towards: I think some of these images began their cultural life very early in European history, have more diffuse origins rooted in specific colonial encounters (some of them not originating in Africa, but applied to Africa at a later date), and so on.

One of the things I’m hoping to do in the course is put Said’s Orientalism in critical perspective from the outset, to shove it off its canonical perch a bit and put it in intellectual jeopardy, partially by demonstrating that “colonial discourse” was constructed less instrumentally, with more of a correspondence to actual dialogues and fractured understandings between Europeans and non-Europeans, than Said’s study and the many other works that use Orientalism as a methodological blueprint commonly assume or assert.

That places a pretty complicated burden on the students doing research, however. One reason that so many academic studies of “colonial discourse” and the images found within it tend to stay mired in imperial or metropolitan texts and contexts is that the methodological challenges involved in stretching beyond those contexts, out into colonial societies themselves, are entirely different. Kipling comes readily to hand if you’re in London or New York, but the correspondence of local colonial officials or missionaries is harder to find and process. So I know that the students are going to be pulled one way by some of the arguments made within the course itself, and another way by the practicalities of the research assignment.

With that in mind, I’m crafting a tentative list of tropes that I will hand out as “pre-approved” topics of study. In a very few cases, there’s a very focused, topically dedicated secondary literature available to help them, as in the case of images involving cannibalism or the image of Europeans being perceived as “gods” by non-Europeans. In other cases, the scholarship is sparse.

I’m interested in any additions that might be made to this list, and also any ideas you have about originary or primary sources that would help locate the image in time and space. These are all tropes that have appeared in some form in 20th Century representations of Africa, but some of them originated somewhere else–the image of the chief who offers his daughter to an unsuspecting European visitor, for example, I strongly suspect is rooted in encounters between Oceanic/Pacific societies and European travellers, and followed an arc from being fairly straightforward descriptions of encounter to being eroticized fantasies to being largely comedic, mostly the latter by the time this image pops up in representations of Africa. A lot of these are highly mobile to all sorts of colonial encounters, drifting easily from representations of Africa to other contexts.

Here’s my current draft list:

1) Hidden city/lost civilization deep in the jungle. Often civilization of whites or non-Africans.

2) Missionary/explorer in a cannibal cooking pot; general tropes of cannibalism.

3) Mysterious ritual that turns out to have been marriage to chief’s daughter

4) Superstitious bearer/guide

5) Evil witchdoctor

6) White man “gone native”/Tarzan figure

7) Kurtz-style descent into madness

8) Sexual Africans/repressed whites (especially missionaries)

9) Gold/treasure obsessed white explorer

10) Modern technology/material culture vs. “clueless” Africans (lots of subtropes–“this is my boomstick” images of powerful weapons; mirrors; beads and trinkets; scientific knowhow such as the ability to predict an eclipse)

11) Avuncular but clueless chief who is easily manipulated by white visitors

12) Wise, spiritual elder or “witchdoctor”

13) Africa as the abode of unspoiled, primal nature/wildlife

14) African warrior with generic iconography (spear, long shield, tall and thin)

15) Noble, kindly, elderly medical missionary ministering to Africans far from European settlements

16) Iconography of famine and starvation

[crossposted at Cliopatria]

Posted in Africa, Popular Culture | 14 Comments

Fisk’s List, Jargon, and Conceptual Brevity

Sharon Howard has a great response to Robert Fisk’s raving about academic jargon.

I can only echo it. I’m very friendly to the regular complaint that scholars in the humanities and the social sciences use unnecessary jargon and communicate poorly in their writing. I think it’s a fair charge in many instances. I don’t think that this lack of clarity even stems from anything as straightforward as science envy, or the desire to have a specialized language that keeps non-initiates at arm’s length. I think sometimes it’s just a way to add words and pages and simulated novelty to scholarship that would otherwise seem to be banally reiterating arguments made by others–a response to the pressure to produce. I think sometimes it’s also a form of simulation–a largely unconscious attempt to read like highly valued works, without understanding why or how those works were written in the manner that they were written.

But Fisk shows clearly where you can go too far, where this kind of anti-academic critique slides easily and comfortably into anti-intellectual sloth, the lazy man’s justification for certain forms of sustained ignorance.

Let’s take the first term that Howard notes is on Fisk’s hit list of ridiculous jargon: “matrilineal”. If you’re going to talk about kinship in human societies, surely a topic that anyone could concede to be empirically important, not some postmodernist theoretical conceit, you’re going to want to talk about the empirical phenomenon described by the word matrilineal. (A word, by the way, which is in the dictionary, my dear Mr. Fisk.) Now I could, when writing or speaking, say each and every time, “a line of descent from a female ancestor to a descendant which may govern property rights, inheritance, or mere transmission of lineage identity”. It’s a coherent, concrete, discrete phenomenon: why on earth shouldn’t I have a single word or term to describe it? Sure, I have to define it for people who haven’t encountered it before, but that’s what education is about.

Most of the words on Fisk’s list are comparably concrete. Some are more dispensable, or have drifted from their original use. “Problematize”, for example, is now just a big-word, somewhat jargony way to say that you want to complicate an issue. The original theoretical idea of a “problematic” in Marxist thought was much more specific, but even at its origins arguably not particularly necessary or discrete. At the very least you had to be pretty deep into a particular theoretical conversation to get much specific mileage out of it. So it’s not as if Fisk’s complaint is completely invalid. But he does a pretty good job of illustrating the dangers involved in lazy anti-intellectual or anti-academic rhetoric (dangers well in evidence elsewhere): legitimate gripes slide very easily into blanket ignorance, and the deep persistent value of scholarship gets tossed out the window along with its baroque exterior.

Posted in Academia | Comments Off on Fisk’s List, Jargon, and Conceptual Brevity

“Demonstrably False”

I don’t even want to read various weblogs this morning about the Newsweek “Qu’ran in the toilet” story.

I know what most of them are going to say, and I’m bored in advance with what I know. Various anti-war blogs are going to assume the story really is true, and various pro-war blogs are going to scream for the heads of Newsweek’s editors to be mounted on a pike and assume the story is utterly false.

Probably I’m just as predictable and boring. Because the story for me is not whether the report is true or false, but the fact that it is impossible to say confidently whether or not it is true or false. Not because it is never possible to make such determinations, but because the specific policies and mistakes of the Bush Administration in the conduct of the “war on terror” simultaneously have made it impossible to determine while also making it conceivable that it is true.

Pointing a finger at the Bush Administration is probably too narrow and partisan. The problem goes deeper, to a belief that rights accorded to criminal suspects, prisoners of war, possible terrorists, exist entirely for the benefit of those suspects. This is a deep theme in post-Miranda American popular culture, popping up both in light entertainments and fairly serious work.

Even some defenders of civil liberties respond too one-sidedly that those rights exist for the benefit of the innocent. That’s true, but they also exist for the benefit of the state itself. It works both ways. When the credibility of the state is challenged, a demonstrated scrupulous respect for rights and a public process of justice allow the state to maintain its integrity while also clarifying individual official responsibility for error and misconduct. Even civic, non-governmental institutions are far more trusted when they’re relatively transparent, or permit outside oversight. When they misstep, they’re much better off if they’ve been showing all the cards in their hand from the outset.

It’s entirely possible that the Newsweek report is false, that it came from a source who was repeating mere rumor, or was even being actively malicious in passing on the story. When the Pentagon replies that the story is demonstrably false, well, no, it’s not. And it’s their own damn fault that it’s not.

If the Gitmo prisoners had always been held openly within the sight of the world, allowed access to counsel, allowed open visits with the Red Cross, then the story might be demonstrably false. Credibility and trust are hard to build, easy to lose. This, at the end of the day, is why it mattered that the United States often held itself to a higher standard of international behavior, even when that made certain kinds of actions more difficult, less expedient. Sure, a rumor like this might have sprung up in any case. But if Gitmo had been in the light of day all the time, if Abu Ghraib had never happened, if there had been no torture memo, then it would have been much easier to stand up and say, “This is false” and have most people around the world in their heart of hearts believe that it was false. There are rumors that people disseminate because they have a sort of symbolic coherence to them (say, for example, the recurrent rumors in many parts of the world that mysterious individuals steal the bodily organs of locals for shipment to the developed world), which I do not think are necessarily believed as literal statements. Then there are rumors which are taken as a different kind of concretized truth, and incorporated into local knowledge as such. For that to happen widely, globally, the rumor has to be empirically possible at several levels–and who of us can say, with confidence, that this is not an empirically possible story of interrogation policy, of a system actually employed at Gitmo? If we hadn’t sought to protect what happens there with obsessive secrecy, if the executive branch had not tried to justify torture and the disavowal of all human rights of prisoners, I would say that it was unlikely, or possibly just one bad apple among the jailers. In this case, who can say? And if we cannot say, we are helpless as a fire burns in hearts around the world.

Transparency protects everyone, even the institution which opens itself to scrutiny. There are very few goals so urgent that they are worth forgoing that protection.

Posted in Politics | 1 Comment

And Then What Happened?

The pull of narrative is very strong. Once I’ve started a story, I need to know what happens next even when the story becomes quite bad or irritating. I stuck with Robert Jordan’s awful, awful fantasy series for six books, which amazes me considering just how wretched they are.

This is all by way of trying to explain why I’m going to see Revenge of the Sith Wednesday night at midnight. I could regale you with tales about how transfixed I was in 1977 and all that (I think I’ll make a separate entry on that at some point) but at this point, it’s just about compulsion. I need to see what happens.

I even watched The Phantom Menace again recently, with the ostensible justification that my daughter might like it, but sort of more on the premise that I might find I liked it better in retrospect. I didn’t. It’s actually worse than I remembered. My daughter only found two things interesting: the pod-race and Darth Maul. Good instincts, yes? Even the very good final lightsaber duel is screwed up by the flabby general stupidity of the whole plot and the intercutting with the other dumb battle scenes. One new thing that struck me is that, awful as Lucas’ dialogue, staging, pacing, plotting and everything else are, Natalie Portman is also actively, aggressively bad in the film. You get an overwhelming psychic sense that she was regretting ever having gotten within 10,000 miles of George Lucas. It’s almost as if she’s deliberately sabotaging whatever she can. Even Liam Neeson, who clearly hated the whole thing, puts some effort into it.

With that as the nadir, I suppose “Sith” could actually appear ok in contrast. We’ll see.

Posted in Popular Culture | 23 Comments

Built to Last

One of the few blog discussions that I followed pretty closely during my blog-vacation was the spiralling conversation that came out of initial reactions to The Valve. It drew a number of comments from me at various sites, and I don’t want to relive most of them, but one supplemental point that came out I thought was of continuing interest. George Williams criticized the idea of the “time-tested” work of literature, the proposition that the canon consists of works which achieve sufficient universality and depth of meaning that they continue to have value long after the historical moment in which they were initially published.

I agreed (and still agree) that this understanding of literary value is pretty silly. Demonstrably, empirically, it’s not how canons get built or literary value gets appraised over time. There’s more to it, though. Part of my response to that entry was that you shouldn’t just treat Enlightenment-derived concepts of the universal human subject as “mere” constructions: it’s possible that they’ve actually created universalities that are real for all that they are also historical.

One of the consequences of that might be that modern literary canons are built around ideas of temporality and universality which are not “real” but are not really false either–that we may be selecting for works that we read as having “timelessness” or works which seek to achieve or communicate that. That doesn’t mean that literary works survive the “test of time” naturally, in some kind of cultural selection process.

There’s a more modest idea that comes to me as a result of that proposition, maybe one that a broader range of literary and textual critics could accept. We talk about metaphor, rhyme, and so on as intrinsic technical attributes that literary or cultural works may possess. Is it possible to add “plasticity” to that list? That some works have, in largely technical terms, more interpretative fluidity, less temporal grounding in a specific time and place, less dense referentiality or embeddedness in existing genres and discourses? All literary and cultural work has a history and exists in history, and I think you always need to know a lot about context to read and understand anything well. But “Freakazoid”, let’s say, seems to me to be much less plastic than your average “Road Runner” cartoon.

I’m not saying that plasticity is always something better: a text with a great deal of it might be highly generic and formless. I’m very aware that readers misattribute plasticity to some texts by reading them in fairly superficial or ignorant ways, by overlooking material which is densely alien or unfamiliar in its historicity. I’m aware that talking about plasticity and fixed referentiality could easily become a kind of deferral of the modernism/postmodernism divide. But it seems to me that there really are texts which travel farther in space and time than others, and that this is partly enabled by something intrinsic to the texts themselves.

Posted in Academia, Books, Popular Culture | 3 Comments

A Case of One

I’m planning to watch “Alien Planet” on the Discovery Channel tomorrow: it looks like fun.

But I was struck by a quoted opinion from the show that appeared in today’s New York Times review: should we ever encounter an alien intelligence, it will almost certainly have evolved from a predatory species.

I’m not even entirely sure that’s an accurate gloss on omnivorous humanity’s evolutionary roots, but it seems to me that this is a case of unwise generalization from a single case. Not that we have much choice, and not that this sort of speculation is anything more than idle, but still. Scientists who study octopi are still trying to figure out why a creature with as short a lifespan as an octopus is as intelligent as it is: certain paradigmatic predictions of intelligence or learning ability suggested before such study that an octopus is exactly the kind of organism that shouldn’t be as intelligent as it is.

Extrasolar planets have pretty well upended old theories about planetary formation: we’re finding planets in places that we didn’t think they should be at sizes we didn’t think they’d have. It’s looking more and more as if our solar system may be in various ways odd rather than typical. I would expect extraterrestrial life to be the same, if I were going to make a prediction. Even if you assume that the way life works elsewhere will broadly be the same (life getting energy from passive environmental sources, or life getting energy from consuming other life), it seems like a failure of the speculative imagination to assume that only life forms that consume mobile, evasive life forms would “need” intelligence. It almost seems that’s part of the problem: a particular way of thinking about evolution that uses words like “needed” or “necessary”, when it’s possible that intelligence in general and consciousness in specific are epiphenomenal accidents of other traits being actively selected for.

Posted in Miscellany | 3 Comments

Pandora’s Plan

The low rumble of discussion over the last few years about the process of assessing the needs in various departments has finally broken out into an invitation to faculty to send in their own thoughts about future needs to the provost. As any academic knows, there’s only one process that can create more ill-will than full-scale academic planning of this kind, and that’s planning for some kind of institutional contraction. It’s why many faculty and administrators at both large and small institutions often prefer to carry out such planning in a more ad hoc manner. It allows everyone to come away from any given decision about any given faculty line with their own assumptions and visions of the future intact.

A process of planning, on the other hand, requires either one coherent vision winning out or the construction of some kind of chimeric vision and a Rube Goldberg process for making choices–the latter being the more common result. You either end up with people feeling like losers–and sometimes those people turn to long-term guerilla warfare against the new plan as a result–or with some unwieldy and programmatically incompatible process that will just serve as a fig leaf for future ad hoc decision-making.

I’m being overly gloomy: it’s possible, especially at a small, relatively consensus-oriented and friendly institution like Swarthmore, for almost everyone to take the high road and end up feeling comfortable and enthusiastic about the results.

My temptation is immediately to climb up on my own favored soapbox, because I think there’s really only one sensible way to think about institutional needs. Before I do that, it’s worth exploring the alternative underlying systems one might use to coherently assess what a liberal arts faculty needs to have in the future. (Some kind of system is how you avoid pure political brutalism, in which a few strong departments mobilize and get what they want first and foremost–at least on paper.)

System 1: Herd instinct. Find out what the most commonly represented fields at peer institutions are, find out which ones we don’t have, and let those absences guide you. The Swarthmore Department of History has no historian of England, for example, but it’s an extremely common field to have even in small institutions.

Problems: defers the explanation of why those fields are important to other institutions. Still leaves you with a huge set of unrepresented fields to choose from, and with no winnowing procedure save relative degree of commonality at other institutions.

System 2: Consumer demand. Look at which departments have the most majors, or the most challenging ratios of faculty to majors, and service their needs first and foremost. Or look at which fields prospective students most seem to want, or find most attractive. Or look at which specific fields of study are drawing the most demand and get more faculty in those specific areas.

Problems: Easy for both faculty and administrators to manipulate demand. Considering that you hire a faculty member to a tenure line potentially for a lifetime, it’s dangerous to effectively make a 30-40 year commitment around a short-term spike in demand. Students tend to demand particular majors on false premises (e.g., the belief that an economics major is the only way to move on to an MBA). If taken seriously as a premise, this should also lead to reductions or eliminations of unpopular programs and subjects even when there is strong consensus on the part of faculty that those programs are intellectually important to a well-balanced curriculum. (Classics, for example.)

System 3: Urgency and Utility. Collectively decide which unrepresented fields of specialization the faculty, administration and students recognize as having a kind of “perfect storm” convergence of general importance, newly urgent necessity, and significant practical and intellectual importance. This is pretty much what Swarthmore uses as a principle governing both ad hoc and long-term planning for new positions or redefinitions of old ones, with the addition that we also regard diversification of the faculty as important.

Problems: I think this is ok, but I do think that you get some pretty chimeric end results at times–significantly divergent metrics of urgency and need being used to justify a range of planning decisions. That’s largely because it’s impossible to see how we or anyone else could have a meaningfully shared yardstick that would allow us to talk about the relative weight of a new position in Engineering on one hand versus a new position in Middle Eastern politics on the other. At some point, you tend to get appeals to “demand-side” issues thrown into the pot to try and slide past the fact that there are going to be relatively incommensurable ideas on the table about what’s important. The end result tends to reinforce disciplinarity in the end–divisions tend to say, “We’ll give you one if you give us one”, and unusual or dissident positions within any given academic division tend to get pushed aside. Also, again, taken seriously, this kind of process should lead not just to addition but reallocation of positions–that you look at fields whose importance has declined and try to move the resources invested in them elsewhere.

System 4: Money. If a department or division can bring in the money for a position (either from grants or alumni), let them have it. Otherwise, let them eat the crumbs from the table. Not really applicable here–but this is effectively how some large research universities do academic planning.

Problems: This is where academic politics get brutal and unrestrained–a race to capture any and all sources of funding, and a lot of things of value inevitably fall by the wayside in the process.

My own soapbox preference isn’t any of these. I think my colleagues get tired of hearing it from me, and I know I’m bloody predictable, but this is where a small liberal arts college ought to be doing something completely different. I think every one of those other systems, even attempts to come to a community consensus about a small number of urgent needs, consigns a small liberal arts college to being a bad little university. Because each of those processes except for the last one takes you into planning by thinking about all the things that you lack: they end up being a catalogue of absences.

I much prefer to think that a place like this is the size that it is on purpose, that its size is a strength that is not just pedagogical but intellectual. What do we need that we don’t have? Not more specialists in various fields, however urgent those specializations might be. We need people who help to knit us together, people who connect specializations, people who create connections as an intrinsic result of the kinds of research and teaching they do. “Science studies” or science policy scholars. Big-picture specialists on human evolution, population genetics, sociobiology (of the subtle kind). Broadly humanistic intellectuals whose specific areas of interest range over philosophy, literary criticism, history, linguistics, and so on. Cognitive scientists. Experimental economists. Fields of research that are interstitial and connective by their nature, pursued by individuals whose own projections of their development are towards generalization and broadening.

I don’t think you can just name such fields as specializations and be done with it, either. If you plan for such positions, you had better plant them in fertile soil. If you just dump such faculty in established departments, chances are some of them are going to hit a lot of hostility from people who have much more disciplinary ideas about specialization. You really have to plan to go in this direction, have an overall design, not just tell two departments to get together and build their own Frankenstein monster out of bits and pieces. That kind of process tends to end up with the resulting positions getting re-dismembered at tenure time.

I think I’m going to have to wear one of those shock collars people put on barking dogs to keep myself from saying this kind of stuff too much once the fall rolls around. Most everyone round these parts knows I think this way: the trick is going to be to persuade some of them to think this way too.

Posted in Academia | 6 Comments

The New Thing

So.

Here I am more than a month later with a WordPress blog with comments and all that new-fangled stuff. What took me so long? Well, partially I found myself at the time I went on hiatus just plain crazy busy. Which I still am.

Partially I was also feeling, for reasons I don’t clearly recall any longer, very irritated with blogging. I think there were some discussions going on about a month ago which annoyed me, and I’ve long since learned that writing online when you’re feeling really irascible is a bad idea. I think at least some of it was various entries about Andrea Dworkin’s death. I think I needed a bit of a blog vacation.

But a lot of it was simply that I wanted to learn CSS and do my own WordPress theme and not feel just completely clueless about what was going on underneath the hood. Many, many people keep telling me this is easy. Well, I don’t think it is. Yes, just setting up a plain old Kubrick-theme blog is not so hard, though I suspect I’m going to find out I screwed up something even in the basic administrative set-up. But doing a theme of your own? Holy crow. Yes, I readily understand conceptually all the many advantages to CSS but it is, at least for me, vastly more difficult to use than clunky old HTML. I can’t even figure out how to put images up anywhere here, let alone really customize it. There’s a modest local difficulty here also with my gaining access to the shell directories where this thing lives that’s probably soluble once we get past the chaos of the semester.
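(For what it’s worth, the irreducible core of a WordPress theme is smaller than it feels: a style.css file whose opening comment block declares the theme’s name, plus template files like index.php that WordPress fills in with posts. A sketch of such a stylesheet, with the theme name and the #content selector invented for illustration rather than taken from any real theme:)

```css
/*
Theme Name: Plain Jane
Description: Hypothetical minimal theme, used here only as an illustration.
Version: 0.1
*/

/* Style the main content column; the #content id is an assumption
   about the markup the theme's templates would emit. */
#content {
    width: 30em;
    margin: 0 auto;   /* centers the column horizontally */
    line-height: 1.5;
}

/* Images dropped into posts with a plain HTML <img> tag */
#content img {
    max-width: 100%;  /* keeps images from overflowing the column */
}
```

(Putting an image up is then just ordinary HTML–upload the file somewhere web-accessible and reference it with an img tag in the post body; the stylesheet only constrains how it displays.)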

So I apologize for the plain-Jane look, but I wanted to get back online and blogging again. It’ll be interesting finally having comments: I’m a bit nervous about it, as one of the main motivations I took into blogging was to get away from the call-and-response rhythms of asynchronous conversation in my own online writing.

Posted in Blogging | 13 Comments