One-A-Day: Felipe Fernandez-Armesto, Pathfinders: A Global History of Exploration

I’m going to start trying again to write comments on the reading I’ve been doing over the last six months. It hasn’t been quite one-a-day, but there are a lot of books and articles in my backlog to talk about.

Pathfinders is like other works of global history by Fernandez-Armesto: a readable, pleasant synthesis that doesn’t add much to a historian’s analytic toolkit, but one that puts narratives and information that have often been told in a markedly Eurocentric way into a broader comparative perspective.

I’m going to focus on one specific thing I noticed in this book that I think speaks to a wider problem in the writing of global histories of this kind. Fernandez-Armesto works very hard to draw in examples and cases from most regions of the world, particularly when he’s talking about premodern exploration. When he gets to 1500, he serves up a modest amount of Iberiocentrism, but he’s honest about that, and rather charming. Plus it’s hard to argue against the centrality of Spain and Portugal in maritime exploration from 1500 to 1650.

However, this drive to globalize history often draws world historians into a complicated tension with area studies specialists. I’ve been very clear about my dissatisfaction with the tendency towards intellectual parochialism among Africanists, but some of that tendency is rooted in some genuinely important priorities.

Here’s an example drawn from Pathfinders. Early in the book, Fernandez-Armesto is talking about early cartographic or navigational practices in human societies, and reasonably concludes that there must have been some practices employed in some early premodern societies beyond dumb luck. Absolutely: it’s completely fair to infer that premodern human societies may have had all variety of interesting mnemonic and representational techniques for remembering or communicating how to go from here to there, and some of these may be of profound antiquity.

In this discussion, he uses a few African examples that unfortunately illustrate the intellectual predicament of Africanist historians. The way he uses Africa is the way many world or comparative historians do: by citing recently observed practices that suggest what was likely done in antiquity by all humans. This is the conventional logic of using 19th or early 20th Century Africa as if it were a window into prehistory, a repository of unchanged tradition. The examples Fernandez-Armesto uses specifically in this case are Marcel Griaule’s work on Dogon cosmology with its map-like constructions, MacGaffey’s work on Kongo cosmographs, and scholarship on 20th Century Luba chiefship ceremonies which include geographical knowledge of sacred and ritual sites.

The problem is that all of these examples, Africanist scholars know, are anything but unchanged windows into the distant human past: they’re practices and ideas that have almost certainly changed considerably over time AND they were described by and in some cases actively reinvented by Western officials and scholars with very bounded ideas and preconceptions about African history and society. There’s a considerable mini-literature on Griaule and his intellectual history that makes it impossible to take anything he wrote or said about Dogon cosmology at face value. MacGaffey has noted that cosmographs as he describes them strike him as both recent and highly dynamic, changeable inventions.

These examples tell you about as much about the distant human past as saying, “Contemporary Americans often give directions to important locations by citing commercial landmarks rather than formal maps, e.g., ‘Take a left at the Kentucky Fried Chicken; if you pass the Wal-Mart you’ve gone too far.’” That does tell you something, in a universal, general sense, about how people might have navigated in early societies, but it doesn’t tell you any specifics. Fernandez-Armesto regards recent African examples as directly suggestive of practice in antiquity (e.g., not just general illustrations, but “Maybe here’s how the Dogon did it back then” or “The Luba probably valued this kind of geographical knowledge back in prehistory and remembered it through chiefly investiture and oral tradition”). Africanists know that there weren’t any Luba as such in the early premodern period that Fernandez-Armesto is working with early in his book, any more than there were suburban Long Islanders with lawns and backyard swing sets in Neolithic times.

But when the Africanist gets all snippy about this problem, insisting in almost cliched terms that African societies were also dynamic and historical and changing, the world historian or comparativist says eagerly, “Great, so tell me about what West African societies were like around 200 AD or so, especially any details on their cartographic or geographical practices, so I can give some examples that compare to Rome, China, Arabia, and so on”. And here the Africanist mostly has to say, “Sorry, don’t know. Pretty much can’t know, not at that level of specificity.” Or even more sheepishly, “Well, I can tell you what a handful of Mediterranean, Middle Eastern and South Asian sources say about those places, but a lot of that is little more than thrice-told stories from merchants.” Plus the Africanist can make use of some archaeology and linguistics, but as Fernandez-Armesto points out, there are a great many imaginable geographic or cartographic practices in prehistory or antiquity which may have left no material or artifactual record behind.

So now the world historian has to ask, “So, how about it, guys? Should I just leave Africa out of what I’m talking about unless I have highly specific, properly historicized examples, which means it’s going to be left out of a lot of my account, or should I use whatever I can find, without a lot of methodological song-and-dance each time I do so, because that’s going to mess with the readability and coherence of my synthesis?”

Africanists often seem to reply, “Either one, you choose, and we’ll complain about it either way”. The problem is that the complaint in either case has some validity to it, but the global historian’s choice either way is also fairly valid. I don’t have an easy answer. On some subjects, there may well be a chronologically appropriate comparison for a global historian to use that comes from Africa or Mesoamerica or Oceania. On many others, the details and specificity in premodern world history are going to come from literate, record-keeping societies even when we can be fairly certain that there were examples of the same practices or institutions or ideas in non-literate societies elsewhere in the world.

Posted in Academia, Africa, Books, The Mixed-Up Bookshelves | 4 Comments

Information about Information Technology

I got tagged to fill out a meme about five “guilty pleasures” on my iPod. My iPod is a pretty chaotic little beast and I don’t really use playlists much, so to some extent each time I listen to it, and that’s not all that often, it’s a pretty different experience. Ok, so I have Enya on there, how’s that? Also Modern English “I Melt With You”.

It just so happened that I was thinking about my iPod when this meme wandered across my doorstep, however. And I was not thinking nice things. The iPod was a gift, and I’ve enjoyed having it. I doubt I will get another one when this one (inevitably) dies, though a small digital music player like a Nano seems like a good idea. What really turned me against Apple’s version of the technology, and made me wonder once again why on earth Apple has a reputation for higher ethical or design standards than the PC-Microsoft world, was trying to move my iTunes collection from an old computer to a new desktop at home.

I buy almost nothing from iTunes: the DRM in use there strikes me as pure poison, a lousy deal. I do have a big collection of converted music from our own CDs. I’ve never downloaded music from any other legal or illegal source in my life: it’s just not my thing. Before we had our iPods, I’d digitized about 25% of our CDs in mp3 format. I was already a bit annoyed when iTunes wouldn’t recognize some of those digitizations and I had to do it again.

However, I basically viewed the iPod as a small hard drive that was built to work fairly well with a particular software ensemble. Because that’s what it is. I’m not fussy about music formats or sound quality, and not especially knowledgeable about music as a cultural system overall. It just isn’t where my intellectual interests lie. So most of the debates about the quality of the music played by iPods didn’t draw me in. Because the device was a gift, I also didn’t think that carefully about it as a technology.

So imagine my surprise at discovering something that most iPod owners have long since known: you can’t easily copy your whole iTunes directory onto a new computer from the iPod itself, even though technologically, that should be as easy as moving files of any kind around via flash memory. At first I just figured that I was being stupid, but after being stymied for a while I went looking and found out the ghastly truth.

I grasp Apple’s reasoning for crippling its own technology: it was to reassure the music industry that users would not spread their personal music collections around to other computers like so many Johnny Appleseeds. As with so many alleged safeguards around digital culture, however, this decision largely just inconveniences legitimate users pursuing legitimate purposes. It’s not as if you can’t get around this bit of stupidity: there are third-party programs that will do it, but I just went into the totally freakshow file structure that iTunes employs and manually copied all my files over to the new desktop. Of course, that meant that a job that should have taken 15 minutes with no hassle took me considerably longer.
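For anyone facing the same chore, the manual workaround is conceptually nothing more than a recursive file copy. Here’s a minimal sketch of the sort of thing I mean, in Python; the mount point is a hypothetical placeholder, and I’m assuming (based on what I’ve read rather than anything Apple documents for users) that the audio sits under a hidden iPod_Control/Music folder full of deliberately obfuscated filenames that a music player will later sort out by reading the embedded tags.

```python
#!/usr/bin/env python3
"""Rough sketch: copy music off an iPod (or any mounted folder) by hand.
The paths are hypothetical placeholders; point SOURCE at wherever the
device or the old machine's music folder actually shows up."""
import shutil
from pathlib import Path

SOURCE = Path("/Volumes/IPOD/iPod_Control/Music")   # hypothetical mount point
DEST = Path.home() / "Music" / "recovered"
AUDIO_EXTENSIONS = {".mp3", ".m4a", ".aac", ".wav"}

def copy_music(src_root: Path, dest_root: Path) -> int:
    """Recursively copy every audio file under src_root into dest_root."""
    dest_root.mkdir(parents=True, exist_ok=True)
    copied = 0
    for path in src_root.rglob("*"):
        if path.is_file() and path.suffix.lower() in AUDIO_EXTENSIONS:
            # Obfuscated names can collide across subfolders, so prefix a counter.
            shutil.copy2(path, dest_root / f"{copied:05d}_{path.name}")
            copied += 1
    return copied

if __name__ == "__main__":
    print(f"Copied {copy_music(SOURCE, DEST)} files to {DEST}")
```

The library itself then has to be rebuilt by re-importing the copied files, which is exactly the extra hassle I’m complaining about.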

The larger lesson here, however, is once again that people who view information technology with ambivalence, both inside and outside the academy, are fairly right to do so. When you’re adopting something new, it’s often very hard to get clear information about the real work processes, difficulties and shortcomings involved with the technology. In some cases, that’s because nobody knows yet what those will be because no one has worked with the technology long enough. In other cases, it’s because there is a lethal combination of enthusiasts and salespeople working to obscure the long-term costs and issues. And in still other cases, it’s because you adopt the technology accidentally or on the fly, without a lot of thought, and it seems simple enough on initial use.

This doesn’t mean that you shouldn’t make assertive use of new software applications, new hardware, and so on. A lot of the precautionary knowledge about both common and less-used technologies, however, is buried at least several layers deep inside specialist conversations or obscured by long-running partisan battles over platforms, companies, and design philosophies. We find it only when we suddenly have need of it, and by then, it’s sometimes too late: data is locked into a proprietary format, information has been lost, labor time has been wasted on an application that’s about to be abandoned.

Posted in Information Technology and Information Literacy | 4 Comments

Software Bleg

So. I’m thinking of seriously rebooting my rather shambolic note-taking, research and organizing practices, and I want to think consciously for a bit about what kinds of software or IT solutions out there fit my habits as well as some of my new aspirations best.

The first criterion for me really has to be forward compatibility: relatively mainstream, well-supported products. I’ve got a bunch of dissertation notes that I took in a custom-created HyperCard database, so let’s just say I learned a hard lesson about not venturing off the pathway too much.

What I’m really looking for is:

1. The best way for me to keep a record of research materials consulted during research that can then be exported as citations in a variety of publication formats, while also keeping a clear record of when and where I consulted these materials.

2. The best way for me to keep tightly connected notes on those materials where I can link to the citation record but can also keep notes of varying length and detail, including direct quotations with page numbers where it is clear to me that those notes are quotations.

3. The best way for me to keep a record of books, articles and other materials I’ve read which are not directly connected to research, with some kind of searchable tagging system that corresponds to some of the subject categories I have in my own head. This isn’t important for a specific research project, because everything I’m looking at there, even speculatively, is by definition connected to that project. But when I’m reading more broadly, or in my field of specialization, I do want to have some way to tag or mark my thinking about where a resource fits in my own head (is it good for a course? does it help me think about a problem I’m concerned with? does it link distantly to a research project?). A rough sketch of the kind of structure I’m imagining for items 1 through 3 appears after this list.

4. The best way for me to take notes connected to this more speculative kind of reading. This is where I’m a bit frustrated right now. I just don’t take these kinds of notes any longer, but I used to. I have a filing cabinet full of articles I read in the late 1980s with no searchable index, just tabs on files, and notebooks full of thoughts about these articles which I’m sure I will never put into electronic form. If I’m going to start taking these kinds of notes again, I want them to last and be useful.

5. The best way to log incoming work like requests for recommendations, connected to a calendar or schedule.

6. The best digital “to-do” list, linked to a calendar. I want room both for ongoing reminders that need to be checked off each day (“write an hour a day on X project”) and for specific projects or meetings that have deadlines.
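To make items 1 through 3 a little more concrete, here’s a back-of-the-envelope sketch of the kind of data model I have in mind, done with nothing but Python’s standard sqlite3 module. The table and column names are invented for illustration; this is a thought experiment about structure, not a recommendation to roll your own tool.

```python
#!/usr/bin/env python3
"""Sketch of a citation/notes/tags structure (items 1-3 above)."""
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS sources (
    id INTEGER PRIMARY KEY,
    citation TEXT NOT NULL,          -- full bibliographic record, exportable later
    consulted_on TEXT,               -- when I looked at it (ISO date)
    consulted_where TEXT             -- archive, library, URL, etc.
);
CREATE TABLE IF NOT EXISTS notes (
    id INTEGER PRIMARY KEY,
    source_id INTEGER REFERENCES sources(id),
    body TEXT NOT NULL,
    is_quotation INTEGER DEFAULT 0,  -- 1 = verbatim quotation
    pages TEXT                       -- page numbers for quotations
);
CREATE TABLE IF NOT EXISTS tags (
    id INTEGER PRIMARY KEY,
    label TEXT UNIQUE NOT NULL       -- 'good for a course', 'project X', etc.
);
CREATE TABLE IF NOT EXISTS note_tags (
    note_id INTEGER REFERENCES notes(id),
    tag_id INTEGER REFERENCES tags(id),
    PRIMARY KEY (note_id, tag_id)
);
"""

def open_db(path: str = "research_notes.db") -> sqlite3.Connection:
    """Open (or create) the notes database and make sure the tables exist."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn

def notes_tagged(conn: sqlite3.Connection, label: str):
    """Item 3: pull every note carrying a given tag, with its citation."""
    return conn.execute(
        """SELECT notes.body, sources.citation
           FROM notes
           JOIN note_tags ON note_tags.note_id = notes.id
           JOIN tags ON tags.id = note_tags.tag_id
           LEFT JOIN sources ON sources.id = notes.source_id
           WHERE tags.label = ?""",
        (label,),
    ).fetchall()

if __name__ == "__main__":
    db = open_db()
    print(notes_tagged(db, "good for a course"))  # empty until notes are added
```

In practice, per the forward-compatibility criterion above, I’d much rather get this structure from a mainstream, well-supported product; the sketch is just to pin down what I’m asking for.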

Posted in Academia, Information Technology and Information Literacy | 17 Comments

It’s All Part of the Plan

So. The Dark Knight? Great. Though my wife hated it, I think maybe because she expects her superheroes to triumph in the end and thus found the film too psychologically relentless and unpleasant.

—————–

[spoilers to follow]

If you’re deeply familiar with the character, you don’t have the same expectation: Batman is a romantic enemy of the way things are like Zorro is an enemy of the established order, only unhappy and doomed never to triumph, which is to say not at all like Zorro. Because Batman lives in a world where the way things are is not a matter of a few mustache-twirling oppressors.

Some of my favorite Batman stories in recent years have at least tried to raise the question of how Batman, a driven genius, can understand so much and so little all at once about how the world is and how it might be, how he can delude himself that the best thing he can do to make the world better is dress up like a bat-ninja and beat the shit out of criminals. Sam Hamm did a three-parter a long while back that I really liked on this theme, and more recently, Grant Morrison nodded in this direction as well. Nolan is really savvy to this aspect of the character.

Batman Begins took the now-standard twist in the mythos of sending Bruce Wayne around the world to learn the skills that make him Batman and added something crucial: that he himself becomes a criminal for a time, to learn what it’s like. This is clever for the character as a whole, and it grounds him more complicatedly in the fictional subgenre of “rich man who descends into poverty and suffering to discover what it’s like, emerging a crusader”. But it puts new pressure on the Nolanverse version of Batman: how can he pretend for a minute that being Batman is anything more than a weird bit of theater? Answer, as of The Dark Knight: he knows it’s not. He’s got a plan of sorts: use the theatricality of Batman to shock Gotham City back towards being a better place, and then hand it off to good citizens and officials. The film does a great job of arguing that this shock-and-awe theory of personal action has unintended consequences in the person of the Joker, and that Batman is left helplessly reacting to circumstances spiraling out of his or anyone’s control. That makes the film sound more crudely allegorical and simple in its morality than it is.

Like a lot of fanboys, though, I’m really left wondering where Nolan can go with this version of Batman in a third film. You could imagine something like the last two issues of Miller’s Batman: Year One, with Batman pitted largely against the police. But that doesn’t seem enough. Maybe if the mayor or the political establishment was someone who appeared like a noble, charismatic crusader but was actually completely corrupt. That could be the only way to make the Penguin work as a meaningful character in Nolan’s version of the mythos, I suppose. The more conventionally super-powered characters like Poison Ivy or Mr. Freeze don’t really work. The Riddler would have to be a low-rent Joker in the context of Nolan’s version. Catwoman has been done about as well as she can be in Burton’s rendition, I think.

So here’s an idea: Professor Hugo Strange, specifically the version that appeared in Moench and Gulacy’s storyline “Prey” in Legends of the Dark Knight. Strange is a psychologist with hang-ups of his own who nevertheless brilliantly deduces that Batman must be Bruce Wayne and proceeds to try and screw with Bruce Wayne’s head while impersonating Batman himself during various criminal acts. This puts the focus of a third film back on Batman as a character and Bale as an actor: in The Dark Knight, character and actor are almost wholly reactive, reduced to being the straight man in an endlessly nightmarish sick joke.

Posted in Popular Culture | 6 Comments

The Thing That Matters

The planned timetable for American withdrawal from Iraq doesn’t tell you much about either Presidential candidate, though I grant you there’s a meaningful difference between McCain’s declaration that we could stay for a hundred years and Obama planning to leave by 2010. If it comes down to a situation where the current holders of political power in Iraq genuinely want us to leave (as opposed to posturing that they want us to leave in advance of their own elections), then I think that’s going to trump the plans of either candidate in any event.

What’s really important is what the candidates say about what went wrong in the first place. What matters is not whether they admit or agree that something went wrong, but what their concrete plans are for repairing the political and executive processes that shaped the decision to go to war and the execution of national security policy through this Administration. Obama could say a lot more about how he specifically intends to govern in this area, sketching process as well as policy, because process is policy. McCain, on the other hand, has made it very clear that the process of decision-making by Cheney and Bush is more or less how he will conduct business: that while mistakes may have been made, each of them stands alone as a specific miscalculation rather than as the systemic consequence of a philosophy of deliberative process and executive authority.

Withdrawal is the wrong issue for opponents of the war to worry about. The real question to the candidates is: how will you deliberate and decide? Will you reverse the flow of power and prerogative to the executive?

Posted in Politics | 5 Comments

The Revolution of Letting Go

I’m a little late in my remarks on Nelson Mandela’s 90th birthday, but it’s the thought that counts.

It has been fascinating to watch Mandela’s name become a synonym for the best combination of political power and ethical commitment, the way that “Einstein” signifies science or “Gandhi” signifies righteous protest. That apotheosis tends over time to smooth away the humanity and particular history of the person: the fact that Einstein was wrong about some key issues, or that he was not just a lovably kooky thinker, gets lost; the historical peculiarity of Gandhi’s syncretism and the tactical problems with his formulation of nationalism get lost.

The complexity and humanity of Mandela’s life, which Mandela himself has always insistently emphasized as a key part of his political strategy, is at equal risk, and in this case, it’s vital that the world and South Africa not excise that from the way we remember him.

By his own account, Mandela’s time in the ANC Youth League located him within the standard histories and traditions of African nationalism at the time of its greatest flourishing, and by his own account, had that been the alpha and omega of his political experience, he would not have been the leader that he became in the 1990s. His time in prison forced him to accept that the achievement of freedom was a long game rather than a hasty political settlement. It required a steel-edged pragmatism both about negotiating with his oppressors (whether prison authorities or the apartheid state) and about accommodating ideological divergence among his allies. The prison years gave depth and reality to Mandela’s courtroom stand against tyranny both white and black, to his understanding that a free society would take more than changing the race of the prison guards or the ruling bureaucrats.

Before 1990, Mandela was no more than a symbol except to those who actually came to know him on Robben Island. He could have walked out of prison and stumbled quickly on feet of clay. If Mandela has become the patron saint of the ethical use of political power, it is because of what he did when he came out of jail.

Two things especially stand out for me. First, that he agreed to testify on the stand about the Browde Commission of Inquiry’s work on rugby when he was called to do so by a judge, even though the case was motivated by political mischief. This was the real revolution: establishing that the executive in South Africa is governed by the law, rather than floating immaculately above it. All free societies must constantly revisit that moment. Every new leadership carries within it the threat to cut loose from constraints on its power, whether we’re talking about South Africa or the United States. But Mandela established this precedent in South Africa under circumstances where almost no one would have criticized him for failing to do so.

More important, perhaps, is that Mandela left power eagerly at the end of his appointed time, again under circumstances where hardly anyone would have stood against him had he chosen to ask for more. He did not merely leave office: he let go of the political reins entirely. It’s equally important that he has since exercised his moral authority carefully and selectively, entirely within the bounds of civil society, given his successor’s inability to value the independence of civil society.

Mandela is a genuinely extraordinary person, and the world is right to regard him as such. It’s important that in the rush to sanctify him we do not forget that his most admirable achievements ought to be a burr and irritant to his successors rather than a balm, a caution against the arrogance of power everywhere. Much as I think the leadership of ZANU-PF in Zimbabwe has demonstrated that at least some postcolonial failures in Africa rest on the moral, philosophical, and personal failures of a small handful of individuals, Mandela demonstrates the opposite, that in some cases, it’s possible to have done better because of individual determination and commitment. It doesn’t discount the importance of social history to recognize that occasionally, the Great Person Theory of History has some real analytical force to it.

Posted in Africa, Politics | 9 Comments

Some Cattle to Go With the Hat

The rising cost of petroleum is doing more work on behalf of environmental sustainability than many campaigns designed to promote virtuously green or sustainable lifestyles. When sustainability means substantially lowered costs, or consists of a simple alignment between two kinds of repeated labor or expense, it’s an easy sell.

This takes an honest accounting of costs, however. If you’re trying to sell a major institution on a more sustainable energy infrastructure, such as geothermal cooling and heating (aka ground-source heat pumps), you’ve got to be upfront about the complexity and cost of installation and maintenance. With a lot of environmental technologies, the people and institutions that really see the benefits are often the late adopters who wait and study the problems encountered by early adopters.

Sustainability advocates sometimes mistake aesthetic preferences for sustainable choices, in part because they discount the labor costs to individuals and institutions, burying those costs behind the haze of “lifestyle”, as if philosophical commitment obliterates time.

I’ve pointed out that with my yard and gardening, I’m perfectly content to do many things that environmental advocates argue for. I allow parts of my yard to overgrow, I don’t use chemicals on anything, I don’t put out yard trash but instead collect it myself in a large pile in one corner of my yard, I rarely water except with new plantings, I have compost. Most of these things I do because they’re easier and cheaper to do given the yard that I own. I have the space to pile up branches, sticks and leaves, and it’s easier to pile them than it is to bag them up for collection. I have a chainsaw to cut up large branches and an axe and wedge to break them into firewood. When I can spare the money, I’ll buy a chipper/shredder and spare myself the need to buy mulch in the spring. Composting already saves me having to buy fertilizer.

When the stereotypically sustainable practice is a markedly bigger hassle, I don’t do it. I bought a good-quality push mower because it seemed cheaper and I figured it would be good for me physically to use it. Problem: it doesn’t cut worth a damn unless I constantly sharpen the blades, which is either an expense or a major ordeal in labor time. I’d think about ripping up at least one major portion of my front lawn, only it’s almost entirely shaded by a big ash tree, so a vegetable garden or simple wildflower meadow is out. A huge shade planting would actually be expensive to put in and difficult to maintain. Allowing the entire front to go wild doesn’t just bug me aesthetically, it would actually incur costs (lower home value, more deer and dog ticks, eventual potential root damage to surrounding infrastructure, the social labor of dealing with neighbors who may object to the look of a completely wild front yard, and so on). In some climates and environments, lawn, once established, is cheaper and easier: the preference for it isn’t all some unsustainable mania or mindless conformity.

————–

I guess I’m most struck, however, at how some arguments for sustainability skip over huge areas of institutional and personal practice where there are ready alternatives to business-as-usual which have big cost benefits as well.

I’ll take academia as an example. Leaving aside the use of sustainability as a Trojan horse that permits the introduction of a much wider array of political issues and commitments, a lot of academic advocates for sustainability focus on familiar targets, such as promoting recycling, increasing the use of organic and locally-grown food on campus, and cutting water and energy use in buildings. Some of these commitments make good sense from every angle. Some of them mix aesthetic preferences with more ambiguous claims about short-term sustainability. A preference for organic and locally-grown food, for example, seems to me to potentially muddle together a very complex terrain of costs, environmental models, ambiguous terminologies and personal preferences. I’d personally prefer it if our campus food services made use of high-quality organic and locally-grown, seasonally-available food. But that’s because I’m willing to pay a higher price if such food is made available and is prepared competently, because I’m willing to alter my eating habits to some extent to follow seasonal availability, and because I have transport and a nearby residence and can thus opt out when I’m inclined to do so. Students with more limited personal budgets or parents paying room and board might feel differently if the costs of this kind of food service were passed on to them. I’m not sure that it’s clear that locally grown food is always more environmentally sustainable at either large or small scales. With some food, I don’t care much if it is, because locally grown is better. (Summer tomatoes, for example.) With some food, it depends very much on who is producing it. If I have to choose between poorly made local cheese and reliable if undistinguished mass produced cheese, I’ll choose the latter. For me (and I think for some of those most committed to sustainability) that choice is not primarily determined by price–but for many, it is.

A different example. Right now, many academics (at Swarthmore and elsewhere) travel to the meetings of professional associations, to workshops, to conferences. I don’t know if the average annual travel of tenure-track professors has been estimated either within a single discipline or across the whole of academia. Taking myself for an example, I’d say that my average travel over the last decade has been to three professional conferences or meetings a year and two individual speaking engagements a year. This year is going to be above average: if I take July as the beginning of the coming academic year for this purpose, I’m presently committed to attend five meetings and I’m contemplating going to another one. If this coming year is typical, there will probably be a few unplanned additions to that list. Add to that travel to archives, fieldsites or other travel connected to the creation of data or research material, some of which should be a big part of my coming year.

Of that travel, what is absolutely necessary? First, at least some travel to research sites. Many archives are not and will not be digitized for the foreseeable future. Ethnographic fieldwork can’t be done remotely unless it’s focused on online environments in the first place. Experimental or observational science may have to be done at large, unique facilities and may require the investigator’s personal presence.

Second, very small-scale workshops seem to me to have some of the same character as a small class does, requiring face-to-face conversation and close reading. So do some formats for professional discussion and conversation. A good roundtable is spontaneous, performative, impromptu, and so I think requires the presence of a live audience and a live panel.

Third, interviews for job candidates require personal presence both for the interviewees and the interviewers. Phone interviews rarely do anybody credit.

How about a big or medium-size meeting like the Modern Language Association, American Historical Association, or African Studies Association? When academics go to these meetings, we’re there for one of four major reasons: 1) we’re on a panel; 2) we’re being interviewed or interviewing; 3) we like to see our friends and we like to travel, and the meeting combines both; 4) we’re an officer of the association.

As I’ve said, I think the second reason is a good one. As far as the fourth goes, if I’m insane enough to be an officer of one of these groups, I have to go, though I’ve likely seen the other officers at other times during the year.

On the other hand, much as I like to see my professional friends and travel, it seems to me that I can accomplish this purpose without a large association meeting. A small workshop, a speaking invitation or just personal travel are all more satisfying ways to see friends. I got a lot more out of seeing friends at GLS, which is relatively small, than I probably will be able to get out of a huge meeting like AHA.

What I’m really thinking about are the panels. I’ve written before about my frustrations with the standard model for panels in the humanities and some of the social sciences: three or four papers to a 90-minute panel, none of the papers precirculated, all the panelists trying to read some excerpt (artfully chosen or hastily selected) in 15 minutes, a discussant who then takes up 15-30 minutes, and a few questions from the audience. The format is ostensibly intended for spectators, but it’s about as lousy a spectatorial experience as I can imagine. It’s certainly very far away from the norms of reading, discussion and publication that scholars are supposed to hold dear: we don’t get to see the details of what the speaker is claiming, and the speaker doesn’t get to hear detailed responses. If the point is to learn about new ideas and arguments in our field, that should involve a much more comprehensive format like poster sessions, but a lot of major professional associations in the humanities use poster sessions (if they have them at all) as an afterthought.

Most scholars I know agree that the conventional formats are of limited use. Some people tell me that the major reason to have them is to maximize the number of attendees who are giving some kind of formal presentation, as many institutions will only grant travel allowances to presenters. Others suggest that panels are an important part of the professionalization and training of academics.

In any event, both conventional panels and poster sessions would be far more useful if they were in whole or in substantial part conducted as online discussions. Not just “slap it up and see what happens”, but as carefully organized, focused sessions. Solicit papers just as you would now, have a selection committee, put the papers up as .pdfs, and solicit three (not one) commenters who write a three to four paragraph critique. Then create a discussion thread, and tag or categorize the presentation so that readers can find it. (Some disciplines, especially in the sciences, are already operating this way with much of their publication and conversation.) This strikes me as a better model in particular for professionalization, almost like a public, conversational form of peer review.
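Put in programmer’s terms, the structure of such a session isn’t complicated. Here’s a minimal sketch in Python, with every class and field name invented for illustration; as far as I know, no association actually runs anything like this, so treat it as a thought experiment rather than a description of an existing system.

```python
"""Sketch of the online-session structure described above."""
from dataclasses import dataclass, field
from typing import List

@dataclass
class Critique:
    commenter: str
    text: str                          # the three-to-four-paragraph response

@dataclass
class Comment:
    author: str
    text: str

@dataclass
class OnlineSession:
    title: str
    author: str
    pdf_url: str                       # the precirculated paper
    tags: List[str] = field(default_factory=list)            # for browsing and search
    critiques: List[Critique] = field(default_factory=list)  # the three solicited responses
    thread: List[Comment] = field(default_factory=list)      # open discussion

    def ready_to_open(self) -> bool:
        """Open the general discussion thread once all three solicited
        critiques are in, mirroring the workflow sketched above."""
        return len(self.critiques) >= 3
```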

What of the people who need to give a presentation to be reimbursed? That’s where the sustainability part comes in: let’s just kill off that part of professional associations. Reduce the annual meetings to job interviews and nothing more. Look for ways to regionalize those functions. Move the intellectual life of the association into online interactions entirely and redirect the budgets of the associations to reducing the cost of travel for job candidates while we’re at it. Cut the membership fees, which are substantial in relationship to value returned.

Like my compost pile, this practice would have the simple virtue of saving money. Well, for everyone except the hotels, convention centers, restaurants and cities that economically depend upon this kind of event, much as my compost pile deprives some fertilizer manufacturer of a sale of six to ten bags or so to me every year. Also like my compost pile, this would be by most conventional measures a vastly more environmentally sustainable way to carry out one aspect of our professional business. Less jet fuel consumed, less ancillary travel, and so on.

Some of the campus groups committed to sustainability do argue for an increase in telecommuting, but this is often regarded as a more exotic, rarified proposal. Here’s a context where at least some of the business that academics carry out through travel could be conducted in a more sustainable manner, where doing so would (in my humble opinion) be an enhancement of that business rather than a degradation of it, and where the frequent costs of travel aren’t part of our necessary, daily business.

I’d almost think this makes this kind of advocacy an easier sell than major alterations to the underlying technological infrastructure of many universities, some of which may involve unforeseen liabilities or complications. But at least some of that advocacy comes out of a clean-room context where cost is thought to be someone else’s problem, to be passed on to future students, faculty, administrators and trustees. Talk of sustainability sometimes seems to me to be both too audacious and not nearly audacious enough.

Posted in Academia | 5 Comments

The Manufacture of Culture

You know, we worry too much about the Punch-and-Judy show of political blogging, not to mention the quiet, relatively cobwebbed corner of the Internet occupied by self-declared academic blogs.

If you want a look at what blogs are really for, there’s a fantastically engaging dispute unfolding over a clash between a customer and a barista at a Washington DC coffee shop. The basic issue: the customer ordered espresso over ice. The barista said that espresso over ice was against store policy. The customer, irritated, ordered espresso and some ice. This was reluctantly given to him. When he prepared to pour his espresso over the ice, the barista said that what he was about to do was “Not Okay”. Customer is angry. Customer returns later and orders an iced Americano, which is ok by store policy. (Really not much different than an iced espresso, in my humble opinion.) Customer pays with a dollar bill upon which he has written a message for the store.

Here’s the original post from the aggrieved customer.

It turns out someone else witnessed the exchange.

The store owner replies (and then doesn’t allow comments, unlike the other two posts).

————-

The comments, though, are the real payoff of the whole exchange. You get the inevitable smattering of metacomments from people who think the debate itself is irrelevant, sure. (I find this kind of comment incredibly annoying, by the way: the person who shows up to say, ‘How silly that you all have the energy to post about such things, or beat dead horses, etcetera.’ How silly does that make the metacommenter, then? He’s got the energy to post about people posting.) But mostly what you get are people making strong statements about the following subjects:

1. How espresso should be consumed.
2. How coffee in general should be consumed.
3. Whether businesses should have policies that dictate how customers consume what they buy.
4. How a service employee should behave.
5. How a customer should behave.
6. What the “real” motivation for the policy might be (to prevent something called a ‘ghetto latte’, where a customer orders espresso over ice and then adds 6 ounces of half-and-half himself for free).
7. The particular history of this particular business, including their problems with DC taxes.
8. Witnesses offering their reading of the way the two individuals in conflict actually acted (I think we’re up to three self-proclaimed witnesses, though the barista himself hasn’t said anything yet).
9. Whether it’s ever worth getting pissed off enough to write confrontationally on a dollar bill.
10. Whether the owner of a store should reply to a clearly non-serious threat of arson with a slightly less non-serious threat to punch a former customer in the dick.

Once you get into the thread, I think you’re going to end up with an opinion yourself. (For the record: I think it’s right that it’s not the best way to drink espresso, though I don’t like iced coffee of any kind; it’s none of the barista’s or the store’s business what someone does once they’ve ordered something, and it’s stupid to have a prescriptive policy of the kind that the store has in the first place; the barista himself handled the situation badly; it was over the top to go back and hand in the defaced dollar: that’s what blogs are for.)

——

This is how culture gets made, transformed, and is made meaningful. An incident or moment breaks into the assumptions, ideas and orientations that govern everyday life and reveals that there are wide disparities between different people about shared experiences. The accidental character of the particular incident shapes the debate that follows. If Jeff Simmermon hadn’t reacted visibly to the barista at the store or had passively accepted the store policy while quietly fuming about it, the blog entry wouldn’t have drawn attention from BoingBoing. If the barista had initially suggested an iced Americano or shown good humor about the store’s policies, Simmermon probably wouldn’t have been irritated. Simmermon’s quotation of “Five Easy Pieces” gives readers a cultural anchor, and gives further nuance to the different reactions coming from readers.

Sometimes social scientists or humanists argue that stories and incidents serve as mirrors or as synecdoches, that they are a smaller, more concentrated way to view the whole of society. I think this story shows the problem with that perspective. Stories like the “Iced Espresso Incident” don’t reflect underlying social reality: they make it. People discover their own assumptions when reading about such an incident, discover that other people may have very different assumptions, and then modify, rethink, or strengthen the mental software that guides them through everyday life. The particular contours of the story that pulled back the casing of everyday life to reveal the wiring and infrastructure underneath lends unpredictable shape to those reactions. Change the particulars of the story, and you change the way that culture transforms in its wake.

Posted in Academia, Consumerism, Advertising, Commodities, Food | 10 Comments

Mad Science

I heard a compelling segment on This American Life over the weekend. It was a profile of a self-taught electrician and engineer named Bob Berenz who had become convinced that Einstein was wrong, specifically in his formulation of “E = mc squared”, and had taken a personal sabbatical to develop his own comprehensive theory of physics based on his findings. The punchline was skillfully deferred until most of the way through the segment, when listeners found out that what the man believed instead was that the equation should be E = mc, no squared, because the squared term struck him as needlessly complicated. This view turned out to drive his entire antagonistic vision of modern physics: that it was taking what should be simple and elegant and making it mathematically and theoretically complex and baroque, largely as a corrupt way to extend the stranglehold of professional academic scientists and experts over a wider public.
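For what it’s worth, the trouble with dropping the square shows up before you get to any actual physics, just from the units; this is my own gloss, not anything said in the segment:

```latex
% Energy has units of kg·m^2/s^2, which is exactly what m c^2 delivers;
% m c by itself has the units of momentum, not of energy.
\[
  [E] = \mathrm{kg}\,\frac{\mathrm{m}^2}{\mathrm{s}^2}, \qquad
  [mc^2] = \mathrm{kg}\left(\frac{\mathrm{m}}{\mathrm{s}}\right)^{2}
         = \mathrm{kg}\,\frac{\mathrm{m}^2}{\mathrm{s}^2}, \qquad
  [mc] = \mathrm{kg}\,\frac{\mathrm{m}}{\mathrm{s}} = [p] \neq [E]
\]
```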

I found myself moved and disturbed by this man’s perspective, because I sometimes express sympathy towards similar criticisms of academics and experts. In a way, what the segment reminded me is that overindulgence of that perspective carries its own dangers.

I start from the premise that at least some of the hostility between academia and some of its publics is driven by social cleavages or experiences that aren’t directly captured in the substance of any particular instance of hostile or critical expression. Beyond that, I think sometimes anger at academics is a product of a particular issue or topic where academics have extended an unwarranted degree of magisterial control or authority over policy formation, or have intolerantly defended expert opinion that turns out to be at the least debatable, at worst simply wrong.

A lot of the worst examples of malpractice by experts have lodged deeply in global popular consciousness, whether we’re talking the Tuskegee syphilis experiment, corrosively faddish applications of academic views by development organizations in impoverished societies, or crackpottery like the “posture photos”. Stories of scientific and scholarly progress sometimes feed into the same consciousness. When we recount how plate tectonics, the Big Bang, genetics or quantum mechanics were once regarded as eccentric, impossible or wrong by mainstream consensus, we invite people like Bob Berenz to imagine themselves in the same way, martyred by an arrogant establishment.

The key moment for me was that Berenz’ friend, who produced the segment, brings him into contact with a physics professor, who explains that among other things, Berenz has confused momentum with kinetic energy. Berenz simply doesn’t accept that, and comes pretty close to sticking fingers in his ears and humming real loud. (There’s a really interesting moment where Berenz complains afterwards to his friend that the professor didn’t speak in a sufficiently educated manner: the language of class distinction and elitism gets turned on its head. That, too, seems to me to often happen to a professor who engages a critical public from a less-Olympian perspective: the critics then say, ‘This guy can’t be for real, because he talks like a regular guy’.)
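To spell out the distinction the professor was pointing to (these are the standard textbook definitions, my summary rather than a transcript of the exchange):

```latex
% Momentum is linear in velocity; kinetic energy is quadratic in it.
% Conflating the two is exactly the kind of error the dropped square invites.
\[
  p = m v, \qquad
  E_{\text{kinetic}} = \tfrac{1}{2} m v^{2} = \frac{p^{2}}{2m}
\]
```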

This is the point where sympathy has to end. There isn’t anything past that moment that the professor can do to convince Berenz that he’s wrong, because Berenz reveals that he’s operating from beyond the frame of persuasion. There are epistemological arguments in contemporary academia which hold that we’re all beyond that frame, that persuasion and reason are just cloaks for power, but I don’t buy that. Nevertheless, when the participants in a dialogue are this far apart in their purposes and outlook, there’s not much good that can come of it.

In this case, Berenz just doesn’t want to hear the truth: the physical, material world is complicated, or at the least, the methods we need to understand what it really is are complicated. (I think it’s both.) I don’t think that the humanities and most of the social sciences are complicated in the same way: our descriptive language and tools don’t have to be fundamentally difficult to use and understand. But the reality we’re interested in may be even more complicated and hard to comprehend than the physical world: understanding the human experience, whether one individual’s heart and soul or the collective reality of an entire human society, is never going to be simple.

There are academics who would like it to be relatively simple. Virginia Postrel has a column in a recent Atlantic issue about “inconspicuous consumption” that profiles the work of some economists who would like to reduce consumer behavior to “rational” forms of emulation. Postrel has a paragraph in which she concedes that maybe particular consumer behavior is just the complex result of cultural and social history, but “economists hate unfalsifiable tautologies about differing tastes”. Not unfalsifiable, guys, just takes a lot of detailed work on a particular case to evaluate any given argument. Sorry, but that’s the way that the lived human world really is a lot of the time. Not to say that the economists’ general suggestion is without merit, it’s just that it’s a low-bandwidth description of a high-bandwidth reality.

At various points, non-academics would also like the human world to be simple, either because they’re grinding an ideological axe or because they’re coming from some settled worldview that wants to see it that way. In a conversation with someone driven by that belief, virtually any intellectually serious historian or literary scholar or philosopher or anthropologist is going to end up just as frustrated as the physicist who tried to talk with Berenz.

Everybody comes away unhappy from that confrontation, so it really raises the question of whether it’s worth it to even try. I don’t think scholars have to try to talk with each and every person who seems to fit that description, but I think that all scholars have the responsibility to try to do so every once in a while. Sometimes the Berenzes of the world have a completely valid point even within narrowly defined scholarly parameters. Sometimes they’re telling us something important about our limitations, weaknesses and failures. And sometimes they confirm that the world really is complicated and that pursuing the how and why of the world really does take training, discipline, and commitment to a cumulative effort that grows and deepens across time and space.

Posted in Academia, Production of History | 20 Comments

Why Some Learning Games Continue to Suck (Games + Learning + Society Liveblogging)

This isn’t a reaction to the panels I’ve attended at the meeting: they had some educational or learning games that I thought were terrific, smart, well-designed, and educationally effective. It’s more a reaction to some of the games I saw in the poster session or heard about in conversations.

The obvious thing to claim would be that some learning games suck because the people making them are untalented or underresourced, or that there is some particular design failure in the game that can be avoided or fixed. That’s not it at all. Some of the bad learning games come from perfectly competent, well-meaning people who have sufficient resources to build what they’ve planned to build and who have made perfectly decent design decisions about the game.

The problem is that some learning games get made because someone charged with the mission of education (teaching students, educating publics, training professionals) needs to accomplish one of two things (or both of them). They have to justify their own professional existence and institutional responsibilities when they are tasked with endlessly improving educational efficacy, with demonstrating endless progress. Or they’ve been assigned responsibility to address some small persistent reservoir of incapacity: some subject that a small number of students don’t learn, or some issue that some small fraction of the public doesn’t understand the way that advocates believe they should.

Professionals who are in these circumstances cannot say, “Well, we teach what we can with the methods that we have, and whatever we can’t teach or isn’t being learned, too bad”. Many of them are responsible for producing a sense of eternal progression. When there’s no more traction on one pedagogical method, it is possible to produce that sense of progression through shifting to a new method, such as “educational games”.

So, for example, let’s say you’re thinking about public health and you want to address the problem of people who take too many aspirin to manage non-recurrent pain. Honestly, you’re not going to improve on clear written warnings on the aspirin packaging itself and regular advice from doctors, reinforced by pharmacists. It wouldn’t hurt to formally study people who have had medical consequences from one-time overuse of aspirin, so that you’d have some tentative sense of just how many people have done so.

But designing a game, whether simple or complex, open-ended or narrowly didactic, which is narrowly tailored to substitute for or complement written and verbal warnings about aspirin, is if nothing else a serious misuse of resources. There is nothing a game can do that simpler, terser, cheaper methods of communication cannot do, no one that game will reach who was otherwise unreached or unpersuaded. All that building a game in this case will do is give the designers a sense that they have tried to make further progress towards educational perfection. The player in “Don’t Take More Than Two” just has to sigh and endure a time-consuming exercise in the screamingly obvious.

The same thing goes for K-12 or college classrooms. There are many kinds of information and learning that a game can only deliver in a hopelessly Rube Goldberg manner compared to more conventional pedagogies. The educational games that work (and I saw plenty of those) are the ones that use games to help students understand processes, procedures, ways of being and thinking, in an open-ended way. If you’ve got one ideal or perfect “state of mind” that you didactically insist your students or the public should hold about a particular issue or question, a game is a truly crappy way to get them there. (I’d suggest that there’s a more fundamental problem with that basic objective, but that’s another issue.)

I know I’m preaching to the choir in terms of the scholars I know best and talk to most at a meeting like GLS, but even as an outsider to this particular subfield, I’m frustrated by the stubborn persistence of didactic and unnecessary games that mostly satisfy the professional needs of their creators rather than the needs of the unfortunates who will at some point be compelled to play or experience those games.

Posted in Games and Gaming, Information Technology and Information Literacy | 3 Comments