An Oath For Experts: First Principle

I’ve been thinking for a while about trying to develop and push out some simple statements of principle that anyone claiming to be an expert or authority should follow, in the mode of the Hippocratic Oath. The reason I think we need something of the sort is that the value of highly trained individual experts is increasingly questioned on one hand because there are now alternatives, most notably crowdsourcing, and on the other because “experts” increasingly sell their services to the highest bidder, with little sense of professional dedication or ethics. I’ll be testing a few of my ideas along these lines over the course of the spring.

Let me put out a first suggestion:

An expert giving advice about a course of action must always be able to cogently and fairly discuss the most prominent critiques of that course of action and readily provide citations or pointers to such criticisms.

The goal here is simple: to establish a professional standard. You should not be able to claim to be an authority about a particular issue or approach if you are not conversant with the major objections to your recommended course of action. You should not force an audience to hunt down a critical assessment afterwards, or wait for an adversarial voice to forcibly intrude on the discussion. This responsibility goes beyond simply providing an assessment of the positive and negative attributes of an argument, interpretation or recommendation: the expert should be able to name the work of critics and generously summarize their arguments or analysis.

For one example: if you’re Jared Diamond being challenged in public about whether “tribal people” are all markedly prone to warfare compared to settled societies, you should be able to: a) give a fair summary of the long-running arguments between some cultural anthropologists and some sociobiologists/evolutionary anthropologists about societies like the Yanomami and b) talk dispassionately about how different scholars approach the characterization of hunter-gatherers in relationship to what we actually know about them past and present, and what the limitations of different approaches might be. That should be the first priority before defending a particular thesis, claim or recommendation.

For this reason, experts should always avoid being placed in situations where they are required to show strongly adversarial preference for a particular interpretation to the extent that they cannot even review or describe other schools of thought, just as judges must avoid conflict of interest.

Update: worth reading on into the comments here, as I try to set out more of what I mean in response to an objection from Brad DeLong. “Highest bidder” is a crude and exaggerated way to put it–what I’m concerned about is more the consequence of fashioning one’s expert advice or analysis in order to sell one’s own brand name, to push an almost-trademarked interpretation, rather than providing a road map for understanding an issue or problem.

Second Update: Diamond was a bad example. It’s going to distract from a discussion of the principle. I’ll try to write more about the issue I see in Diamond (and Pinker) in a separate entry.

Posted in Oath for Experts | 26 Comments

Shell Game

Louis Menand’s talk at Swarthmore on Friday pushed me towards some additional thoughts on the liberal arts, higher education and the humanities.

My first response is to pick up on one of a number of statistics that Menand reviewed in the first part of his talk. We already know that small undergraduate-only liberal-arts colleges like Swarthmore, of all types and selectivity, educate a very small portion of the overall undergraduate population in the United States. But Menand pointed out in addition that all of the disciplines that fall under the typical heading of “arts & sciences” at colleges and universities, the disciplines most commonly identified as part of the ‘liberal arts’ (very much including disciplines like biology, economics, and computer science), grant a shrinking minority of all undergraduate degrees in the United States. The majority are degrees that are fully conceived, taught and taken as pre-professional training in a specific field or vocation, with business- and finance-related degrees making up the largest single share of those.

So you might legitimately wonder: why are we talking about the suitability of liberal arts subjects for the contemporary U.S. job market? If the growth in expressly, explicitly pre-professional and vocational degrees has almost exactly kept pace with the growth of structural underemployment in the United States, shouldn’t that be the issue? If there are more students than ever at every kind of institution pursuing degrees that promise specifically to prepare them for a career and they are either not getting jobs or not getting the jobs they were allegedly prepared to get, shouldn’t that be the problem?

There are complications, there always are. As Menand observed, what the elite private institutions do, whether in their curricula or in other institutional policies, often drives other institutions, and so they understandably are the focus of a particular kind of concern. Liberal arts graduates may be a smaller group but they also form the bulk of the meritocratic elite, who may justifiably be the target of social criticism–and some vocational graduates might protest that the social privileges of that elite lead to them getting positions which ought to go to people with specific appropriate training and qualifications. Liberal arts institutions in public systems may be the so-called ‘flagship’ campuses which take more than their share of the resources available (though I feel compelled to add that there are ‘flagship’ public institutions which in fact rely on public funds for a very small proportion of their budgets, like the University of Virginia).

Nevertheless, if it’s true that there’s a connection between underemployment and the skill and knowledge developed through investment in higher education–a proposition that is accepted in less noxiously politicized form even by the Obama Administration and most other mainstream policy experts–‘liberal arts’ majors really shouldn’t be the first thing we’re talking about unless somehow we can demonstrate a very direct correspondence between underemployed recent graduates and those majors in particular. What I suspect we should be talking about instead are students incurring high levels of debt to pursue highly vocational degrees aimed at professions with very few available jobs and/or very low relative salaries, especially from universities and colleges with poor graduation rates and other issues of underperformance. Provocations about “useless majors” are a distraction from that conversation.

Posted in Academia | 4 Comments

The Usefulness of Uselessness, Redux

Faculty who believe in the liberal arts approach and who think this means that there ought to be some kind of firewall between what students study and what they do in their careers or anything else in their lives after graduation have a growing number of antagonists to contend with, most recently several conservative governors who have announced that they will push their states’ public university systems to eliminate or de-emphasize majors and departments that don’t have direct vocational objectives.

I’m one of those faculty. I’m working now on a long essay about why I think a liberal arts approach is still the right thing for most of higher education, but here’s a shorter thought.

Unlike some of my colleagues, I have no qualms about saying that a well-designed liberal arts education offers the best outcomes even in terms of employability–but “well-designed” in this case means, in part, that the connections between what you study and what you do in life must be indirect, flexible, and unpredictable. At least some liberal arts faculty tend to believe so fastidiously in that proposition, rising against any talk of usefulness or skills or careers, that they entirely cede the field of struggle to the most malicious and manipulative critics.

And yet it is very rare to find a faculty member who believes so wholly in this understanding of the liberal arts that they reject any possible life outcomes for students which do not tie them tightly to academic or scholarly institutions. Virtually every faculty member I know delights in the wide range of careers that former students have undertaken, and treasures in particular any story of how some aspect of their experience as a student relates to their later work–even when a former student describes how work and life have called into question the relevance or accuracy of something that student studied in college.

I understand–and share–fears about designing an undergraduate program of study which specifically anticipates a particular vocation or career. There’s one thing that I think we could do for our students (former and current) far more than we are doing, however.

I don’t think it’s possible to convince the current wave of Republican governors that anthropology is not “a useless major” or that the problem of employability in the contemporary American economy is not a result of inadequate vocational training, largely because I don’t think the governors in question are genuinely trying to deal with underemployment or engage in a careful argument about what education should be. They’re returning to a favorite scapegoat, the useless liberal professors, and blaming them for structural unemployment in addition to all their other sins.

But there are important actors that we can convince–or maybe support is the right word. Many employers are already convinced that a graduate who can write, speak and think well, who has learned to ask open-ended questions, who can find the tools they need to deal with problems both known and unknown, who knows how to know, is worth far more to them than the graduate who has memorized some rote procedures to perform on preset challenges. Now one problem might be that liberal arts institutions aren’t actually producing those graduates, and another might be that there aren’t enough students prepared to become those graduates. Yet another might be that there is some better way to couple the spirit of the liberal arts to practical or problem-based learning. Those are different issues.

But what we could do is give those employers more reasons to believe that they’re generally right, to tell more specific, concrete or illustrative stories about how almost anything that a liberal arts student studies can have a payoff–sometimes in what that person does directly at work, sometimes in how they approach life and its catalyzing relationship to work. Not as a promise that a particular major has a particular utility, but yes, as a series of assurances of the generativity of liberal arts for the economy, for the society, for the world. Those stories have to be more than vague hand-waving or enigmatic koans in order to give sympathizers something with which to fight back against the push to reduce higher education to a meanly-imagined vocational core.

Even specific vocational training needs something to suggest the unexplored possibilities, the unexamined norms, the reasons why and wherefore, all the more so in a moment of technological and economic disruption where no career or life can be taken for granted or seen as secure.

To tell the many stories of the diverse consequences of liberal education to the many people eager or ready to hear them, professors and administrators not only have to be unashamed and unafraid of those stories but also be out there in the world more among our former students–not just the ones we’ve taught personally but the ones that all our colleagues have taught. Being out there means, sometimes, that we’ll hear about the students we confused or disappointed, about the unnecessary or unhelpful limits that our curricula imposed upon them, about the ways in which we haven’t always lived up to our own belief in liberal education. We need to hear that, too, alongside the stories which more closely fit our belief that it all turns out for the best.

We need to hear it all so that we can speak those stories back–and help others to tell them as well. Staying above the fray when someone is sawing down your perch is a bad idea.

Posted in Academia, Swarthmore | 10 Comments

Moore’s Law (Munitions Edition)

Let’s say twenty years ago I’d written a science fiction novel about how a futuristic nation has a massive force of flying robot bombs that are programmed with some target parameters and just fly around 24/7 on patrol looking for anything that fits their specifications. Catchy premise, classic bit of robot-overlord dystopianism, one of those things like flying cars that seems amusingly improbable in retrospect…

Oh, dear.

As with everything else that has come to pass which actually matches the science-fictional imagination, the reality seems so banal and inevitable that we scarcely pause in our everyday lives to consider its implications. The imaginary electronic clipboards and pads in various incarnations of Star Trek were always bristling with fetishistic futurosity, always signalling that a far future had arrived. The iPad I use every day has quickly become about as exotic as a toaster or a ballpoint pen.

That doesn’t stop us from having furious debates about the generality of the changes that actually-existing future technology brings. The overall idea and reality of drone warfare is getting some attention, just as the sweeping consequences of digital technology have. But the debate over drones has so far been either about the abstractions of moral philosophy (is it okay to kill a combatant who has no chance to kill you back?) or about a particularized kind of ‘numbers game’ (do drones cause more civilian casualties than we’re being told? more civilian casualties than other kinds of bombing?). A few folks are also beginning to think more carefully about what might happen if there is further automation of drone strikes.

All of those conversations matter. But I’m also struck by how much this nascent public conversation doesn’t include the possibility of proliferation and retaliation. In many ways, drones are being treated as the Maxim gun of 21st Century hegemony: something the hegemon has that its subjects have not, and that is being assumed to be a stable part of the overall picture.

Among the many explanations for Europe’s sudden assertion of imperial control over most of Africa, the Middle East, and much of Asia in the second half of the 19th Century, the importance of a brief moment of stark asymmetry in the relative ability of polities and elites to mobilize military power has sometimes been pushed aside or downgraded as a self-sufficient explanation, even in ‘technologically determinist’ interpretations. In some measure, that might be because European colonial propaganda, when it addressed military advantages, tended to push that advantage back in time all the way to the 16th Century and treat it as a single manifestation of some overall Western superiority in technology and science. Either that or European colonizers engaged in ridiculous self-puffery about the cultural and organizational superiority of their militaries as opposed to the relative disparity in their armaments.

The asymmetry, if it was an important factor, was incredibly brief. At the beginning of the 19th Century, European-controlled militaries had very few systematic advantages in their ability to enforce administrative power and overwhelm local military resistance in Africa, South Asia or the Middle East. They could win single battles or conflicts but not persistently maintain a presence or capacity that could meet any attempt at military resistance. That wasn’t just about their armaments, of course, but also about the financial capacity and political organization of their sponsoring nation-states. For a brief time at the end of the 19th Century, however, industrially-supplied European mass armies with guns and munitions could generally overwhelm non-Western military power (though the latter were often armed with guns as well: William Storey’s new history of gun trading and ownership in southern Africa makes clear how complicated the local picture often was).

The thing is, by the end of World War II, that era was comprehensively over, which I think means that asymmetry in force capacity is as much a contributing explanation of decolonization as it is of the spread of 19th Century imperialism. By the 1960s, insurgencies all around the globe were capable of fighting occupying Western armies to a standstill, if not capable of winning in a straight-up battlefield conflict between nation-states. And this has become more and more the case over time. Whatever doctrines or surges or equipment the US or its allies may bring to bear to support an imperial occupation or administration, they can’t succeed in doing more than what Russia did in Chechnya: turning a territory into a wasteland and keeping it under a harsh authoritarian regime. And even the most determined 21st Century hegemon can’t afford to project that kind of military power in more than a few small territories proximate to its national borders, nor can it count on that power to pacify such an opponent for any substantial length of time.

Drones clearly seem to some American military planners like the answer to their prayers in such a world, with a lot of other collateral budgetary, technological and political benefits. No pilots exposed to enemy fire (and no human limitations to the speed and mobility of a flying weapons platform). Cheaper by far than modern warplanes. Much easier to keep their operations secret, much more deniability about consequences. Much easier to extend operations into the airspace of unfriendly or uncomfortable sovereignties. Nearly impossible to defend against with existing anti-aircraft technology, while imposing serious limitations on the freedom of movement of enemy combatants and leaders. Explicit legal sanction from all three branches of the US government for the unilateral use of drones to kill specific targeted individuals, including American citizens, coupled with grudging acquiescence to this practice by most other nations.

And as with the Maxim gun, they have none.

But that is not going to last. So before we get into the moral philosophy of the general idea, or the morality of their current use, just consider for a moment what is going to happen in a world where:

a) Drone warfare is an exceptionally active domain of rapid technological progress due to continuous investment by the United States and other major national and transnational actors.
b) Drone warfare is normalized legally and geopolitically as a domain of unrestricted unilateral action by hegemonic or dominant powers (much as the unrestricted use of military force against non-Western societies was briefly something that went almost entirely unquestioned in Europe and the US from about 1870 to 1905).
c) The use of drones by the US and other major actors proliferates on a global scale rather than stays confined to a few unusual theaters.

With a), investment in technological progress, consider also:

a1) that drones with lethal capacity will almost certainly get smaller, cheaper, and harder to detect both as they seek targets and at their points of origination and operation
a2) that drones will almost certainly be given more sophisticated systems for automatic navigation, target selection and decision-making over time
a3) that integrating the cheap, improvised lethality of explosives used against international forces in Iraq and Afghanistan into drones will become readily possible in the future

Think about that for a bit. Now imagine a world where non-state actors of all kinds, at all scales, can with relative ease unleash many automated or semi-automated drones armed with enough explosives to kill a few people or damage local infrastructure, in a way that may be as hard to trace back to the individuals responsible as it is to find someone who made a computer virus or malware today.

The moment I lay that scenario out, many people doubtless think, “So that’s going to happen, it’s inevitable”. But I don’t think it is. There are cases in modern world history where national militaries and their civilian administrations have thought twice about the wisdom of proliferating the use of weaponry or technology that gave them enormous short-term advantages after the long-term implications of their generalized use became clear. Chemical and biological weaponry is perhaps the best example, since nuclear weapons may be a special case. National militaries still have this capacity, and it has occasionally been used by repressive regimes against civilian opponents, but so much effort has been poured into making such use moral anathema and a cause for serious coordinated global action that there are very powerful inhibitions against it.

The appallingly casual and short-sighted use of drones right now by the US military bothers me for all sorts of reasons. But first and foremost, it bothers me because no one in authority is giving any public consideration to the consequences of legitimating their unilateral, undisclosed and unreviewed usage, or the consequences of becoming so reliant upon drone strikes that we vastly accelerate their development. If there is any hope of avoiding a world where small remotely (or automatically) guided explosive drones routinely pose a danger at almost any location or moment, that hope is in this moment, this time, and no other. By the time the AK-47 went into mass production in 1949, it was far too late to ask whether it was a good thing or not for almost any organized group that wanted automatic rifles to have automatic rifles, even if it took some time for the weapon to disseminate at a global scale.

Posted in Africa, Generalist's Work, Politics | 21 Comments

The State of the Art III: Facebook (and 500px and Flickr) as a Window Into Social Media

III. The Business Model as Belief and Reality

Why is Facebook such a repeatedly bad actor in its relationship to its users, constantly testing and probing for ways to quietly or secretly breach the privacy constraints that most of its users expect and demand, for stratagems to invade their carefully maintained social networks? Because it has to. That’s Facebook’s version of the Red Queen’s race, its bargain with investment capital. Facebook will keep coming back and back again with various schemes and interface trickery because if it stops, it will be the LiveJournal or BBS of 2020, a trivia answer and nostalgic memory.

That is not the inevitable fate of all social media. It is a distinctive consequence of the intersection of massive slops of surplus investment capital looking desperately for somewhere to come to rest; the character of Facebook’s niche in the ecology of social media; and the path-dependent evolution of Facebook’s interface.

Analysts and observers who are content with cliches characterize Facebook as sitting on a treasure trove of potentially valuable data about its users, which is true enough. The cliched view is that what’s valuable about that data is names associated with locations associated with jobs associated with social networks, in a very granular way. That’s not it. That data can be mined easily from a variety of sources and has been mined relentlessly for years, before social media was even an idea. If an advertiser or company or candidate wants to find “professors who live in the 19081 zip code who vote Democratic and shop at Trader Joe’s in Media” they can buy that information from many vendors. If that were all Facebook was holding, it wouldn’t have any distinctive wares, even imagined, to hawk. All it could do is offer them at a bargain rate–and in the global economy, you can’t undercut the real bargain sellers of information. Not that this would keep Facebook from pretending that it has something to sell, because it has a bunch of potentially angry investors ready to start burning effigies.

What Facebook is holding is a type of largely unique data that is the collaborative product of its users and its interface. But if I were a potential buyer of such data, I’d approach my purchase with a lot of caution even if Facebook managed to once and for all trick or force its users into surrendering it freely to anyone with the money to spend. If my goal is to sell something to Facebook users, or to know something about what they’re likely to do in the future, in buying Facebook’s unique data, what I’m actually learning about is a cyborg, a virtual organism, that can only fully live and express inside of Facebook’s ecology. Facebook’s distinctive informational holding is actually two things: a more nuanced view of its users’ social networks than most other data sources can provide and a record of expressive agency.

On the first of these: the social mappings aren’t easily monetized in conventional terms. Who needs to buy knowledge about any individual’s (or many individuals’) social networks? Law enforcement and intelligence services, but the former can subpoena that information when it needs to and the latter can simply steal it or demand it with some other kind of legal order. Some academics would probably love to have that data but they don’t have deep pockets and they have all sorts of pesky ethical restrictions that would keep them from using it at the granular level that makes Facebook’s information distinctive. Marketers don’t necessarily need to know that much about social networks except when they’re selling a relatively long-tail niche product. That’s a very rare situation: how often are you going to be manufacturing a TARDIS USB hub or artisanal chipotle-flavored mustache wax and not know exactly who might buy such a thing and how to reach them?

Social networks of this granularity are only good for one thing if you’re an advertiser or a marketer: simulating word-of-mouth, hollowing out a person and settling into their skin like a possessing spirit. If that’s your game, your edge, the way you think you’re going to move more toothpaste or get one more week’s box office out of a mediocre film, then Facebook is indeed an irresistible prize.

The problem is that most of us have natively good heuristics for detecting when friends and acquaintances have been turned into meme-puppets, offline and online. Most of us have had that crawling sensation while talking to someone we thought we knew and suddenly we trip across a subject or an experience that rips open what we thought we knew and lets some reservoir, some pre-programmed narrative spill out of our acquaintance: some fearful catechism, some full-formed paranoid narrative, some dogma. Or sometimes something less momentous, just that slightly amusing moment where a cliche, slogan or advertising hook speaks itself from a real person’s mouth like a squeaky little fart, usually to the embarrassment of any modestly self-aware individual.

Facebook could, probably will, eventually wear down its users’ resistance and stop labeling or marking or noting when it is allowing a paying customer to take over their identities to sell something to their social networks. We’ll still know that’s happening to a friend up until the day that an AI can use all that data to so convincingly simulate our personal distinctiveness that there’s no difference between the AI’s performance and our own. At which point, so what? Because then my accurately simulated self will just be selling or persuading on behalf of that which I would, with all my heart, sell or persuade, in the voice I would normally use to persuade with.

As long as Facebook’s potential customers want to use my social networks to sell something I wouldn’t sell, in a way I wouldn’t sell it, most of the people who “know” me through Facebook will know that it’s not me doing that, and they will know it all the better in proportion to the amount of information I’ve provided to them through Facebook. (That is, the best protection against being puppeteered is, paradoxically, more exposure rather than less.)

So what of the other unique information Facebook holds, a record of everything I’ve “liked”? Surely that’s information worth having (and thus worth paying Facebook for) for anyone desperate to sell me products, persuade me to join a cause, or motivate me to make a donation? Not really (or not much), for two reasons. First, because existing sources of social and demographic data are generally good enough to target potential customers. If you know who the registered Democrats with graduate-level education making more than $75,000 a year are in Delaware County, Pennsylvania, you have a very good understanding of their likely buying habits and of the causes to which they are likely to donate. If you’re selling something that has a much more granular target market, it’s almost certainly more efficient and cheaper to use a more traditional media strategy or to rely on social networks to sell it for you simply because they’re interested in it. If you’re the budget-photography company YongNuo, you don’t need to spend money to mine my Facebook likes and posts to see I’m interested in moving into studio-based strobist photography: existing networks of hobbyists and professionals are sufficient to acquaint me with your products. If you’re trying to sell a Minecraft pendant necklace, your potential customers are going to do a fine job of notifying each other about your product.

More to the point, if I’m trying to sell you a product or a cause and I find you through data-mining your pattern of “likes” on Facebook, what is it that I’ve found? Maybe not the “you” that actually buys things, shows up to political rallies, writes checks to a non-profit. I’ve found the algorithmic cyborg that clicks “like” on Facebook, half-human and half-interface, formed out of the raw stuff of things that are clickable and linkable and feed-compliant. Which is sometimes a set that overlaps with what can be bought and done and given in the rest of our lives and sometimes is very palpably not. If my sales or success depended on the liking of Facebookborgs reliably translating into behavior elsewhere, I’d be on very thin ice. And I’d just as soon not pay much to get onto thin ice.

—–

So what about the rest of social media? Do they have something to sell, something worth investing in? Sometimes they do, and that brings me back to Flickr and 500px, where I started this series. What Flickr and 500px have to sell, first and foremost, is not information but services: data storage, a form of publication, and access to a community with a narrower focus than “all of life”. Both of them have at least a rough model for how to make a bit more revenue on the backend, through facilitating the sale of community members’ images to any external buyers (while giving the creator of the image a cut of the revenue). That is not a business model that is going to make them megabillions, but it’s very likely a sustainably profitable enterprise when all is said and done. It rests on a fragile foundation, as Flickr in particular has discovered. Your paying customers have to care enough about the social capital they have invested in the service to pay for it, the publishing interface has to be updated to look contemporary and run on contemporary hardware, and the archive has to be searchable enough that external buyers (whether it’s someone looking for a canvas to hang on their wall or a media organization looking for stock footage) can sift through it. All of which takes work for a labor force that has to be kept lean and cheap. One slip and your users, the real source of your value, are going to pack up their bags and their content for the next new thing. When that starts to happen, it can cascade quickly into collapse. If you do something to try and slow the flight of content and participation, by making content difficult to extract or erase, you might spark the equivalent of a bank panic.

There’s one other social media business model that demonstrably works, if in the spirit of 21st Century financial capitalism: it’s the digital version of a pump-and-dump. Set up a specialized social media service, lure in a venture firm or investor that’s looking to bet a bit of money on the next new thing, spend a bit of money on an interface design, put on a dog-and-pony show that gets the restless digerati in the door and providing some kind of content. If dumb luck is really with you, maybe you stumble into the next YouTube or Twitter, you somehow provide a space or tool in a niche that no one knew existed. If dumb luck is sort of with you, you’re Instagram and you get bought up by bigger fish who need to prove to their investors that they’re working towards a profitable business model and are using acquisitions as a distraction from tough questions. In that case, your business model is to be someone else’s business model, only you can’t say as much without shining a spotlight on a naked emperor’s private parts. In the worst case (probably) you burn someone’s money, earn some salary, get some experience, and have a story or two to tell to your next investor–or at least build a resume that gets you hired at a real company.

Social media that provide a service that is sufficiently valuable that people will pay for it, however little, have a business model that is not only sustainable but that doesn’t require them to constantly breach the trust of their users or work against what their communities want.

Social media that have no business model except trying to monetize the information that users provide to them will, sooner or later, be required to breach trust and demolish whatever is useful in their service, to come back again and again with new interfaces and terms of service that lie or conceal or divert. Even if they get away with it for a time, they’re selling a product that is far less valuable than the innumerable info-hucksters and digital prophets (or even protectors of privacy) think it is. In some ways, it might be best if Facebook just got it over with and gave itself permission to sell every last scrap of information it’s holding: what we might all discover is that there’s hardly anyone at all who will pay for that service at the scale and price that Facebook needs them to pay.

Posted in Cleaning Out the Augean Stables, Digital Humanities, Information Technology and Information Literacy, Intellectual Property | 3 Comments

The Slightly-More-Longue Duree

Historians and anthropologists studying sub-Saharan Africa are especially sensitive, for good reason, about linking current events on the continent to deep or precolonial histories. We’re all too intensely aware of the deep, sustained way that European colonialism represented African societies as existing outside of history, in an unchanging and static backwardness that could be described as a series of discrete ‘traditions’.

One good example of this reluctance can be seen in the way that scholars have approached the 1994 genocide in Rwanda. Debate has centered largely on whether the cause of the genocide is most strongly vested in the design of postcolonial African states, in 20th Century nationalism as a whole, in the influence of development institutions or the geopolitical rivalries of the Cold War, in the competitive interrelationships of postcolonial states in East and Central Africa, in postcolonial competition for regional resources, or in the colonial policies and attitudes of Belgium and France. (Or some combination thereof.) Precolonial political formations and cultural practices are always reviewed in scholarly writing as a necessary part of the background, but most scholarship rejects or at least de-emphasizes the importance of precolonial experience for explaining the genocide. In very large measure, this rejection is specifically aimed at representations in U.S. and European mass media that explained the genocide as the expression of primordial, ahistorical hatred, as a Hobbesian nightmare erupting out of a primitive or backward culture.

I think that’s completely the right way for the scholarship to go. But it does mean that it is more difficult to talk about the specific precolonial histories that most scholars would acknowledge have some importance. Not primordial hatreds or static tribalism, but the very specific history of state-building and social organization in the area in the 18th and 19th Century. That history presents all sorts of complexities in the language of universalizing social science (were the Tutsi a caste? a social class? economic specialists who emphasized pastoralism? or are “Tutsi” and “Hutu” an anachronistic imposition on subtle languages of difference that weren’t a big deal until Europeans made them a big deal?). But as a history, it’s not irrelevant to the recent past or the present, both as a history that is represented within recent conflicts and as something “real” in the historical memories and social structures of present-day societies.

Whenever we try to talk about that relevance we end up having to put so many caveats and snares in everything we say in order to avoid giving comfort to flatly wrong characterizations that it sometimes seems easier to just stick with colonial and postcolonial histories instead. A great example of how complex and neologistic this kind of historical account almost has to be is Paul Landau’s recent work Popular Politics in the History of South Africa, which dramatically rethinks ethnicity and state formation in southern Africa and very much argues for the reduced influence of colonial categories and institutions in modern outcomes. Landau has to be complicated because it’s a complicated history in simple empirical terms. But also to thoroughly sabotage any reader who might say, “Oh, I see, male violence is an ancient problem here” or “ah, I see, so colonialism didn’t invent tribalism after all, that’s just the way Africans are”.

Our collective avoidance is a problem both because it’s our job to talk about all of that history in some fashion and because it does cut some explanatory information out of the loop of present-day discussions.

Two of-the-moment examples. Journalists and diplomats speaking about eastern Congo are a broken record of frustration with the seemingly interminable recurrence of violent conflict in the region, most of it involving small groups of armed men who have various degrees of formal association with state-controlled militaries and administrations in the region. The reportage often involves trying to sift through the rumors and signs to find the “real” reason for conflict: is it a Rwandan or Ugandan bid for security through the destabilization of Congo? Corrupt Ugandan, Rwandan, Angolan or Congolese military leaders protecting their illicit profits from resource extraction? Covert meddling by major geopolitical powers or institutions? The inevitable backwash of Cold War flows of weaponry into local hands?

All of that matters. But I also think there’s some reason to think that there is a recurrent structure of political authority in the region, going back to at least the mid-19th Century, that local actors are drawing upon to organize their current activities, one that assembles armed young men in highly mobile, fluid groups that support and sustain their political authority and sociocultural coherence through banditry. The most famous example of such a group in eastern Congo would be the tributary empire of Hamad al-Murghabi (aka Tippu Tip), and further south the activities of Yao chiefdoms in the same period (the 19th Century) would be another example. But I think there were smaller local examples, and I suspect that some of these social groups remained significantly intact in some respects even during the colonial era. There were “insurgencies” in eastern Congo almost immediately after independence in 1960 that strongly resemble the groups that are often seen as having been created by the fall of Mobutu. So there’s some kind of repertoire of sociopolitical practices that has recurrent force in this area, that has a local coherence and intelligibility to it. That repertoire expresses itself very differently depending on all sorts of circumstances, and it has a complex relationship to many other sociopolitical and cultural histories in the same region.

Another example: Mali. I would never for a moment want to fall back on a pure restatement of ibn Khaldun’s famous interpretation of the history of northern Africa (and the world) and say, “See, this is just pastoralist nomads versus settled agriculturalists and city-dwellers”. But there is a much more specific history that has considerable depth and antiquity to it that involves relationships between Berber-speaking Tuareg pastoralists, Fulani pastoralists, and the settled agricultural societies of the Niger River; between North African states and Sahelian states; between cities and their rural hinterlands; between Islamic cultures and non-Islamic ones. That all matters not just as contemporary sociology but as deep and structurally recurrent history, as a series of patterns and concepts that can be consciously recited by contemporary combatants but that also can be the structural priors of how they mobilize for and imagine conflicts.

To talk about deeper histories is not to explain current conflicts as destiny, or to put aside a whole host of material, economic, geopolitical and cultural issues with much more immediate explanatory weight. But somehow I feel as if we have to give people struggling to understand what’s happening (and what to do about it) the permission to consider all of the history, as well as the guidance to help them to weigh its importance in context.

Posted in Africa | 3 Comments

Now

I don’t think there’s much more to say about Aaron Swartz. I didn’t know him personally but like many others I am a beneficiary of the work he did. And I have agreed for much of my life as an academic with the thinking that led him to his fateful act in a closet at MIT. Most centrally, that there are several ethical imperatives that should make everything that JSTOR (or any comparable bundling of scholarly publication) holds freely available to everyone: much of that work was underwritten directly or indirectly by public funds, the transformative impact of open-access on inequality is already well-documented, and it’s in keeping with the obligations and values that scholars allege to be central to their work.

Blame is coming down heavy on MIT and JSTOR, both of which were at pains to distance themselves from the legal persecution of Swartz even before news of his suicide broke, particularly JSTOR, which very early on asked that Swartz not be prosecuted. Blame is coming down even more heavily, as it should, on federal prosecutors who have been spewing a load of spurious garbage about the case for over a year. They had discretion and they abused it grievously in an era when vast webs of destructive and criminal activities have been discretionarily ignored if they stem from powerful men and powerful institutions. They chose to be Inspector Javert, chasing down Swartz over a loaf of bread.

But if we’re talking blame, then there’s a diffuse blame that ought to be conferred. In a way, it’s odd that MIT should have been the bagman for the ancien regime: its online presence and institutional thinking about digitization have otherwise been quite forward-looking in many respects. If MIT allowed itself to be used by federal prosecutors looking to put an intellectual property head on a pike, that is less an extraordinary gesture by MIT and more a reflection of the academic default.

I’ve been frustrated for years, like other scholars and faculty who take an interest in these issues, at the remarkable lassitude of academia as a whole towards publication, intellectual property and digitization. Faculty who tell me passionately about their commitment to social justice either are indifferent to these concerns or are sometimes supportive of the old order. They defend the ghastly proposition that universities (and governments) should continue to subsidize the production of scholarship that is then donated to for-profit publishers who then charge high prices to loan that work back to the institutions that subsidized its creation, and the corollary, demanded by those publishers, that the circulation of such work should be limited to those who pay those prices. Print was expensive, print was specialized, and back in the age of print, what choice did we have? We have a choice now. Everything, everything, about the production of scholarship can be supported by consortial funds within academia. The major added value is provided by scholars, again largely for free, in the work of peer review. We could put the publishers who refuse to be partners in an open world of inquiry out of business tomorrow, and the only cost to academics would be the loss of some names for journals. Every journal we have can just have another name and be essentially the same thing. Every intellectual, every academic, every reader, every curious mind that wants to read scholarly work could be reading it tomorrow if they had access to a basic Internet connection, wherever they are in the world. Which is what we say we want.

A colleague told me a decade ago that this shift wouldn’t be a positive development because there’s a digital divide, that not everyone has access to digital devices, especially in the developing world. I asked this colleague, whose work is focused on the U.S., if she knew anything about the costs and problems that print imposed on libraries and archives and universities around the world, and of course she didn’t. Digitized scholarship can’t be lost or stolen the way that print can be, it doesn’t have to be mailed, it doesn’t have to have physical storage, it can’t be eaten by termites, it can’t get mold on it. If it were freed from the grasp of the publishers who charge insane prices for it, it could be disseminated for comparatively small costs to any institution or reader who wants access. Collections can be uniformly large everywhere that there’s a connection: what I can read and research, a colleague in Nairobi or Beijing or Moscow or Sao Paulo can read and research, unless their government (or mine) interferes. That simply couldn’t be in the age of print. Collections can support hundreds or thousands of simultaneous readers rather than just the one who has something checked out. I love the materiality of books, too, but on these kinds of issues, there’s no comparison. And no justification.

The major thing that stands in the way of the potentiality of this change is the passivity of scholars themselves. Aaron Swartz’s action, and its consequences, had as much to do with that generalized indifference as it did with any specific institution or organization. Not all culture needs to be open, and not all intellectual property claims are spurious. But scholarship should be and could be different, and has a claim to difference deep in its alleged values. There should be nothing that stops us from achieving the simplest thing that Swartz was asking of us, right now, in memory of him.

Posted in Academia, Information Technology and Information Literacy, Intellectual Property | 6 Comments

Après la Perturbation

There are three ways to look at what’s happening right now to the economic and social viability of the professions and various kinds of cultural work. One is silly, one is depressing and one is ambiguous. Guess which I prefer?

The silly view is the magical thinking of digital utopians, that a new communicative technology has the intrinsic power to banish all questions of scarcity, to be the rising tide that floats every boat except those of the CEOs of big companies, to liberate human creativity and invention to its fullest potential, to automatically make a commons where we shall live out our happy future. In this perspective, early modern copyright was a purely negative invention of rent-seekers and 19th Century professionalization was nothing but monopolization by a small set of bourgeois aspirants.

The depressing, loosely Marxian view is that digitization is the kind of material transformation and social reorganization of production that enables the subjugation of independent or artisanal labor. That the production of profit-making expressive culture and of professional services was largely outside the control of industrial and monopoly capital until the late 20th Century because capital lacked the technological and social means to reorganize and control value in those domains up to that point–but that digital technologies, algorithmic processes, the production of a massive surplus of credentialed professionals by educational institutions and concerted attacks on the civic authority of professions and artists to set the terms under which they perform their work have succeeded in proletarianizing professionals and cultural workers. In this paradigm, advocates of digitization are just useful idiots for 21st Century capitalism, enabling private ownership and profit to fully penetrate professional institutions and exposing the everyday production of cultural works to “openness” while large companies like Google, Apple, Comcast or Disney become much less open.

The first view is simply wrong in its account of the history of intellectual property and professionalization, though there are episodes and dimensions of that history that fit this sketch. It’s also far too technologically deterministic. It’s the kind of view that deserves the critique offered by the second interpretation, because it’s worth at least paying attention to the dangers of uneven ‘openness’. If Google, Apple, Comcast and so on are allowed to sit behind impregnable castles except when they sally forth on intellectual property pillages or fling legal serfs at one another, then culture’s old burghers should do their best to keep control of a few free cities and hold out as best they can.

But I think both views are impoverished as descriptions of what’s happening and as guides to further action. Let’s just say for the moment that we buy into the language of “disruption”, which has the virtue of intermingling positive and negative meanings, in part depending on whether you’re the disruptor or the disruptee. But the word and some of its less negative synonyms (disjunction, interruption, intermission) also offer the possibility that we are being offered a chance to see many accustomed practices in new ways, to reimagine some of our work and aspiration, to reorganize and retool.

So what can we learn? I’ll restate a few points that I tend to repeat a lot at this site:

1) That the professions had become far too closed both institutionally and substantively, too quick to exclude or disdain rivalrous or alternative forms of expertise and practice. The great force of authentic innovation and service that gave the professions their power and wealth in the 19th Century was dissipating, replaced by rent-seeking and timidity. Paradoxically, this is also what made it so easy for each of them to be tackled in isolation by profit-seekers and regulators. Professionals were, over the course of the 20th Century, less and less socially connected to one another as an overall group and progressively less concerned with an overall ethos, a general sense of responsibility, mission and commitment to quality that applied to any professional in any field. The current ‘disruption’ hasn’t yet led to professionals reconnecting with each other–each group has tended to face its own crisis in isolation, in parochial terms, and even to cheer as other groups or professions lose their favored place at the table. Nor, for the most part, have any professional communities really tried to re-engineer the institutional structures of their work to reconnect with larger publics, to embrace a wider conception of their mission and expertise, or to reinfuse their practices with innovation. But there’s still time for most, if not all, of the major professions of the 20th Century to move in that direction.

2) That the middlemen of 20th Century culture industries, editors and publishers and producers and administrators, were vastly too narrow-minded in their assessment of what could count as “good culture”, and even what could sell as “profitable culture”. Some of this can be attributed to the overhead costs of 20th Century mass and elite art and culture. Those costs made risk-aversion sensible. But many brokers of taste, including critics inside and outside of academia, ended up believing in a vision of exclusivity. In fact, they ended up believing in it even when they said they didn’t, and continued to believe in it well after the underlying economics of cultural production changed for all but a very small subset of forms and genres. What the Great Disruption has revealed as an absolute fact is that there are a great many more people capable of writing, filmmaking, acting, photographing, reporting, cooking, staging, editing, programming, sculpting, storytelling, singing, painting etc. quite well, many more works to value or view or read than there once were. Moreover, some enabling technologies have let many people see behind the curtain to find that what was taken as great individual originality was in fact mastery of craft secrets and techniques. At the same time, most of us can see that there is still a very big difference between exceptional work (defined in a variety of ways) and ordinary “good” work, and equally that there is still bad culture. As the message of Ratatouille suggests, it may be right that anyone can cook, but not that everyone can–and that there are still artists like Remy whose work is distinctive and highly valued. After the disruption has run its course, the real question will be whether we can find a way to reward ‘ordinary’ creators for the value they generate in a way that is commensurable with their work and whether ‘extraordinary’ creators will still be in business in some fashion. My thought is yes to both–and it will be important to find an answer that suits both groups of producers.

3) In the 20th Century, we accepted the institutionalized, routinized use of people with ostensibly high-value professional training for tasks that didn’t require their expertise. Or well before the intrusion of certain kinds of rationalizing economies, the professions devalued their own work. Professors moved to marginalize and massify teaching before their administrations required them to do so, doctors moved to minimize contact with patients before insurers asked it of them, law firms assigned young lawyers to mechanically process large bodies of documentation in the discovery phase of litigation, and so on. The professions cleared the way for their own reorganization and mechanization largely to create more privileged terms of labor for the most senior or powerful professionals. This was a brief moment in the history of the professions, especially marked in the 1960s and 1970s, but it opened the way for what came later. If the current disruption has positive value, it might be to spur professionals to identify far more sharply what kinds of labor require extensive credentialing and training and to understand where there is a mismatch between the needs of the professions and the training they have insisted upon to this point. Some of this has already happened, either under duress or as a creative response to changed circumstances. More needs to happen.

Posted in Digital Humanities, Information Technology and Information Literacy, Intellectual Property | 4 Comments

Guns as Witchcraft

Over the holidays, after the shootings in Newtown, I was in a conversation on Facebook in which I reiterated my point from earlier in the year that in the United States, gun ownership and gun practices are culture, and as such, not likely to be quickly or predictably responsive to legislation or policy in any direction. I don’t say this to characterize guns (or anything else that falls into the big domain of “culture”, e.g., distinctive everyday practices and forms of consciousness) as something which should not be subject to official, governmental or institutional action, nor as something we cannot change. But as I said last summer, purposeful changes to culture towards a clearly imagined end are very difficult to accomplish.

In the course of that conversation, a colleague and I moved towards one of the comparisons I had in mind in making this caution, namely, the composite, complicated set of ideas and practices in much of contemporary sub-Saharan Africa that get somewhat misleadingly lumped together as “witchcraft”, “sorcery” or similar terms. Scholars studying Africa take great pains, for good reason, to offer nuanced, contextual accounts of witchcraft practice and discourse that, among other things, argue that the label itself derives from European colonial ideology and racialized ideas about “primitive societies”–a history which shapes contemporary understandings both inside and outside of Africa. However troubled the history of the label, there’s still a living, contemporary domain of African practices and beliefs that needs a name, and it’s a domain that’s entangled with the history of European imaginings of Africa and Africans. So for the moment, with many cautions, sorcery or witchcraft it is.

At least in southern Africa, I think folks reach for a single word not because it’s all the same thing, but because there are some connected “deep” ideas that express themselves in a wide variety of ways and contexts. In fact, not only is each manifestation of those ideas different, but you can actually see the deeper thinking mobilized by antagonists in various struggles, pulling in different directions. Witchcraft is a way to talk about why things happen in the world, in particular (but not exclusively) why bad things happen. As I’ve come to understand it, there are two particularly key propositions: that most of what happens to individual people, whatever changes their situation or status, stems from their social relations (both direct personal relationships and generalized sociality), and that such events or changes are worked or brought about through invisible spiritual means, whether that means personified or animate spirits or a more abstract and generalized spiritual force.

So if I become ill or suffer misfortune (on one hand) or experience a striking positive change in my individual circumstances (on the other), the interpretation that refers back to witchcraft or sorcery assumes that either change is a consequence of my social relations, transmitted into my life through the mobilization of invisible, indirect spiritual power. This sounds very abstract, and it is, which explains to some extent why these views are so adaptable to varying circumstances. They’re assumptions that can’t be easily shaken or discarded even by people who don’t believe in any of the specifics. It’s extraordinarily difficult to comprehensively dissent from background ideas or interpretations that most people you know share in some measure. It is, on the other hand, very possible to shape these ideas to fit a wide variety of aspirations and circumstances. The underlying concepts can allow people to come together for community healing, or to create a powerful social consensus against the misdeeds of the few. “Witchcraft” lets people describe and condemn exploitation and tyranny, but it also can mystify and empower exploitation and tyranny. It can give malicious family members and community malcontents new languages and possibilities for hurting others, or serve as a way to imagine and explore one of the deepest puzzles of human existence: why bad things happen to good people. Invoking sorcery can be a way to stifle initiative and creativity, or a way to complain about stagnation and suffering.

——–

In 1993, a man named Gian Luigi Ferri entered an office building in San Francisco, made his way to the 34th-floor offices of the law firm Pettit & Martin and went on a shooting rampage, killing eight people and wounding six before committing suicide. It’s never been clear exactly why he chose the firm as his target. The materials he left behind were mostly incoherent, but he blamed law firms in general for the failure of his businesses.

At the time of the shooting, my father was the managing partner of the Los Angeles branch of Pettit & Martin. (The firm dissolved in 1995, which many outsiders attributed to the impact of the murders, but as I recall it, the firm had underlying financial and managerial issues that had little to do with the shooting.) I remember speaking with him not long after the killings. His emotions, understandably, were unusually raw and vivid. Though he was prone to verbal displays of temper, he was normally quite precise and controlled about how and when he allowed that to show in his professional and public life, and he was never physically intimidating either at home or at work. On the other hand, as a former Marine, he was quite proud of his physical health and strength, and believed that if he were physically threatened he would be able and willing to defend himself without hesitation. As an adult, I once saw him unblinkingly and calmly stare down a man who was menacing the two of us with a knife, leading the other man to apologetically back away. As far as I know, he didn’t keep a gun in our house, though he was comfortable with and knowledgeable about guns. He had gone hunting with his father as a boy but told me a number of times that he had no taste for hunting as an adult.

What I remember as we talked about the shooting in San Francisco is that he believed, ardently and sincerely, that if he had been in the San Francisco offices that day he would have found a way to stop the gunman. He would have tackled him or disarmed him or found a weapon. I don’t think this was empty chest-thumping on his part: he was serious and sincere and very willing to concede that maybe he would have died in the attempt. But he maintained that he would have tried.

My father was speaking the language of American witchcraft. And in saying this, I do not for one minute mock or dismiss him or his counterfactual imagining of that horrible day. Gian Luigi Ferri was one kind of American sorcerer, and my father was another. The two deep cultural ideas we hold that manifest around guns and gun control alike–and around many other things besides guns–are as follows: 1) that individual action focused by will, determination and clarity of intent can always directly produce specific outcomes, and equally that individuals who fail to act when confronted by circumstances (including the actions of other individuals) are culpable for whatever happens next; and 2) that there are single-variable abstract social forces responsible for seemingly recurrent events, and that the proper establishing structure, rule or policy can cancel out the impact of that variable, if only we can figure out which one is the right one.

I’ll come back to #2 in a bit, because as I’ve put it here, it may not sound like a generalized American belief, but instead just the institutionalized faith of social scientists and policy-makers. #1 is probably easier for most Americans to recognize. Some of that is a generic liberal, Enlightenment idea about the sovereign individual, but the idea has a peculiar emotional and cultural intensity in the United States, a historical rootedness in a wide variety of distinctively American experiences and mythologies: the gunfighter in the West, the evangelical who saves both self and community, the engineer who finds a way to keep failure from being an option, the deification of the Founding Fathers as extraordinary individuals, Thoreau’s call to disobedience. It goes on and on. It’s a deep and abiding idea that expresses itself in otherwise antagonistic ideologies or very different local cultures across the country. That each of us can act as independent individuals, of our own accord, with deliberate intent, and change what would have been. Or in failing to act, be held responsible for what actually did happen. That idea can come to rest on very different moments and practices–or on fetish objects of various kinds.

Including guns. This is what it means to engage “gun culture”, and why that is such a difficult thing to do. Because there are other men (and women) like my father who believe as he did that if they were present at a moment of violence or trauma, they would find a way to stop it. For many of them, a gun provides that assurance. And while you can say that it probably would not turn out that way, or that there is just as much possibility of an intervention making things even worse, this is just going deeper into the weeds. Because it’s not just the people imagining that they would save everyone who are the issue, but the killers, who are just as affected by a faith in individual action, often after a life in which they’ve been comprehensively denied any other way to believe in the consequentiality of their personal agency.

Maybe it’s possible to surgically remove guns from this latticework. But maybe it’s the bigger weave that’s the issue. Look at all the ways we acknowledge, encourage or make affordances for this deeper belief about ourselves, about why and how things happen in the world, and you begin to see a different challenge. There’s a reason why contemporary Africans who would just as soon defect from anything resembling witchcraft discourse find it hard to do. If I wanted to offer a different view about why anything, everything happens in the world, to explain that causation and consequence flow from accidents, from unmanageable interactions, from partial or dispersed forms of personhood and subjectivity, from systems and institutions, or many other similar formulations, I would be up against not just gun owners but gun control advocates, in general. Up against most Americans in their most intimate experiences and understandings of daily life and self-conception. Indeed, up against myself. Not only am I as much affected as anyone else, but like many Americans (and others around the world), I rather like this way of understanding causality and consequence. I like it both intellectually and romantically, as an ideal and a structure of feeling. Even as I know that it is in some sense defective as an actual explanation and as an aspiration, and that it generates and sustains many practices that I dislike or oppose.

This is where idea #2 kicks in. The one problem with a pervasive belief that what happens to us is the consequence of our individual actions (or failure to act) comes when we see in our larger national or global culture that some of what we attribute to the willful actions of individuals seems to be recurrent, patterned, widespread. This is a common problem for every deeply vested local or particular cultural vision of selfhood and society. Witchcraft discourse in southern Africa talks about both individual acts of sorcery and about the question of whether (or where) sorcery is systematic or generalized, and how to relate the two. What I’d argue is that Americans work out this distinction by believing that recurrent or patterned actions are the result of the relationship between a single social variable expressed as individual actions and a single particular political design that permits or encourages that expression. That sounds modern and bureaucratic, but its American roots lie in constitutionalism, in the proposition that concretely correct social designs or covenants can express–or suppress–any given will to act. That respect for religious freedom, for example, can arise from William Penn setting that as an initial condition of his colony rather than being, as Peter Silver and other historians point out, an emergent result of many social interactions that did not have religious freedom as an objective, including settler mobilization against Native Americans. This can be a secular vision or a religious one, or both, or neither. The Devil can serve as an explanation just as well as guns or video games or lack of mental health care or media attention.

We believe that we can fix problems that we describe and perceive as singular issues. We tinker endlessly with machinery that seeks to identify the single establishing rule, the single malformed covenant, the single enabling policy that expresses or stifles individual action. That produces killers who mass murder children or produces saviors who would protect them. How quick we are to rush to our snipe hunts, running through dark woods. We’re told, often, that we break apart conjoined, messy problems temporarily, so that experts can study and understand, so that policy can be made, but that somehow we will reassemble it all at some point.

That point never comes because just as with our faith in our individual action, a successful reassemblage hits us hard in our deeper cultural understandings of why bad and good things happen. We don’t have a good language for intentional social or political action to achieve progress that bows to a messier, more partial, more complex-systems understanding of the world and all the things in it. We may have an intellectual vocabulary for that, but not yet (maybe not ever) a deeply felt, emotional experience of it. I feel sometimes as if I’m groping for that new sense of self and society, trying to get it to take root in myself, but just for myself, I have to figure out how to speak it and imagine it in a way that doesn’t sound like fatalism or resignation, and in a language that has everyday resonance. (Which this essay certainly does not.)

So we go on thinking that when the moment comes, we’ll do the right thing, and that in between, we’ll someday find the law, the policy, the rule, the Constitutional amendment that will keep individuals from doing some particular wrong thing, that will push some abstract force or some Satanic provocation under the national rug once and for all. Just as witch-finding and healing, condemnation and consensus, never somehow seem to prevent or check either the personalized force of sorcery or its pervasive spirit.

Posted in Oh Not Again He's Going to Tell Us It's a Complex System, Politics | 6 Comments

Why It’s Not Even Worth Talking About Gaza

I don’t often link primarily to just say, “I’ll have what he’s having, bartender”, but this short essay by David Atkins on Gaza is a good reason to break that habit.

As Atkins says, it’s pointless and thankless in several respects to even try to talk about Gaza right now. I’ll add to his list a reprint of a point that I made on Facebook:

“One of the interesting undertones in the current moment in Gaza is the extent to which all but a very few of the people involved–leaders, civilians, victims, observers–narrate themselves as helpless, as passive components of a structure, a machinery, a territoriality, a history. Not only does almost everyone speak as if there are no choices, almost no one speaks as if the events, the actions, the things being done, have any hope of doing anything but eventually leading to more things that will be done. The only thing that animates people is a fury at any other group or faction’s expression of passivity or helplessness. Everyone imagines themselves without agency and all other groups as fully agentive. Everyone is all reaction to some other action.

Everyone might well be right.”

Posted in Politics | 14 Comments