Mucking Out Mead

Via Mohamad Bazzi of New York University, I learned last week about several articles published in the last few years by Lawrence Mead, also of NYU. I had a vague awareness of Mead as a kind of post-Moynihan “pathology of poverty” scholar who had had some influence over public policy in the 1990s, but otherwise I hadn’t really encountered his work in detail before. Bazzi was responding to a July 2020 article in the journal Society entitled “Poverty and Culture”. After I read it, I looked at a 2018 article by Mead in the same journal, titled “Cultural Difference”. The two substantially echo each other and are tied to a 2019 book by Mead, which I really dread to look at.

1. In “Poverty and Culture”, Mead is talking about “structural poverty” (though he doesn’t use the term), and yet does nothing to reference a very large body of comparative social science on structural poverty that has been published between 1995 and 2020. His references to poverty scholarship are entirely to work from the mid-1990s or before.

2. Paragraph 3 in the article chains together a set of assertions: low-income neighborhoods lack “order”, marriage is in steep decline, poor children do poorly in school, and “work levels among poor adults remain well below what they need to avoid poverty”. These require separate treatment, but they are chained together here to form a composite image: structural poverty causes “disorder”, it is tied to low rates of marriage and school performance, and it’s because the poor don’t work enough. This is sloppy inferential writing, but it is only an appetizer before a buffet of same.

3. Poverty arises, says Mead, from not working or from having children out of wedlock who are not supported. Not just here but throughout his article (and similar recent work), Mead seems to be completely unaware of the fact that in the contemporary United States, some people in structural poverty or who are close to the federal government’s official poverty line are in fact employed. It also takes some astonishing arrogance and laziness to say that arguments that racial bias, lack of access to education, or lack of access to child care play a role in causing structural poverty have been flatly and indisputably disproven—with only a footnote to your own book written in 1992 as proof of that claim.

4. On page 2 (and in other recent Mead writings) we arrive at his core argument, which is basically a warmed-over version of Huntington’s “clash of civilizations”, though even that goes unreferenced; he has a few cites of modernization theory and then one of Eric Jones’ European Miracle and McNeill’s Rise of the West, again without acknowledging or seeming even to be aware of the vast plenitude of richly sourced and conceptually wide-ranging critiques of modernization theory and Jones’ 1987 book. He doesn’t even seem aware that McNeill’s own later work cast doubt on the idea that the West’s internal culture was the singular cause of European domination after 1500.

5. So let’s spend time with the intensely stupid and unsupportable argument at the heart of this article that vaguely poses as scholarship but in fact is nothing of the sort. Mead argues that Europeans who came to the Americas were all “individualists” with an inner motivation to work hard in pursuit of personal aspiration and that they all “internalized right and wrong” as individually held moralities, whereas Native Americans, blacks and “Mexicans absorbed by the nation’s westward expansion” were from the “non-West” and were hence conformists who obeyed authoritarian power and who saw ethics as “more situational and dependent on context”, “in terms of what the people around them expect of them”.

6. So today’s poor are “mostly black and Hispanics, and the main reason is cultural difference. The great fact is that these groups did not come from Europe…their native stance toward life is much more passive than the American norm…they have to become more individualist before they can ‘make it’ in America. So they are at a disadvantage competing with European groups—even if they face no mistreatment on racial grounds”. This, says Mead, explains “their tepid response to opportunity and the frequent disorder in their personal lives”.

7. This entire argument would not be surprising if you were reading the latest newsletter from Stormfront or the local chapter of the Klan. But as scholarship it is indefensible, and that is not merely a rejection of the ideological character of the argument. Let me muck out the stables a bit here in factual terms just so it is clear to anyone reading just how little Mead’s argument has to do with anything real.

8. Let’s start with the African diaspora and the Atlantic slave trade. In West and Central Africa between 1400 and 1800, what kinds of societies that were in contact with the Atlantic world and were drawn into the slave trade are we dealing with in terms of moral perspectives, attitudes towards individualism and aspiration, views of work, and so on?

9. First off, we’re not dealing with one generically “African” perspective across that vast geographical and chronological space, and we’re not dealing with collective or individual perspectives that remained unchanged during that time. I’m going to be somewhat crudely comparative here (but what I’m calling crude is still about ten orders of magnitude more sophisticated than Mead’s crayon scrawling: in his 2018 essay “Cultural Difference”, Mead says “most blacks came from Africa, the most collective of all cultures”.) Consider then these differences, quickly sketched:
a. Igbo-speaking communities in the Niger Delta/Cross River area between 1600 and 1800 famously did not have chiefs, kings or centralized administrative structures but were woven together by intricate commercial and associational networks, and in these networks both men and women strove to ascend in status and reputation and in wealth (both for themselves and their kin). There was a strong inclination to something we might call individualism, a tremendous amount of emphasis on aspiration and success and something that resembled village-level democracy.
b. Mande-speaking societies associated with the formation of the empire of Mali in the upper Niger and the savannah just west of the Niger and subsequent “tributary” empires like Kaabu in upper Guinea were structured around formal hierarchies and around the maintenance of centralized states with an emperor at the top of the hierarchy. But they also invited Islamic scholars to pursue learning and teaching within their boundaries (and built institutions of learning to support them) and reached out to make strong new ties to trans-Saharan merchants. Moreover, the social hierarchies of these societies also had a major role for groups of artisans often called nyamakalaw: blacksmiths, potters, weavers, and griots or ‘bards’, who not only were a vibrant part of market exchange but who also had an important if contested share of imperial authority that involved a great deal of individual initiative and aspiration.
c. The Asante Empire, one of a number of Akan-speaking states in what is now Ghana, rose to pre-eminence in the 18th and 19th Century, and both its rulers and its merchant “middling classes” showed a tremendous amount of personal ambition and investment in individual aspiration, as did their antagonists in the Fante states to the south, who were heavily involved in Atlantic trade (including the slave trade) and who were very much part of Atlantic commercial and consumer culture. Cities like Anomabu and Cape Coast (and others to their east) were commercial entrepots that in many ways resembled other cosmopolitan Atlantic port cities in Western Europe and the Americas.
d. (I can keep going like this for a long while.) But let’s throw in one more, just because it’s illustrative, and that’s the Kingdom of Dahomey. It was an authoritarian state—though so was most of “the West” in the 17th and 18th Century, coming to that soon—but it was also deeply marked by religious dissent from those who profoundly disagreed with their ruler’s participation in the Atlantic slave trade, as a number of scholars have documented, as well as very different kinds of personal ambitions on the part of its rulers.
e. The upshot is that you cannot possibly represent the societies from which Africans were taken in slavery to the Americas as conformist, as uniformly authoritarian, as fatalistic or uninterested in personal aspiration, or as unfamiliar with competitive social pressures. I think you can’t represent any of them in those terms (I’m hard-pressed to think of any human society that matches the description) but none of the relevant West or Central African societies do. It’s not merely that they don’t match, but that they had substantially different ideas and structures regarding individual personhood, labor, aspiration, social norms, political authority, etc. from one another.

10. Let’s try something even sillier in Mead’s claims (if that’s possible), which is the notion that “Hispanics” or “Mexicans” are “non-Western” in the sense that he means. Keep in mind again that the argument depends very much on a kind of notion of cultural difference as original sin—he doesn’t even take the easy Daniel Patrick Moynihan route of arguing that the poor are stuck in a dysfunctional culture that is a consequence of structural poverty—an argument that has a lot of problems, but it is in its way non-racial (it’s the same claim that undergirded J.D. Vance’s Hillbilly Elegy, for example): culture is a product of class structure which then reinforces class structure in a destructive feedback loop. Mead is pointedly rejecting this view in favor of arguing that cultural difference is an intact transmission of the values and subjectivities of societies from 500 years ago into the present, and that the impoverishing consequences of this transmission can only be halted by the simultaneous “restoration of internal order” (e.g., even tougher policing) and the black and brown poor discovering their inner competitive individualist Westerner and letting him take over the job of pulling up the bootstraps.

11. Right, I know. Anyway, so Mead has a second group of people who are carrying around that original sin of coming from the “non-West”, full of conformism and reliance on authoritarian external commands and collectivism and avoidance of individual aspiration: “Hispanics”, which at another point in the article he identifies more specifically as “Mexicans”. I would need a hundred hands to perform the number of facepalms this calls for. Let’s stick to Mexico, but everything I’m going to say applies more or less to all of Latin America. What on earth can Mead mean here? Is he suggesting that contemporary Latinos in the United States who have migrated from Mexico, are the descendants of migrants from Mexico, or are the descendants of people who were living within the present-day boundaries of the United States back when some of that territory was controlled either by the nation-state of Mexico or earlier as a colonial possession of Spain, are somehow the product of sociohistorical roots that have nothing to do with “the West”?

12. Mead does gesture once towards the proposition that by “Western” he really means “people from the British Isles and northern Europe”; at other times, he seems to be operating (vaguely) with the conception of “Western” that can include anybody from Europe. He could always make the move favored by Gilded Age British and American racists and claim that Spain, Portugal, Italy, and Greece are not really Western, that their peoples were lazy collectivists who liked authoritarian control, and so on—it’s consistent with the incoherence of the rest of the argument, but he may sense that the moment he fractures the crudity of “Western” and “non-Western” to make more specific claims about the sociopolitical dispensations of 1500 CE that produced contemporary “cultural difference”, he’s screwed. In his 2018 essay, it becomes clearer why he would be screwed by this, because then he couldn’t contrast European immigrants from Italy and Eastern Europe in the late 19th Century with the really-truly “culturally different” black and brown people—if he drops Spain out of “Western” (by which he really means “white”), he’s going to lose his basis for saying that Giovanni DiCaprio had a primordial Western identity but Juan Castillo is primordially non-Western.

13. He’s screwed anyway, because there is no way you can say that Mexican-Americans are “non-Western” because they derive their contemporary cultural disposition from some long-ago predicate that is fundamentally different from that of white Americans and that this has nothing to do with the ways that societies in the Americas have structured racial difference and inequality. What is he even thinking is this ancient predicate? That Mexican-Americans are reproducing the sociocultural outlook of Mesoamerican societies that predate Spanish conquest? That Spain was non-Western, or that the mestizo culture of early colonial Mexico was totally non-Western? I can’t even really figure out what he thinks he is thinking of here: the Occam’s Razor answer is “well, he’s a bigot who wants to explain African-American and Latino poverty as a result of a ‘cultural difference’ that is a proxy for ‘biological difference’”, because his understanding of the histories he’s flailingly trying to invoke is so staggeringly bad that you can’t imagine that he is actually influenced by anything even slightly real in coming to his conclusions.

14. To add to this, he clearly knows he’s got another problem on his hands, which is why Asian-Americans aren’t in structural poverty in the same way, considering that most of his Baby’s First Racist History Stories conceptions of “cultural difference” would seemingly have to apply to many East, Southeast and South Asian societies circa 1500 as well. (And to Europe too, but hang on, I’m getting there.) In his 2018 essay, he’s got some surplus racism to dispense on them: some of them “become poor in Chinatowns” (citing for this a 2018 New York Times article focused on “Crazy Rich Asians”), and saying that despite the fact that they do well in school, Asians do not “assert themselves in the creative, innovative way necessary to excel after school in an individualist culture” and “fall well short of the assertiveness needed to stand out in America”. But he’s not going to get hung up on them because they pretty well mess up his argument, much like anything remotely connected to reality does.

15. Another reality that he really, really does not want to even mention, because he can’t have any conceivable response to it, is “well, what about persistent structural poverty in parts of the United States where the poor are white? And not just white, but whiteness that has pretty strong Scots-Irish-English roots, like in parts of Appalachia?” In terms of how he is conceptualizing cultural difference, as a cursed or blessed inheritance of originating cultures five or six hundred years old, he’s completely screwed by this contemporary structural fact. He can’t argue that it’s just a short-term consequence of deindustrialization or globalization—the structural poverty of Appalachia has considerable historicity. It used to give white supremacists fits back in the early 20th Century too.

16. Moreover, of course, everything I’ve said above about the complexity of the West and Central African origins of people taken across the Atlantic as slaves goes very much for Europeans arriving in the Americas. The idea that the Puritans, for example, represent a purely individualistic Western culture pursuing individual aspiration who are not ruled by and conforming to external authority is a laughably imprecise description of the communities they made. The sociopolitical and intersubjective outlooks derived from the local origins of various Europeans arriving in the Americas between 1500 and 1800 were substantially different. The states that many came from were absolutist, hierarchical, authority-driven, and the cultures that many reproduced were patriarchal, controlling, and not particularly anything like Mead’s sketch of “Western” temperaments, which is just a kind of baby-talk version of the Protestant work-ethic, a concept which actual historians doing actual research have complicated and questioned in a great many ways. Moreover, as many scholars have pointed out, the conflicts between these divergent origins were substantial until many colonists found that the threat of Native American attacks and slave revolts pushed them towards identifying as a common “white” identity.

17. Speaking of slavery, it’s another place where the entire premise of Mead’s article is just so transcendently awful and transparently racist. Mead is arguing that somehow the cultural disposition of a generic “Africa” survived intact through enslavement, which even the most enthusiastic historian of black American life would not try to claim for more positive reasons, and that slavery had no culture-making dimension in its own right. The debate about African influences, “Africanisms” and so on in the African diaspora is rich, complicated, and long-standing among scholars who actually do research, but that same research amply documents how the programmatic violence of slavery aimed to destroy or suppress the diverse African heritage of the enslaved. That research also documents the degree to which Africans in the Americas participated in the making of new creole or mixed cultures alongside people of European, Native American, and Asian descent. It’s easy to see why Mead has to make this flatly ridiculous claim and avoid seeing slavery as a culture-making (and culture-breaking) system, because it leads right away to the proposition that structural poverty among African-Americans has causal roots in enslavement, in post-Civil War impoverishment, in racial discrimination and segregation in the 20th Century. It also takes some spectacular, gross misperception, by the way, to see slave-owners collectively as canonical examples of “Western” hard-working, aspiration-fulfilling individualists. Right, right, having a hundred slaves plow your fields for you under threat of torture and death is the essence of inner-driven individualism and hard work.

18. I’m leaving completely aside in all of this an entirely different branch of absurdity in the article, which is that Mead says nothing about growing income inequality and lack of social mobility in the United States over the last thirty years, and nothing about what life is actually like for people who are working minimum wage jobs with all of what he calls “Western” motivations—with an individualist sensibility, with aspirations for improvement, and so on. He might say that getting into the historical details about Western and non-Western cultural differences is just beyond his remit in a short article connected to a long project. I don’t think he can say that legitimately, because extraordinary claims call for extraordinary evidence, even in a short article. But there is no way that he can excuse not citing or being even aware of the last thirty years of social science research on structural poverty in the United States. The footnotes in both his 2020 article and his 2018 article are like time-capsules of the 1990s, with the occasional early-2000s citation of scholars like Richard Nisbett.

19. I’ve bothered to lay all this out because I want people to understand that many critiques that are dismissed breezily as ideological or “cancel culture” derive from detailed, knowledgeable, scholarly understandings of a given subject or concept—and that in many cases, if a scholar or intellectual is arguing that another scholar should not have a platform to publish and speak, it is because the work they are producing shows extraordinary shoddiness, because the work they are producing is demonstrably—not arguably, not contentiously, but unambiguously—untrue. And because it is so dramatically bad, that work has to raise the question of what that scholar’s real motivation is for producing it. Sometimes it’s just laziness, just a case of recycling old work. That isn’t anything that requires public dismissal or harsh critique.

But when the work is not only bad, but makes morally and politically repellant claims, it’s right to not merely offer public criticism but to raise questions about why a respectable scholarly journal would offer a place to such work: it mocks the basic ideals of peer review. It’s right to raise questions about why a prestigious university would regard the author of such work as a person who belongs on its faculty and tout him as an expert consultant in the making of public policy. That may be an accurate description of his role in setting policy on poverty in the past and his past work may possibly be not as awful as this recent work (though the contours of some of this thinking are visible, and reveal anew just how deeply flawed the public policy of the Clinton Administration really was). This is not about punishing someone for past sins, nor for their political affiliations. It is about what they have chosen to put to the page recently, and about the profound intellectual shoddiness of its content, in service to ideas that can only be called racist.


Masking and the Self-Inflicted Wounds of Expertise

A broken clock tells the time accurately twice a day, but Donald Trump tells the truth even less often than that. Never on purpose and rarely even by accident. And yet he told an accidental truth recently, one that doesn’t reflect well on him, in saying that some Americans wear masks consistently today because masks have become a symbol of opposition to Trump.

Almost everything that involves the actions of the federal government has been like that since the fall of 2016. What the government does signifies Trump, what it doesn’t do or pointedly refuses to do signifies resistance to his authority. It isn’t instantly true: some policies and actions of the government just continue to signify ordinary operations and the provision of expected services. But the moment Trump becomes even slightly aware of any given policy or action and addresses it even once, the 60-40 divide that now structures two cultural and imaginative sovereignties instantly manifests and the signifiers fall rapidly into Trump’s devouring gravitational pull.

It’s likely true that in any other administration, typical public health discourse about covid-19, including advice on masks, would have been met with some paranoia or resistance, all the more so if masks and the constriction of economic activity were co-identified. It’s also true that Trump is the explosive, catastrophic culmination of thirty years of deliberate Republican subversion of the authority of scientific expertise and the cultivation of the logics of conspiracy theory. Some degree of partisan division in the reaction to various suggestions and orders would have been inevitable even were the President a competent, reasonable adult who believed that the Presidency must at least rhetorically and conceptually be devoted to the leadership of the entire body politic, not an inward-turning constituency of far-right Americans trying to preserve their racial and cultural privileges. No matter what, we would have had a surplus of the sort of fragility, weakness, incoherence and malice that has been on display in public hearings in Florida, California and elsewhere over masking policies. But without Trump, I think that would have been more clearly a fringe sentiment with relatively little weight on the body politic. With him, it is a crushing burden.

But if we hope to eventually emerge from this catastrophic meltdown into a better, more democratic, more just, and more commonsensical nation–perhaps even just into a country that possesses a much larger supply of the adult maturity required to just wear a mask for a year or so in order to safeguard both our own personal health and the health of our fellow citizens–then we have other kinds of work to do as well. One of the major tasks is that experts and educated professionals have got to learn to give up some of their own bad habits. If Republicans have worked to sabotage science and expertise in order to protect their own interests from regulation or constraint, then experts have frequently amplified those ill-meant efforts through their own ineptitude, their own attraction to quack social science and wariness about democratic openness.


This is an old theme for me at this blog, but the masking debacle provides a fresh example of how deep-seated the problem really is.

The last fifteen years have been replete with examples of how many common assumptions we make about medical therapies, sociological and economic phenomena, drivers of psychological behavior and experience and much else besides rest on very thin foundations of basic research and on early much-cited work that turns out to be a mixture of conjecture and the manipulation of data. We know much less than we often suppose, and we tend to find that out at very inopportune moments.

In the present moment, for example, it perhaps turns out that we know much less about just how long a virus like covid-19 can be infectious in human respiration, how far it travels, and precisely how much wearing a fabric mask with some form of non-woven filter inside might protect a person who was wearing it properly, in relationship to a variety of atmospheric conditions (indoors, outdoors; strong air movement, little air movement; rapid athletic respiration, ordinary at-rest respiration) etc. There are very legitimate reasons why these are not things we can study well right now in the middle of this situation, and why they are a hard set of variables to measure accurately even when the situation is not urgent.

And yet. It has seemed likely from the very first news of a novel coronavirus spreading rapidly in China that wearing a mask, even a simple fabric or surgical mask, might help slow the spread of the virus and offer some form of protection to the wearer, however humbly or partially so.

The early response of various offices within the US government likely will receive considerable critical attention for the next decade and beyond. Not only did the unspeakably self-centered political imperatives of the Trump Administration intervene at a very early juncture, but also there seem to have been some basic breakdowns in competence and leadership at the CDC and elsewhere.

The question of masks, however, was bungled in a more complicated and diffuse way. It’s now clear that most public health officials and medical experts knew full well from the very first news about covid-19 that even surgical or fabric masks, but especially N95 or other rated masks, would provide some measure of personal and collective protection for any wearer. And yet many voices stressed until late March 2020 that masks weren’t useful to the general public, that social isolation was the only effective counter-measure, that no one but medical workers or people in close contact with covid-19 patients should be wearing masks. Why not tell people to wear masks from the outset?

The answer seems to be only very slightly about any degree of uncertainty about the empirical evidence for mask-wearing. What really seems to have driven the reluctance to recommend mask-wearing are three basic propositions:

1) That if the benefits of mask-wearing were acknowledged, this would spur a massive amount of panic buying and hoarding of rated masks, which were, after all, a commonly available commodity used less for protection against infectious disease and more for protection against inhaling minute particulate matter in woodworking, drywalling and other projects.

2) That the general public would not know how to properly wear any mask, whether a simple fabric mask with non-woven filters or a rated mask, in order to ensure actual protection from infection–that the masks only conferred meaningful protection if fitted correctly, if not touched constantly by hands during a period of exposure, if the mask-wearer did not touch their face otherwise, if rigorous hand-washing preceded and followed mask-wearing, and if some form of protective eyewear were also worn–and would hence not receive the expected protection from even non-rated masks.

3) That wearing masks might give people a false sense of security and prompt them to circumvent the more critical and impactful forms of social distancing and isolation that were (accurately) seen as more critical to mitigating the damage of the pandemic.


There are two basic problems with the line of reasoning embedded in those propositions. The first is that they reflect how profoundly unwilling educated professionals are to speak to democratic publics in a way that notionally imagines them as capable of understanding more complicated procedures and more complicated facts.

I know what you are saying: well, have you watched the YouTube videos of people testifying angrily about masks, in which they appear to be barely capable of understanding how to tie their own shoes, let alone how to deal with a public health emergency like this pandemic? Yes, and yes, those folks are appalling and yes they seem to represent a larger group of Americans.

The problem in part is that their behavior and the public culture of educated professionals have evolved in relational tandem with one another–and have come to be caught up in the expression and enforcement of social stratification. Because we expect people to be irrational and incapable of understanding, we offer partial explanations, exaggerations and half-true representations of research findings and recommended procedures and justify doing so on the grounds that it is urgent to get the outcomes we need to prevent some greater harm–to get people to behave properly, to get funds allocated, to get policies enacted. But it is not a secret that we are doing so. The news gets out that we amplified early reports of famine in order to get the aid allocated in time to make a difference, that we amplified the impact of one variable in the causation of a complicated social problem because it’s the only one we can meaningfully act upon, and so on. The people we’re trying to nudge or change or move know they’re being nudged. They know it from our affect, they know it from their own fractured understandings of the information flowing around them, they know it because it’s a habit with a long history. So they amplify their resistance in turn, even before the Republicans manipulate them or Donald Trump malevolently encourages them.

And in turn what this does is also commit experts to an increasingly unreal or inaccurate understanding of social outcomes in a way that corrodes their own expertise. The experts start to be vulnerable to manipulation by other experts who provide convenient justifying explanations for nudging or manipulation. “Make the plates half as big and it’s like magic! People eat less, obesity falls, the republic is saved! You don’t have to actually talk to people any more or try to understand them in complex terms!” Most of that thinking rests on junk modelling and Malcolm Gladwell-level simplifications once you peel it back and take a close look.

Even when the causes of behavior are in some sense simple, so many experts look away if it turns out the causes are in the wrong domain or are something they themselves are ideologically forbidden to speak to with any clarity. Take for example the fear of hoarding in the early reluctance to clearly recommend mask usage. It’s true that hoarding was a problem and it’s clear it could have been far worse still had the general public come to believe that owning a package of N95 masks was as important as stocking up on toilet paper or making a sourdough starter.

But what’s the problem there? It’s not in the least bit irrational under our present capitalist dispensation to buy up as much as you can of a commodity that you suspect is about to gain dramatically in value. Buy low, sell high is a commandment under capitalism. In our present crisis, we’ve all felt outrage at the men who fill storage units full of hand sanitizer and PPE and called them hoarders. But they’re just the down-market proles that the nightly news feels comfortable mocking. There’s been just as much up-market hoarding, but there we call it business. The President of the United States has helped fill the troughs for various hogfests with his promotion of hydroxychloroquine and so on, but beyond that, organized profiteering has unfolded on more spectacular and yet sanctified scales.

At whatever scales, if the problem is hoarding rather than altruism in a public health crisis, if the problem is someone pursuing profit instead of saving lives, then name the problem for what it is: capitalism as we know it and live it. That’s not ideology or philosophy, it’s plain empirical fact. It’s fine to say that you are facing a problem whose cause is utterly beyond your capacity to address and beyond your expertise to understand. It is not fine to avoid doing that in order to launder the problem so that it comes out being something you know how to describe and feel you can do something to affect. In this case that “something” is to offer a half-truth (masks aren’t useful) in the thought that it might impede or slow down a basically rational response that threatens your capacity to act in a crisis.

I keep saying that expertise needs to respect and emulate the basic idea of the Hippocratic Oath, most centrally: first, do no harm. It is less harmful to name a problem for what it is, even when you cannot deal with it as such and your expertise does not really extend to it. It is less harmful to tell democratic publics what you know to the extent that you know it than to try to amplify, exaggerate or truncate what you know because you’re sure (with some justification) that they will not understand the full story if you lay it out. I understand the impulses that drive expert engagements with publics, but those impulses, even with the best of intentions, end up fueling a fire that far more malicious actors have been building for decades.

Posted in Oath for Experts, Oh Not Again He's Going to Tell Us It's a Complex System, Politics | 4 Comments

Knowing Better

I’m struggling to process my own discomfort at the thought of either cancelling a fall semester or doing it only online with the primary intention of protecting the health of faculty and staff.

Assuming that the still-fragmentary data about the pandemic holds somewhat true, students who are 18 to 22 years old would be right to think that the risk to their own health from gathering together on a residential campus this fall is relatively small. They’re not invulnerable, of course–there are people in that age range who are immuno-compromised, there are people in that age range who have gotten very sick or died from covid-19 without apparent vulnerabilities, and there is the possibility that even asymptomatic or lightly symptomatic cases of coronavirus may pose unknown long-term health threats given how little we really know about the disease. On the other hand, one thing we do know is that it’s very contagious. I do not think it’s likely that colleges and universities can have a testing regimen sufficient to ensure that everyone who comes to campus is not an infectious carrier. By fall, I expect that a much larger number of people will have been exposed to it, whether or not campuses reopen. If they reopen, it’s almost certain that covid-19 will be a constant threat during the semester.

The major threat would be to older faculty and to staff who have regular contact with students. We could continue to hold most of our meetings remotely and stay away from each other, but if students are here, the people who teach them, serve them food, clean the buildings, attend to their mental and physical health, counsel them on academic and community matters, discuss their financial aid, etc., will inevitably be at some risk of exposure to a large community pool of potential carriers, even with some form of PPE (a non-trivial thing to secure in sufficient quantities in and of itself).

I’m a fat guy with high blood pressure in his mid-50s, so this is a meaningful threat to my survival. I should be, rationally, all for anything that will allow me to continue in relative isolation while still getting paid and doing as much of my job as I can in ways that are as creative and professional as I can manage as long as possible. And rationally, I am.

My discomfort is in the contrast between that future for me and my wider society. Many people have proposed that this is a national and global challenge that compares in its intensity and exigency and unpredictability to wartime. A few of the people using that structure of metaphor should probably think again about it–our utterly failed national leadership is just amplifying its failure when it talks in these terms. But mostly it’s meant sincerely and mostly I take it to heart. It’s because I take it to heart that I’m uncomfortable.

I’m uncomfortable because I think closing major institutions and workplaces (academic and otherwise) through the fall and possibly even longer, while finding ways for professionals and white-collar employees to continue to productively work remotely and likely at the same time furloughing or terminating the employment of people who can’t work remotely, doesn’t feel like wartime to me. It doesn’t feel like wartime that I should be solicitously protected from a risk to my health and a risk to my livelihood at once while some people are fired and other essential employees are compelled to take risks, often for little to no economic reward and with little national support beyond the same empty gladhanding we have given men and women sent to die in misbegotten wars since 2001–grocery clerks, delivery people, health care professionals, farm workers, meatpackers, police and fire, and so on. Wartime means shared sacrifice, shared danger, shared risk.

If we can’t all stay home and work on laptops–and plainly we can’t–there is part of me that thinks we all should be on the same frontlines, in the same foxholes, enduring the same bombardments. Not without precautions–masks, distancing, hand-washing, the whole thing. Not without the equivalent of 4F–the immuno-compromised, the highly vulnerable, in all industries and jobs given leave to stay home and be paid securely for the duration. But the rest of us–even me, obese and high blood pressure and all–out there like everyone else. Not for the sake of “the economy”, which needs a total transformation. Not for the 1%, not for anyone’s political prospects. But just as there has been solidarity in being apart to stretch out the curve, if by September some of us are in the soup of contagion with no choice (or in the abyss of unemployment in an especially cruel and unequal national economy), I feel as if there should be solidarity in the inescapability of threat. And I believe enough in the mission of my work to think that my students deserve to continue their studies, and to continue them in a format better than online–to think that there is a value in facing this risk. At least as much value as delivering packages, stocking shelves, collecting garbage, producing food and other services we have deemed so essential (if poorly compensated) that we feel they must continue regardless. I’m in no rush to say that a college education is inessential or can be delayed without cost, and not merely because that’s my meal ticket. I honestly believe it, more than ever with my own child in college.

I know there’s a lot wrong with these feelings, and that many of you feel very differently. Give me a moment and I will feel the same: that we should continue to shelter as long as possible, that no job is worth dying for, that we should not for a moment sanction the degree to which our systems have failed us all in the face of a deeply foreseeable, inevitable crisis by numbly accepting a hollow rhetoric about shared sacrifice and duty. Indeed, if you follow the wartime metaphor, this has always been the problem for dissenters and social critics in wartime–to seem to deny or dismiss the heroic willingness of soldiers to die and the homefront to endure shared hardship by refusing the call to unity. And yet the metaphor has a pull, and all the more because this crisis at least does not involve the contingent failure of the powerful to make peace with an enemy they did not have to fight. We could have been so much better prepared, but this crisis will come to humanity now and again no matter what we do, all the more so in the Anthropocene, as life (including pathogens and parasites) adapts to human bodies and systems as its primary ecosystem. This is one of the few existential crises that should put us in radical solidarity with one another.

So I grapple. I don’t want any of the short-term futures that September may bring. I can see the reasonableness of the ones I would guess to be most likely. I feel the pull of an unreasonable desire for something else.

Posted in Academia, Politics | 9 Comments

An Actual Trolley Problem

I’ve always seen a certain style of thought experiment in analytic philosophy and psychology as having limited value–say for example the famous “trolley problem” that asks participants to make an ethical choice about whose life to save in a situation where an observer can make a single intervention in an ongoing event that directs inevitable harm in one of two directions.

The problem with thought experiments (and associated attempts to make them into actual psychological experiments) is that to some extent all they do is clarify what our post-facto ethical narrative will be about an action that was not genuinely controlled by that ethical reasoning. Life almost never presents us with these kinds of simultaneous, near-equal choices, and we almost never have the opportunity to reason clearly in advance of a decision about such choices. Drama and fiction as well as philosophy sometimes hope to stage or present these scenarios to us, either to help us understand something we did (or that was done to us) in the confusion of events, or perhaps to re-engineer our intuitions for the next time. What this sometimes leads to is a post-facto phony ideological grandiloquence about decisions that were never considered in their actual practice and conception as difficult, competing ethical problems. Arthur Harris wasn’t weighing difficult principles about just war and civilian deaths in firebombing Dresden; he was wreaking vengeance plain and simple. Neoliberal institutions today frequently act as if they’re trying to balance competing ethical imperatives in a purely performative way en route to decisions that they were always going to make, that were always going to deliver predictable harms to pre-ordained targets.

But at this moment in late March 2020, humanity and its various leaders and institutions are in fact looking at an honest-to-god trolley problem, and it is crucial that we have a global and democratic discussion about how to resolve it. This is too important to leave to the meritocratic leaders of civil institutions and businesses, too important to be left to the various elected officials and authoritarian bureaucracies, too important to be deferred to just one kind of expertise.

The terms of the problem are as follows:

Strong national quarantines, lockdowns, and closure of nonessential businesses and potential gathering places in order to inhibit the rapid spread of the novel coronavirus that causes COVID-19 will save lives in all countries, whether they have poorly developed health infrastructures, a hodgepodge of privately-insured health networks of varying quality and coherence, or high-quality national health systems. These measures will save lives not by containing the coronavirus entirely but simply by slowing the rapidity of its spread and distributing its impact on health care systems which would be overloaded even if they had large amounts of surplus capacity. The overloading of health care facilities is deadly not just to people with severe symptomatic coronavirus infections but to many others who require urgent intensive care: at this same moment, there are still people having heart attacks, life-threatening accidental injuries, poisonings, overdoses, burns from fires, flare-ups of serious chronic conditions, and so on. There are still patients with new diagnoses of cancer or undergoing therapy for cancer. There are still people with non-COVID-19 pneumonias and influenza, still people with malaria and yellow fever and a host of other dangerous illnesses. When a sudden new pandemic overwhelms the global medical infrastructure, some of the people who die or are badly disabled, though they could have been saved, are not people with the new disease. Make no mistake: by the time this is all said and done, perhaps seventy percent of the present population of the planet or more will likely have been exposed to and been carriers of the virus, and it’s clear that some percentage of that number will die regardless of whether there was advanced technology and expertise available to care for them. Let’s say it’s two percent if we can space out the rate of infection: that is still a lot of people.
But let’s say it’s eight percent, including non-COVID-19 patients who were denied access to medical intervention, if we don’t have strong enforced quarantines at least through the first three months after the rate of infection in any given locale starts to rise rapidly. That’s a lot more people. Let’s say that a relatively short period of quarantine at that level–three months–followed by moderate social distancing, splits the difference. A lot of people, but fewer than in a totally laissez-faire approach.
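A back-of-envelope sketch can make the hypothetical figures above concrete. The exposure share and fatality rates are the essay’s own illustrative numbers (seventy percent eventual exposure, two percent versus eight percent fatality), not epidemiological estimates, and the world-population figure of roughly 7.8 billion is an added assumption:

```python
# Toy arithmetic for the essay's hypothetical scenarios. None of these
# figures are real estimates: 70% exposure and the 2%/8% fatality rates
# come from the essay's own "let's say" framing, and ~7.8 billion is an
# assumed 2020 world population.

WORLD_POPULATION = 7_800_000_000  # assumed, not sourced
EXPOSED_SHARE = 0.70              # the essay's "perhaps seventy percent"

def deaths(fatality_rate: float) -> int:
    """Deaths implied by a given fatality rate among the exposed."""
    return round(WORLD_POPULATION * EXPOSED_SHARE * fatality_rate)

managed = deaths(0.02)    # quarantines space out infections
unmanaged = deaths(0.08)  # overloaded systems, incl. non-COVID-19 deaths

print(f"managed:   {managed:,}")
print(f"unmanaged: {unmanaged:,}")
print(f"difference: {unmanaged - managed:,}")
```

Under these stipulated numbers the gap between the two scenarios is on the order of hundreds of millions of lives, which is the whole force of the trolley problem the essay describes.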

Against that, there is this: in the present global economy, with all its manifest injustices and contradictions, the longer the period of strongly enforced quarantine, the more another catastrophe will intensify, one that will destroy and deform even more lives. There are jobs that must continue to be done through any quarantine. Police, fire and emergency medical technicians must work. Most medical personnel in emergency care or hospitals must work. Critical infrastructure maintenance, all the way down to individual homes and dwellings, still has to be done–you can’t leave a leaking pipe in the basement alone for four months. Banks must still dispense money to account holders, collect interest on loans, and so on. And, as we’re all discovering, there are jobs which can be done remotely in a way that was impossible in 1965 or 1985. Not optimally from anyone’s perspective, but a good deal of work can go on in that way for some months. But there are many jobs which require physical presence and yet are not regarded as essential and quarantine-proof. No one is getting routine tooth cleaning. The barber shops are closed. Restaurants and bars are closed. Ordinary retail is closed. Amusement parks and concert halls are closed. All the people whose lives depend on those businesses will have no money coming in the door. Three months of that might be barely survivable. Ten months of that are not. Countries with strong social-democratic safety nets have some insulation against the damage that this sudden enforced unemployment of a quarter to a half of the population will do. Countries like the United States with almost no safety nets are especially exposed to that damage. But the world can’t go on that way for the full length of time it might take to save the most lives from the coronavirus pandemic. And make no mistake, this will cost lives as well.
Quite literally from suicide, from sudden loss of access to shelter and health care, from sudden inability to afford the basic necessities of everyday life. But also from the loss of any future: the spiralling catastrophe of an economic downturn as grave as the Great Depression will deform and destroy a great deal, and throw the world into terrifying new disequilibrium.

It cannot be that saving the most lives imaginable from the impact of the pandemic is of such ethical importance that the destructiveness of the sudden collapse of the world economy is unimportant. It cannot be that business as usual–already deformed by inequality and injustice–must march forward over the deaths caused by the unconstrained, unmanaged spread of COVID-19. Like many people, I find that this problem is not at all abstract. I’m 55, I have high blood pressure, I have a history of asthma, I’m severely overweight, and when I contract the disease, I may well die. I have a mother that I love who is almost 80, aunts and uncles whom I love who are vulnerable, and valued colleagues and friends who are vulnerable, and of course some who may die in this have no pre-existing vulnerabilities but just draw a bad card for whatever reason. But there has to be a point where protecting us to the maximum degree possible does more harm to others in a longer-lasting and more devastating way.

And this trolley problem cannot be left to the civic institutions and businesses that in the US were the first to act forcefully in the face of an ineffective and diffident national leadership. Because they will decide it on the wrong basis and they will decide it in a way that leaves all of us out of the decision. They will decide it with lawyers in closed rooms, with liability and insurance as their first concerns. They will decide it following neoliberal principles that let them use the decision as a pretext to accomplish other long-standing objectives–streamlining workforces, establishing efficiencies, strengthening centralized control.

It cannot be left to political authorities alone. Even in the best-case scenario, they will decide it in closed rooms, following the technocratic advice of experts who will themselves stick to their specialized epistemic networks in offering counsel: the epidemiologists will see an epidemic to be managed, the economists will see a depression to be prevented. In the worst-case scenario, as in the United States, corrupt leaders will favor their self-interest, and likely split differences not out of some transparent democratic reasoning but as a way to avoid responsibility.

This has to be something that people decide, and that people are part of deciding. For myself, I think that we will have to put a limit on lockdowns and quarantines and that limit is likely to be something like June or July in many parts of the United States and Europe. We can’t do this through December, and that is not about any personal frustration with having to stay at home for that length of time. It’s about the consequences that duration will wreak on the entirety of our social and economic systems. But it is not anything that any one of us can decide for ourselves as a matter of personal conscience. We the people have to decide this now, clearly, and not leave it to CEOs and administrators and epidemiologists and Congressional representatives and well-meaning governors and untrustworthy Presidents. This needs not to be a stampede led by risk-averse technocrats and managers towards the path of least resistance, because there’s a cliff at the end of all such paths. This is, for once, an actual trolley problem: no matter what we do, some people are going to die as a result of what we decided.

Posted in Academia, Grasping the Nettle, Politics, Swarthmore | 1 Comment

Free College: Not So Extreme

I’ve complained that for the most part, self-identified centrists and moderates prefer not to engage in direct arguments about their policy preferences in this election, but instead to argue about “electability”–essentially laundering their preferences through mute off-stage proxies, some other group of voters who won’t accept an “extremist” policy proposal simply because it’s extreme.

It’s not as if any given proposal is intrinsically extreme. (Well, up to a point: there are ideas that might in some absolute sense be deemed fringe–say, banning any policy that accepts that the Earth is round and that the solar system is heliocentric.) “Extreme”, for the most part, is a judgment about how far a given idea is from some perceived stable consensus or status quo. As such, it’s more a marketing term than an empirical description: you make something extreme by describing it as such, over and over again, much the same way that you remind people that Colgate has new whitening agents and an improved fluoride formula.

Let’s take by way of an example the proposition that making public higher education free to all citizens and residents is a really extreme proposal. So extreme that it has been the normal public policy of many other liberal democracies (and a few non-democracies). So extreme that up to 1980 or so, it was in effect the policy of most states in the United States, in that there was a sufficient level of public funding for universities and community colleges that most could, if they chose, attend college for very little.

How did that become “extreme”? Through a steady thirty-year effort to defund public higher education, which simultaneously raised the cost to prospective students while degrading the quality of the service it provided. Why exactly did we do that? Largely because we had thirty years of both Republican and Democratic administrations that turned away from public goods in general while cutting taxes, thirty years of austerity talk about inefficiencies and the need for private competition, and thirty years of educated elites trying to slow increasing access to higher education as union-protected high-wage manufacturing was transferred overseas and high-paying professional work that required educational credentials became the only alternative to low-paying service jobs. Thirty years of using higher education as the false whipping-boy explanation for a major structural realignment of the economy (there aren’t enough engineers! There are too many poets and anthropologists!) while starving higher education in the process.

That’s how “public higher education should be free or nearly so to citizens and residents” became an extreme idea. It isn’t that way naturally: it was made extreme, a parting gift from the boomers and their parents who benefited from the idea back when it wasn’t extreme. It’s as if a thief broke into your house, stole something valuable, and then claimed you shouldn’t have it back because you never could have afforded it in the first place.

Inasmuch as any moderates care to actually engage the proposal on its merits, they have complained that it is regressive. Meaning that free access to public higher education should be means-tested, and the wealthy should have to pay. That sounds reasonable enough, but in fact, this is the tortuous logic that has brutalized liberal democracy for the last three decades.

Before we get to the practicalities of it, at a more philosophical level, what looks like a gesture that targets income inequality in fact sanctifies it as foundational. When you prorate access to public goods, you establish that there are and must always be tiers of citizens–that inequality is fundamental. Platinum-tier citizenship, Gold-tier citizenship, etc. It effectively amends the Declaration of Independence: that all men are created unequal. What is public should be public to all: it is the baseline of equality.

That the wealthy can buy more on top of that is true–but don’t write that into the baseline. The rich can buy more legal representation than the state can provide, they can buy concierge health care on top of a public provision, they can buy an expensive private education. Yes. Figuring out how to keep that from cancelling out equality of opportunity is a difficult, challenging problem. But you do not acknowledge that fact by writing it in at the deepest level of provision.

The wealthy already legitimately pay their fair share in a progressive tax system–if it’s actually used effectively as a way to redistribute excessive wealth and check runaway inequality.

The complaint that “free college” is regressive, moreover, feels jury-rigged to this political moment. The people who raise this argument against Sanders and Warren curiously seem to think this is the one ad hoc case where this concern is important. They are not arguing in favor of universal means-testing in the provision of public goods. Should we start charging wealthy people more to enter national parks? To send their children to public secondary schools? Should their EZ Passes charge them twice as much to drive on an interstate highway? Should they have to pay a fee to use the Library of Congress website? Be charged a fee in order to send a letter to their representative in Congress? Why not? They can afford it, after all. Isn’t it regressive to allow them to generally access public goods on the same basis as everyone else?

The idea that free public college should only be for those who really need it, moreover, is in practical terms the same kind of terrible idea that Democratic moderates have been peculiarly in love with since Johnson’s Great Society programs. When you set out to create elaborate tiers that segregate the deserving poor from the comfortable middle-class and the truly wealthy, you create a system that requires a massive bureaucracy to administer and a process that forces people into petitionary humiliation in order to verify their eligibility. You create byzantine cutoff points that become business opportunities for predatory rentiers. “Ah! I see you earn just $1,000 too much to qualify for free public college, and so will have to pay $5,000 a semester. Why don’t you consider taking on debt to attend my for-profit online school and we’ll spread that out for you? How about you hide that $1,000 in income using my $500/year accounting service? Try using your employer-offered system for tax-deferred payments into a special fund rather than receiving raises for the next four years!” Simplicity isn’t just about a basic idea of citizenship: it is also about efficiency, the very thing that neoliberal policy-makers supposedly revere so greatly and yet will so very often go to great pains to avoid.
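The benefit cliff in that hypothetical pitch can be made explicit with a toy calculation. The cutoff figure here is invented purely for illustration; only the $1,000 overage and the $5,000-a-semester ($10,000-a-year) tuition come from the scenario above, and no real program is being modeled:

```python
# Toy benefit-cliff arithmetic for a hard means-test cutoff.
# CUTOFF is a hypothetical income threshold; the tuition figure echoes
# the scenario's "$5,000 a semester" (two semesters per year).

CUTOFF = 60_000            # hypothetical income cutoff for free tuition
TUITION_PER_YEAR = 10_000  # $5,000 a semester, two semesters

def net_after_tuition(income: int) -> int:
    """Income left after tuition under a hard means-test cutoff."""
    return income if income <= CUTOFF else income - TUITION_PER_YEAR

below = net_after_tuition(CUTOFF)          # qualifies: keeps everything
above = net_after_tuition(CUTOFF + 1_000)  # $1,000 over: pays full tuition

# Earning $1,000 more leaves the family $9,000 worse off on net:
print(below - above)
```

This is the structural opening for the predatory rentiers the paragraph describes: at the cliff, hiding $1,000 of income is worth $10,000, so a $500-a-year accounting service pays for itself many times over.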

Perhaps it is no great surprise that eye-rolling dismissals of supposedly utopian improvidence and hand-waving at proxies who are afraid of extremism are the preferred ways to engage proposals like “free public higher education”. Who would want to undo thirty years of rigging the conversation, after all? Or, for that matter, twenty or so years of close ties between centrist policy-makers and some of the economic interests that have benefitted from the defunding of public higher education.

Posted in Academia, Politics | 2 Comments

Harvest Time on the Whirlwind Farm

To some extent, people turn to omnicompetent forms of conspiracy theory when they cannot believe that anybody could be THAT incompetent.

People who are always and invariably against conspiracy theories tend to be that way first and foremost because omnicompetent conspiracy seems both impossibly improbable and because it is a futile theory (you can’t oppose omnicompetence by definition; in fact, if omnicompetence is real, then being allowed to voice the conspiracy theory is part of the conspiracy).

In some cases, the two kinds of theories of improbability and futility cross. It is truly hard to believe that four years after a deeply contested election that featured credible accusations about malign interference (including hacking attempts) in election security and four years after an election revealed bitter divides within the Democratic Party that threaten the stability of a coalition required to defeat a man who is endangering democracy itself, the Democratic Party would turn to a small tech company called Shadow, a company with no real track record, to hastily build an app of questionable usefulness even IF it actually worked as planned, to be used in the first primary elections of the 2020 campaign–and given reports that the app wasn’t working well and hadn’t been stress tested a month ago, would fail to build an alternative procedure should the app fail. The chain of miscalculations involved does seem almost impossible to believe in.

And yet so too is believing that any candidate actually running for the nomination could be operating so omnicompetently as a Manchurian candidate as to make that chain of miscalculations occur as part of a plan, or that the DNC leadership has suddenly achieved this kind of omnicompetence after decades of evident managerial fecklessness and gang-that-couldn’t-shoot-straight mistakes. I mean, if you’re really omnicompetent conspirators, aim high–just steal the election seamlessly, plant evidence to discredit whomever you want discredited, etc.

Somewhere in that intersection there may be improvident forms of hidden coordination, self-interested incompetence that might be called cronyism, perfectly innocent misplaced trust in technology, and a kind of structured helplessness that the tech industry has sought to produce in all of us. That may deserve to be called something other than conspiracy or incompetence. It is, unfortunately, gravely consequential, in a way that demands both that heads should roll and that reforms should be made, some of them far-reaching and substantial.


Ilana Gershon’s excellent book Down and Out in the New Economy offers an analysis of the “gig economy” that is both subtle in its grasp of our historical moment and hugely consequential in its implications. Among other things, it gives me an unexpected window into understanding what may have happened in Iowa as the caucus officials turned to the app developed by Shadow.

When faculty talk about the damage done to our institutions and our profession by the rise of contingent labor in academia, we sometimes overlook that we are here, as in many things, only one piece in a larger puzzle. Two things have happened to most of the major professional workplaces that were a centerpiece of mid-20th Century life around the world (the university, the secondary school, the hospital, the law firm, the advertising agency, the major corporation, the governmental bureau or civil office). On one hand, governance and authority over the mission and operations of the institution and its employed professionals have been increasingly transferred to a series of dislocated, dispersed administrative organizations devoted to particular kinds of compliance. Some of those are statutory, some are a consequence of institutional membership in associational networks, and some effectively belong to off-site or absentee owners of some of the operations of the institution. The authority of these external organizations over the institution is often projected into the institution via specific professionals on the institutional payroll whose work is largely the maintenance of compliance. The organizational chart shows those individuals working within the institution’s hierarchy and procedures, but in many ways they are equally subject to and solicitous of the separate external organization. The hospital manager who is the primary point of contact with insurance networks, the corporate executive who represents the private equity firm that recently bought the business, the academic dean charged with meeting the demands of an assessment organization, the investment manager who works as much for a hedge fund partner as he or she does for the institutional portfolio, the government official who is charged with managing procurement or who is the liaison to a PAC or other source of extra-governmental influence on policy-making.

At the same time, most of those professional institutions are off-loading much of the labor they once did, and could still plausibly do, out of their own staff and payroll onto outside consultants, facilitators, software developers, contract workers, and so on. In its early crude forms, this was “outsourcing”, the segmentation of the organization into geographically dispersed subsidiaries who could produce some labor very cheaply outside of the US or EU. I think now this wave has moved on to something more dispersed, less transparent, and more punctuated and uneven. This is the classic “gig economy” that Gershon has set out to investigate. From inside the institution, part of the logic of the gig is financial efficiency (the shedding of staff off the payroll), but I think it is more than that. I think it is also the management of risk, often by lawyers or legal professionals: necessary operations that entail risk if done incompetently or imprecisely are protected from claims of liability to some extent if they are devolved onto individuals and firms whose inner workings are private and to whom legal responsibility can possibly be redirected, along with less financially tangible forms of blame.

Gershon’s analysis is that as people transition into the gig economy, their relation with employing institutions changes. They are no longer offering their distinctive mix of intrinsic skills and human insight to the employer via a long-term contract. The gig workers are, Gershon observes, increasingly narrating their economic relations as if the gig worker were a business engaged in a business deal with another business. The worker is no longer identifying with the purposes and mission of the institution while employed by it, but is instead always thinking about the interests of their own “gig” brand, which align for as long (or short) as they may with the other business that pays them for services rendered.

There are ways in which this is neither bad nor good but simply different. But it has implications for the outcomes that institutions seek (or claim to seek), whether that is educating the next generation, healing the sick or injured, or delivering profit to shareholders. As an institution increasingly employs people who are essentially the intrusion of some other institution into its framework (the compliance professionals) and expels functions and tasks to be served by networks of consultants and subcontractors, it loses most or all control over the outcomes of its operations. It is subject to extra-institutional dictate in a way it is almost helpless to resist–“the call is coming from inside the house!”–while it has protected itself from both the expense and the risk of directly supervising (or being shaped by) people who carry out many functions that its mission or purpose require.

In fact, many institutions end up pairing another class of internal worker with the intruding compliance managers: the contact point for networks of consultants, facilitators and subcontractors. Much like the compliance worker, this person is not responsible to the institution. They’re responsible to the network that they manage. This has huge implications. It is not in the interest of the “internal gig manager” to put the institution’s needs or functions first, not least because the internal gig manager knows that tomorrow they could be back out in the network again, and it is the network that matters, the network that secures the next gigs. But more potently, if the internal gig manager wants the gig to continue, they actually have to actively degrade the capacity of their employing institution to carry out some functions. Because that’s what makes consultants and subcontractors necessary: the institution has failed, is failing, will fail to do this work on its own–it lacks some form of expensive expertise or some form of knowledge about the nature of the labor function that it formerly handled on its own.


And here we return to the catastrophe of the Iowa caucuses. Whatever the specifics of the way in which Shadow was employed to build an app that was designed to report the results of the caucuses–specifics that hopefully we will learn more about in the days and weeks to come–the way in which both the national and state Democratic Party and an associated electoral administration have lost control of a vital function that once resided entirely within their organizational purview is familiar and haunting. And here I am no longer in equanimity about the implications: this is an actively commissioned outcome deriving from a web of systemic shifts in political, economic and social life over the last forty years. Call it neoliberalism, or find a better name. Argue it’s three things, not one thing. Argue it’s intentional or incidental, interested or unexpected. That’s all fine. One thing it is not is good.

All over this country (and the world) for the last twenty years, tech companies have worked with increasing intensity and sometimes desperation to actively produce in other institutions a state of learned, professed helplessness, a proposition that everything they do must be transformed (or “disrupted”) by tech in the name of some underspecified (or wholly unspecified) better end. Along the way, tech companies and the managerial clouds that swirl around them like courtiers have appropriated languages of fairness, of equity, of objectivity, of efficiency, of empowerment and attached them to cycles of tech adoption and to endless, vague ideas about process and ‘best practice’. If you understand tech as being more than just an app or a digital tool or a computer, you can even see that some of these processes and adoptions are of rules, procedures, codes that are themselves a kind of organizational technology.

And it is the change in institutions overall that makes this ubiquity possible while amplifying the disastrous forms and modes of helplessness and surrender that come with that ubiquity. The tech to worry about here is not really first or only the big companies we all love to hate (Google, Facebook, Apple, Microsoft). It is the tech of the gig economy: the small firms (who are often using, in ways acknowledged and obscured, the product of the big companies). This is the tech that we subcontract for and assemble. What it does and how it is put together is a black box–a Shadow indeed–and that is often part of its value, as Cathy O’Neil observes in her book Weapons of Math Destruction. Bias, unfairness–or a miscounting of electoral outcomes–that happens algorithmically in a product developed by a small firm using the proprietary technologies of three big firms is protected by multiple layers of secrecy and obscuration, even from the subcontractors who delivered the product. All of us in institutions hire the consultants and facilitators and subcontractors because they’re former students, former associates, former (or present) parts of our gig networks. As we all become gig workers, we all think about the gig, not the mission or the purpose.

If that means a food company loses the ability to know why the romaine lettuce it buys is frequently contaminated with E. coli, that’s bad for its customers and likely bad for the company. If that means all food companies sell a product that is composed of ten different layers of subcontracting, that’s bad for everyone who eats commodified food. The compliance officers inside the company aren’t truly protecting the public interest–they’re hidden inside the institution and yet not answerable to it. The gig contact points inside the company aren’t really responsible to the company, and neither are the contractors they’ve hired. Nobody’s really responsible. Maybe some individual will be unlucky enough to be identified in a viral video and hashtagged into temporary oblivion, but the structures live on.

Iowa is all of this made truly and horrifyingly manifest. At the beginning of a national election that many citizens plainly feel is the most important election in their lifetime–and possibly one of the three or four most important in the history of this nation-state–a party organization lost control of one of its most important functions. It will be tempting to say that this must be a cunning, purposeful, self-interested conspiracy by a few, or a punishable kind of professional incompetence that was contingent, i.e., that could have been avoided. I strongly suspect instead that the Shadow we will uncover has fallen on us all, that all of us are involved in forms of labor towards valuable, important ends that our institutions have lost control over, and that none of us know quite how to walk back into the light of sovereignty and authority over the missions we value, the purposes we are called to, the responsibilities we revere.

Posted in Academia, Politics | Comments Off on Harvest Time on the Whirlwind Farm

Dershowitz Matters (Unfortunately)

The terrible thing about what has happened in the Senate is this: the intellectual and institutional infrastructure of legal and political systems can confer a kind of revenant legitimacy even on claims and verdicts that destroy those systems. You can murder a democracy with democratic institutions. The Nazi Party did not come to power in a coup; it came to power in an election. When the Nazi Party finally did suspend German democracy, it did so using the institutional language of the state that the Nazis were destroying.

So while we can all see the plain insanity of Alan Dershowitz’ proposition that if a President of the United States believes his own election is in the national interest, no action that he takes to secure his own election is therefore impeachable, it still matters that it was said on the floor of the Senate with the blessing of the current Administration and that it will be incorporated into the logic of the foregone acquittal that the Republican Senators will shortly be delivering.

Institutions are a kind of railroad track into the near-term future, laid down in haste as the train approaches. If the track is laid down pointing to a yawning canyon with no bridge over it, the engineers of the train do not have the option to swerve or evade the abyss ahead.

Dershowitz–and thus the GOP–are now fully committed to single-party rule, to an assertion that if they believe themselves the only legitimate rulers of the United States then it is so and they need fear no law nor principle as a barrier to their continued hold on power. As Dershowitz renders it, Trump would be doing nothing against his oath of office if he ordered a Democratic candidate investigated by the Justice Department or detained by the FBI for no other reason than to damage their candidacy. He could legitimately order federal money withheld from the home state of a political rival or for that matter legitimately impede or interfere with elections in jurisdictions likely to vote for any opponent. It’s not hard to think of quite real actions that will shortly be given political blessing (with the likely blessing of the Roberts Court to follow), because many of them are employed in authoritarian states that hold sham elections–tampering with the vote count, intimidation of opposition voters, arrests or detention of opposition candidates, refusal of legal supervision or eyewitnessing of election procedures, and so on. Dershowitz has sanctified all of that.

It doesn’t matter if 99% of the law professors in the nation think him wrong and craven. If his views stand as the prologue to an acquittal, they will be in some sense laying down the tracks that the train cannot help but follow.

Posted in Politics | Comments Off on Dershowitz Matters (Unfortunately)

No One Is Prepared

In the last two weeks, it’s been hard to miss the sudden turn among centrist punditry–the kind of people who presume to be kingmakers and political tastemakers, the viziers of the voters–towards trying to knock Bernie Sanders down before he surges his way into the nomination.

One of the most common themes in this turn (for example, in David Frum’s well-argued essay in the Atlantic) is that Sanders has yet to be tested politically, that no one has really attacked him yet in the way that the Republicans will attack him, that he is not electable because he has so many skeletons of some kind in his closet.

I think there are many assertions in these kinds of arguments that are questionable. For the most part, they seem to me to be a kind of zombie discourse rising out of the graveyard of the Cold War: that all a liberal or centrist politician once needed to do to undercut a more progressive rival was to hint darkly that the rival had ties to Communism–that they’d met approvingly with Fidel Castro or that they had said something nice once about the work of Marx or that they’d shown up to a meeting of socialists. Frequently this worked in concert with tying the candidate to some form of non-Marxist radical militancy–they’d been in a march with the Black Panthers! etc.–or to the antiwar movement (Hanoi Jane!).

It feels like a pretty OK, Boomer move now. I don’t think a lot of voters even recognize the allusive suggestions or have any real reflexive responses to the implications and nudges involved. Or even care much if someone comes right out and says, “Dude’s a Communist!” In part, as Frum notes (and most other hand-wringing centrists do not), even being a Communist affirms the degree to which Sanders seems to be about something other than just a career trajectory for a well-trained technocrat, that he is committed to an actual cause.

Let me not go too far into a tit-for-tat argument on these points, however. Let us just presume for the sake of argument that the Republican attack machine is going to go at Sanders full throttle and that he has yet to face any of the force of that assault, and that until he is thus attacked, we can say little about how well he will fare.

The more important observation at this point is that none of the Democrats have faced the full force of the contemporary Republican attack machine and none of them have demonstrated their capacity to survive it. I would argue that if Sanders seems unready, then all of his Democratic rivals are vastly more unready. And that all Democrats are now equally vulnerable to the way that the Republican Party now conducts itself politically, because the Republican Party no longer has any constraints on its behavior. Neither accuracy nor probity matters any longer. Legality is unimportant to a lawless party. The preservation of democratic norms and structures doesn’t matter to a party that no longer believes that the opposition has a right to govern if elected. The contemporary GOP and its base believe that by definition, only they have political legitimacy.

The Democrats are still preparing to run in an election, while their opponents are preparing to go to war.

To get some sense of what is now involved, look at what’s happening to John Bolton. He has gone from someone who was appointed by the President and trusted to provide counsel and make policy, who was a darling of conservatives and a frequent visitor to Fox News, to “Book Deal Bolton”, trailed by viral falsehoods about his financial motivations, his conspiratorial loyalty to the “deep state”, and so on. That the President chose such a person (as he did so many others who have now left his service) casts no negative light on him among his followers, nor does their former estimation of Bolton have any remnant force in their minds. Conservatives who loved and admired John Bolton two or four or eight years ago now profess to have always hated him and regarded him as a dangerous, even treasonous, figure.

Look at Biden. Twenty years ago, if you were the President and you knew that a potential rival in the upcoming election had a ne’er-do-well son who had gotten a job that smelled like nepotism and influence, you would have held on to that until after he was nominated and then found operatives far enough away from you to leak and rumor that into national news coverage. (Of course, twenty years ago, if you’d had a swarm of nepotistic trash trailing at your heels and had been the product of nepotism, you might have thought twice about making it an issue at all.) But this President and his goons decided to go ahead and start a Constitutional crisis just to play out a single rumor.

None of the Democrats have really faced anything like this at a national scale. What is about to happen will make the treatment of Barack Obama and Hillary Clinton look comparatively restrained. Our mass communications are essentially completely porous to malicious falsehoods from a wide variety of interested parties with money–I fully expect that this will be the first election where deepfake images are used in a sustained way, among many other things. Our mainstream media are mostly still easily manipulated or fooled by (or authorially involved in) the circulation of information intended to “change the frame” on candidates and issues. Almost a third of the citizenry are epistemologically barricaded against evidence that contradicts their current political commitments.

People who grew up in earlier political dispensations where “dirty tricks” meant things as quaint as getting Muskie to cry on camera or sending out anonymous circulars alleging that John McCain had a black child out of wedlock think that certain political candidates have certain kinds of vulnerabilities that they have to prepare for tactically and strategically, and that some candidates have too many vulnerabilities based on some actual past conduct and thus are “unelectable”. Wake up. We are in a new era. Against the hate machine that has been forged over the last two decades and now is armed and fully operational, all candidates have all vulnerabilities at all times. And all candidates have the potential to soar over the grinding, indiscriminate brutalization on offer: the US did elect, after all, a black President whose middle name was “Hussein”, something that the same centrist punditry that today says Sanders is unelectable then said made Obama unelectable. Much as they said Trump was unelectable.

There are two ways to cope, I think. One of them is Obama’s: to be above it all. I don’t think any of the current candidates have that available to them. It’s what O’Rourke and Booker, among others, tried.

The alternative is to go to war too. As Frum notes, that takes some of what Sanders has: advocating simple, elemental and dramatic policy ideas. It also takes not putting up with any bullshit, whether it’s media bullshit or the bullshit of the hate machine. It takes authenticity–meaning, something that comes from the candidate directly, not from a bunch of consultants and pollsters. It takes being raw rather than cooked.

If Sanders is untested in his ability to do that well against the ferocity and intensity of what’s coming, his rivals are vastly more untested. Or they’ve already failed: Biden’s meandering, vacillating confusion about what to do in the face of the attack from the White House and his clueless nostalgia for a consensus politics that was jointly murdered by Bill Clinton and Newt Gingrich in the 1990s is a vastly greater sign of unreadiness. Is there any sign that Pete Buttigieg is remotely prepared for the homophobia that will come his way? Or probably more potently, for the absolutely brutal and sick attacks that will come on his education and career trajectory? He can barely cope with relatively polite criticism about that from the left of the Democratic Party. Is Amy Klobuchar ready for open misogyny, for a reprise of the “unlikeability” attacks on Clinton? Warren already dropped the ball on the “Pocahontas” attack; she’s been a bit more deft since on other fronts, but still.

And these are all predictable vectors of attack, in that older political sense of ‘vulnerability’ and electability. But I don’t think any of them are ready for open falsehoods to be circulating straight from the President and elected members of the House and Senate, for the floodgates of hate unrestrained by shame or truth. The only way to be ready is to show that you understand that this is how politics is now, and to show the determination to win in order to save democracy itself. That urgency and intensity of purpose has yet to show itself on the Democratic side, for the most part–and if anyone has shown it, it’s Sanders and Warren, the candidates that the conventional wisdom holds to be most unelectably unready.

Posted in Politics | 5 Comments

A Declaration of Dependence

When in the course of human events, it becomes necessary for one orange-faced man to dissolve the legal restrictions which he finds inconvenient and to assume among the powers of the earth, the permanently dominant station to which the Laws of Nature and of Nature’s God entitles the orange-faced man and his political party, an indecent indifference to the opinions of mankind requires that they should incoherently tweet the causes which impel them to ignore, trifle with or mock the law.

We hold these lies to be self-evident, that billionaires are entitled to pay no taxes and the Russians are good guys who the United States should generally obey, that MAGA-hat wearing white men are endowed by their Creator with certain unalienable Rights, that among these are protection of Mediocrity, Things Being Just the Way They Want Them and A Life Free From Dealing With Brown People, Women and Others They Do Not Like. That to secure these rights, Governments are corrupt among Men, deriving their unjust powers from the inattention of a tweeting scumbag, — That whenever any Form of Government becomes threatening of these ends, it is the Right of the MAGAs and billionaires to ignore it or corrupt it, and to let Government wheeze out its last breaths in some miserable alleyway, undercutting its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Downfall and Misery along with everyone else even though they don’t think it’s going to be a problem for them.

A lack of prudence, indeed, will dictate that Governments long established should be let decay and degenerate for short-term greed and fear; and accordingly all experience hath shewn that mankind are less disposed to suffer when they take the trouble to have good governments, while evils are accumulable by abolishing the governments which have suppressed them. But when a long train of reasonable regulations and social reforms, pursuing invariably the same Object evinces a design to actually make the world better, it is their right, it is their duty, to sabotage and derogate such Government, and to remove all Guards for their future security.

Such has been the patient sufferance of this independent Nation; and such is now the necessity which constrains the MAGA and billionaires to shit all over their former Systems of Government. The history of the first black President is a history of repeated injuries and usurpations due to his highly moral character, generally intelligent approach to governance and centrist moderation, all having in direct object the maintenance of incremental social reform and some degree of constitutional liberty. To mock this, let fake bullshit be submitted to a simultaneously horrified and bemused world.

The former black president has given his Assent to Laws, the most wholesome and necessary for the public good.

He has urged his Governors to pass Laws of immediate and pressing importance, but has also negotiated and offered deals till his Assent should be obtained; and when ignored by do-nothing Republicans, he has mostly accepted the impasse with mild frustration and hope for eventual agreement.

The former black president has refused to pass other Laws for the exclusive accommodation of white men and billionaires, unless those people would accept the right of Representation in the Legislature of everyone else, a right inestimable to them because it is common to tyrants and white supremacists only.

The former black president has called together legislative bodies at places normal and close to the depository of their Public Records, for the sole purpose of reluctantly accepting their lack of compliance with his recommendations and plans.

In every stage of these Oppressions We have Petitioned for Redress in the most arrogant and obstructionist terms: Our repeated Petitions have been answered only by repeated patience and attempts to understand what is motivating us to be such assholes. A Prince, whose character is thus marked by every act which may define a Democratic ruler of a constitutional republic, is unfit to be the ruler of MAGA-hat wearing white men and billionaires.

Nor have We been wanting in attention to our Russian brethren. We have welcomed from time to time attempts by their dictator to extend an unwarrantable jurisdiction over us and our elections. We have reminded them of the circumstances of our desire to build hotels there and of moving on her like a bitch. We have appealed to their kindred greed and lack of interest in freedom or human rights, and we have conjured them by the ties of our common intolerances to totally encourage these usurpations, which would inevitably strengthen our connections and correspondence. They have been totally delighted to hear the voice of injustice and of consanguinity, partly because they used a bunch of trolling operations to make it speak out. We must, therefore, acquiesce in the necessity, which announces our Subjugation, and hold them, as we hold other countries with dictators, Totally Great Friends With Whom We Have Perfect Conversations, and Hereby Abandon All That Shit About Having Our Own Country with a Constitution.

We hereby unseriously and incoherently tweet and ramble that these United States of America are basically over and Committed to Doing Whatever Putin Wants and Hey Whatever Other Dictators Ask For During Phone Calls.

Posted in Politics | 3 Comments

The Concern Troll In Everyone

What we commonly call “concern trolling” in online discussion has far deeper rhetorical roots in the public sphere. In many ways, it’s a style that was honed to near-perfection by centrist liberal intellectuals and academics in the 1960s, the sort that were brilliantly vivisected by Garry Wills in Nixon Agonistes. These were the men and women (mostly men) who were in their own view outside of and beyond ideology. They might favor a particular policy or course of action, but most of them claimed to be making a series of serially independent reasoned judgments, taking each issue on its merits, according to its facts. This was not just a personal preference. They argued that this approach was the essence of academic professionalism, of expert participation in public debate, and even the only right way to be a proper citizen in democratic and community deliberation. Journalists (in the U.S., not so much elsewhere) also commonly adopted this basic posture, that they were obliged to have no priors or assumptions, to treat everything they covered in a neutral, dispassionate manner that deferred to the facts of a given event or issue.

Intellectuals, scholars, experts, pundits or journalists who had strong or distinctive points of view were either bracketed off as a splinter movement (“New Journalism”) or disparaged as ideologues who approached issues and problems with a prior politics that led to a selective, biased understanding of the issues at hand and the facts of the matter. Intellectual and cultural historians, literary scholars and others who study the history of public debate in the United States with a long view are aware of just how different the resulting structure of public discourse in American life was after 1960 from the period between 1880 and 1950. It really was a shift: in the prior era of high modernism, intellectuals and experts didn’t hesitate to advocate policies and programs directly, for the most part–for good and ill.

Despite the supposed rejection of “objectivity” as a goal or evaluative marker in the social sciences, many experts and scholars have held on to the performative affect of the no-ideology intellectual continuously since the 1970s. In public writing, that increasingly meant that intellectuals and pundits only advocated ideas and proposals that came from someone else, from research or data or pilot programs etc. that resulted from the direct agency of some other expert, some other civic or institutional leader. Somewhere else and someone else. The pundit became a window to some distant light, transparent to its illumination but having no direct responsibility for incandescence. And increasingly, the light was directed towards those the pundit considered in darkness, in a gesture of Olympian magnanimity. “Were I you,” he (almost always a he, almost always white) says, “I would look seriously at this finding, this study, this other expert, this situation. For were I you, I would act differently than you do, once I saw this finding, this study, this other expertise. Indeed, if you only will live in the light I cast in your direction, you might in fact be lucky enough to be me. That’s what I would be, were I you: I’d be me. Unbiased, generous, unhampered by ideology, just making up my own mind about everything rationally and without any priors right until the moment I come across it.”

Once upon a time, that noxious, phony performance was substantially confined to the op-ed pages of the New York Times and other major dailies, to the televisual punditry, to a small sector of public-facing academic social science, to a very particular subset of civic organizations. Bit by bit, however, it slipped into the wild, and now it infects our everyday public discourse across social media, public culture, civic institutions and conventional mainstream journalism.

In practice? Some examples of what this dissemination of deferral means:

1) Most of the public conversation between educated elites about electoral preferences studiously avoids the responsibility of actually having electoral preferences of one’s own that arise out of one’s own values and commitments. Instead, arguments about various candidates and issues are deferred onto some other “them” who have actual preferences and who need to be appeased, mobilized or enlisted. So, for example, few people in these public conversations directly advocate for Joe Biden’s candidacy because they really like his positions or his specific qualities as a leader, but instead argue that Biden must be favored because he is the favored candidate of social groups who are not present in the conversation. Their (alleged) reasons for favoring him cannot be debated or discussed, only described, because those who hold them are not there to speak for them, and even if they were, they are presumed to be unwilling to discuss their reasoning.

2) Much of the leadership within civic and academic institutions and often businesses as well advocates for particular policies or changes not because the leadership directly believe in or support such policies, but because the policies are “best practices” or “shared norms” that originated somewhere else. There is a quality of immaculate conception in these kinds of explanations, in fact: the policies adopted often have no specific place of birth and no initial author, but seem instead to have been adopted everywhere at once but nowhere also, with no sense that they are rivalrous to or critical of the norms they are set to displace.

3) Technocratic politicians and Silicon Valley companies (mostly) cleave to policies and products that have been produced almost through a competitive bidding process: they describe a problem they have identified and that they seek to provide some sort of ameliorating solution for. The advocated solution doesn’t arise out of the political values or philosophy of the leader and their staff: the root value is to solve problems, what Evgeny Morozov calls “solutionism”. The solution is framed as neutral, as containing no values or agenda. (And hence, opposing the solution is held to be a corrupt investment in the continuation of the initial problem, because why else would you stand in the way of fixing a problem?) Once again, the people responsible for implementing or selling solutions are placed a sanitary step away from the conceptualization of and advocacy for a solution.

I think this is all tied to the much more abstract, multivalent erosion of 19th and 20th Century conceptions of publics and citizenship in the direction of the constellation of ideas and practices that we often call “neoliberalism”. The advantages of this deferral of direct responsibility for advocacy are obvious for individuals and institutions. David Brooks or Bret Stephens can throw up their hands and say that they’re not responsible for gross errors of fact or tendentious constructions of argument, because they’re only serving as a messenger for what is said and claimed by others that they believe their readers should know about. Institutions can shield themselves against risk and liability if they are only conforming to or compliant with decisions and practices adopted elsewhere. The failure of solutions can be blamed on the subcontractor that supplied them or simply on the intractability of the problem itself without putting any values or beliefs in danger.

The costs I think are also plain. Chief among them is that this deepens the isolation of elites from a wider civic culture, because all of these moves position elites and their institutions as the chess players on a board populated by other groups, other people, other communities but bearing no responsibility for either setting the rules of the game or for the outcomes of play. We can scarcely begin to think successfully about what other worlds are possible when we absent ourselves and the direct power we actually have from the world we actually inhabit.

Posted in Politics | 1 Comment