Oh Not Again He’s Going to Tell Us It’s a Complex System – Easily Distracted
Culture, Politics, Academia and Other Shiny Objects
https://blogs.swarthmore.edu/burke

Masking and the Self-Inflicted Wounds of Expertise
https://blogs.swarthmore.edu/burke/blog/2020/06/27/masking-and-the-self-inflicted-wounds-of-expertise/
Sat, 27 Jun 2020

A broken clock tells the time accurately twice a day, but Donald Trump tells the truth even less often than that. Never on purpose and rarely even by accident. And yet he told an accidental truth recently, one that doesn’t reflect well on him, in saying that some Americans wear masks consistently today because masks have become a symbol of opposition to Trump.

Almost everything that involves the actions of the federal government has been like that since the fall of 2016. What the government does signifies Trump; what it doesn’t do or pointedly refuses to do signifies resistance to his authority. It isn’t instantly true: some policies and actions of the government just continue to signify ordinary operations and the provision of expected services. But the moment Trump becomes even slightly aware of any given policy or action and addresses it even once, the 60-40 divide that now structures two cultural and imaginative sovereignties instantly manifests and the signifiers fall rapidly into Trump’s devouring gravitational pull.

It’s likely true that in any other administration, typical public health discourse about covid-19, including advice on masks, would have been met with some paranoia or resistance, all the more so if masks and the constriction of economic activity were co-identified. It’s also true that Trump is the explosive, catastrophic culmination of thirty years of deliberate Republican subversion of the authority of scientific expertise and the cultivation of the logics of conspiracy theory. Some degree of partisan division in the reaction to various suggestions and orders would have been inevitable even were the President a competent, reasonable adult who believed that the Presidency must at least rhetorically and conceptually be devoted to the leadership of the entire body politic, not an inward-turning constituency of far-right Americans trying to preserve their racial and cultural privileges. No matter what, we would have had a surplus of the sort of fragility, weakness, incoherence and malice that has been on display in public hearings in Florida, California and elsewhere over masking policies. But without Trump, I think that would have been more clearly a fringe sentiment with relatively little weight on the body politic. With him, it is a crushing burden.

But if we hope to eventually emerge from this catastrophic meltdown into a better, more democratic, more just, and more commonsensical nation–perhaps even just into a country that possesses a much larger supply of the adult maturity required to just wear a mask for a year or so in order to safeguard both our own personal health and the health of our fellow citizens–then we have other kinds of work to do as well. One of the major tasks is that experts and educated professionals have got to learn to give up some of their own bad habits. If Republicans have worked to sabotage science and expertise in order to protect their own interests from regulation or constraint, then experts have frequently amplified those ill-meant efforts through their own ineptitude, their own attraction to quack social science and wariness about democratic openness.

—————

This is an old theme for me at this blog, but the masking debacle provides a fresh example of how deep-seated the problem really is.

The last fifteen years have been replete with examples of how many common assumptions we make about medical therapies, sociological and economic phenomena, drivers of psychological behavior and experience and much else besides rest on very thin foundations of basic research and on early, much-cited work that turns out to be a mixture of conjecture and the manipulation of data. We know much less than we often suppose, and we tend to find that out at very inopportune moments.

In the present moment, for example, it turns out that we perhaps know much less than we assumed about just how long a virus like covid-19 can remain infectious in human respiration, how far it travels, and precisely how much wearing a fabric mask with some form of non-woven filter inside might protect a person wearing it properly, across a variety of atmospheric conditions (indoors or outdoors; strong or little air movement; rapid athletic respiration or ordinary at-rest respiration), and so on. There are very legitimate reasons why these are not things we can study well right now in the middle of this situation, and why they are a hard set of variables to measure accurately even when the situation is not urgent.

And yet. It has seemed likely from the very first news of a novel coronavirus spreading rapidly in China that wearing a mask, even a simple fabric or surgical mask, might help slow the spread of the virus and offer some form of protection to the wearer, however humbly or partially so.

The early response of various offices within the US government likely will receive considerable critical attention for the next decade and beyond. Not only did the unspeakably self-centered political imperatives of the Trump Administration intervene at a very early juncture, but there also seem to have been some basic breakdowns in competence and leadership at the CDC and elsewhere.

The question of masks, however, was bungled in a more complicated and diffuse way. It’s now clear that most public health officials and medical experts knew full well from the very first news about covid-19 that even surgical or fabric masks, but especially N95 or other rated masks, would provide some measure of personal and collective protection for any wearer. And yet many voices stressed until late March 2020 that masks weren’t useful to the general public, that social isolation was the only effective counter-measure, that no one but medical workers or people in close contact with covid-19 patients should be wearing masks. Why not tell people to wear masks from the outset?

The answer seems to have only a little to do with uncertainty about the empirical evidence for mask-wearing. What really seems to have driven the reluctance to recommend mask-wearing are three basic propositions:

1) That if the benefits of mask-wearing were acknowledged, this would spur a massive amount of panic buying and hoarding of rated masks, which were after all a commonly available commodity, sold less for protection against infectious disease and more for protection against inhaling minute particulate matter in woodworking, drywalling and other projects.

2) That the general public would not know how to properly wear any mask, whether a simple fabric mask with non-woven filters or a rated mask, in order to ensure actual protection from infection–that the masks only conferred meaningful protection if fitted correctly, if not touched constantly by hands during a period of exposure, if the mask-wearer did not touch their face otherwise, if rigorous hand-washing preceded and followed mask-wearing, and if some form of protective eyewear were also worn–and would hence not receive the expected protection from even non-rated masks.

3) That wearing masks might give people a false sense of security and prompt them to circumvent the forms of social distancing and isolation that were (accurately) seen as more critical and impactful in mitigating the damage of the pandemic.

—————-

There are two basic problems with the line of reasoning embedded in those propositions. The first is that they reflect how profoundly unwilling educated professionals are to speak to democratic publics in a way that notionally imagines them as capable of understanding more complicated procedures and more complicated facts.

I know what you are saying: well, have you watched the YouTube videos of people testifying angrily about masks, in which they appear to be barely capable of understanding how to tie their own shoes, let alone how to deal with a public health emergency like this pandemic? Yes, and yes, those folks are appalling and yes they seem to represent a larger group of Americans.

The problem in part is that their behavior and the public culture of educated professionals have evolved in relational tandem with one another–and have come to be caught up in the expression and enforcement of social stratification. Because we expect people to be irrational and incapable of understanding, we offer partial explanations, exaggerations and half-true representations of research findings and recommended procedures and justify doing so on the grounds that it is urgent to get the outcomes we need to prevent some greater harm–to get people to behave properly, to get funds allocated, to get policies enacted. But it is not a secret that we are doing so. The news gets out that we amplified early reports of famine in order to get the aid allocated in time to make a difference, that we amplified the impact of one variable in the causation of a complicated social problem because it’s the only one we can meaningfully act upon, and so on. The people we’re trying to nudge or change or move know they’re being nudged. They know it from our affect, they know it from their own fractured understandings of the information flowing around them, they know it because it’s a habit with a long history. So they amplify their resistance in turn, even before the Republicans manipulate them or Donald Trump malevolently encourages them.

And in turn what this does is also commit experts to an increasingly unreal or inaccurate understanding of social outcomes in a way that corrodes their own expertise. The experts start to be vulnerable to manipulation by other experts who provide convenient justifying explanations for nudging or manipulation. “Make the plates half as big and it’s like magic! People eat less, obesity falls, the republic is saved! You don’t have to actually talk to people any more or try to understand them in complex terms!” Most of that thinking rests on junk modelling and Malcolm Gladwell-level simplifications once you peel it back and take a close look.

Even when the causes of behavior are in some sense simple, so many experts look away if it turns out the causes are in the wrong domain or are something they themselves are ideologically forbidden to speak to with any clarity. Take for example the fear of hoarding in the early reluctance to clearly recommend mask usage. It’s true that hoarding was a problem and it’s clear it could have been far worse still had the general public come to believe that owning a package of N95 masks was as important as stocking up on toilet paper or making a sourdough starter.

But what’s the problem there? It’s not in the least bit irrational under our present capitalist dispensation to buy up as much as you can of a commodity that you suspect is about to gain dramatically in value. Buy low, sell high is a commandment under capitalism. In our present crisis, we’ve all felt outrage at the men who fill storage units full of hand sanitizer and PPE and called them hoarders. But they’re just the down-market proles that the nightly news feels comfortable mocking. There’s been just as much up-market hoarding, but there we call it business. The President of the United States has helped fill the troughs for various hogfests with his promotion of hydroxychloroquine and so on, but beyond that, organized profiteering has unfolded on more spectacular and yet sanctified scales.

At whatever scales, if the problem is hoarding rather than altruism in a public health crisis, if the problem is someone pursuing profit instead of saving lives, then name the problem for what it is: capitalism as we know it and live it. That’s not ideology or philosophy, it’s plain empirical fact. It’s fine to say that you are facing a problem whose cause is utterly beyond your capacity to address and beyond your expertise to understand. It is not fine to avoid doing that in order to launder the problem so that it comes out being something you know how to describe and feel you can do something to affect. In this case that “something” is to offer a half-truth (masks aren’t useful) in the thought that it might impede or slow down a basically rational response that threatens your capacity to act in a crisis.

I keep saying that expertise needs to respect and emulate the basic idea of the Hippocratic Oath, most centrally: first, do no harm. It is less harmful to name a problem for what it is, even when you cannot deal with it as such and your expertise does not really extend to it. It is less harmful to tell democratic publics what you know to the extent that you know it than to try to amplify, exaggerate or truncate what you know because you’re sure (with some justification) that they will not understand the full story if you lay it out. I understand the impulses that drive expert engagements with publics, but those impulses, even with the best of intentions, end up fueling a fire that far more malicious actors have been building for decades.

The Kid With the Hammer
https://blogs.swarthmore.edu/burke/blog/2018/02/27/the-kid-with-the-hammer/
Tue, 27 Feb 2018

A certain kind of application of social science and social science methods continues to be a really basic limit to our shared ability in modern societies to grapple with and potentially resolve serious problems. For more than a century, a certain conception of policy, government and the public sphere has been determined to banish the need for interpretation, for difficult arguments about values, for attention to questions of meaning, in understanding and addressing anything imagined as a “social problem”. This banishment is performed in order to move a social scientistic mode of thinking into place, to use methods and tools that allow singular causes or variables to be given weight in relation to a named social problem and then to be solved in order of their causal magnitude.

Certainly sometimes that analysis is multivariable. It may even occasionally draw upon systems thinking and resist isolating individual variables as something to resolve individually. But what is always left outside the circle are questions of meaning that require interpretation, that require philosophical or value-driven understanding, that can’t be weighed or measured with precision. Which is why in some sense technocratic governance, whether in liberal societies or more authoritarian ones, feels so emotionally hollow, so unpersuasive to many people, so clumsy. It knocks down the variables as they are identified, often causing new problems that were not predicted or anticipated. But it doesn’t understand in any deeper way what it is trying to grapple with.

I’ve suggested in the past that this is an unappreciated aspect of military suicides since 2001, that the actual content of American wars, the specific experiences of American soldiers, might be different from other wars, other experiences, and that difference in meaning, feeling, values might be a sufficient (and certainly necessary) explanation of suicide. But that conversation never floats up to the level of official engagement with the problem, and not merely because to engage it requires an official acknowledgement of moral problems, problems in meaning and values, with the unending wars that began in 2001. It’s because even if military and political leaders might have a willingness to consider it, they don’t have the tools. It’s not in the PowerPoints, in the graphs, in the charts. It’s in the hearts, the feelings, the things spoken and unspoken in the barracks and the bedrooms. It’s in the gap between the sermons and the town meetings on one hand and, on the other, the memories of things done and said on the battlefield. No one has to say anything for that gap to yawn wide for a veteran or a veteran’s family–it is there nevertheless.

Here’s another example: a report on “teen mental health deteriorating”. It’s a classic bit of social scientistic reason. Show the evidence that there is something happening. That’s fine! It’s useful and true. You cannot use interpretation or philosophy to determine that truth. But then, sort the explanations, weigh the variables, identify the most significant culprit. It’s the smartphones! It’s social media!
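
To make that move concrete, here is a minimal sketch in Python of what it looks like in practice. Everything in it is invented for illustration: the variable names, the effect sizes and the data are hypothetical, drawn from no report or real study. The point is only the ritual itself: fit a model, rank the coefficients, crown the winner.

    import numpy as np

    # Invented data: a toy "teen mental health" outcome driven by a few
    # standardized predictors. The analyst's move is to fit a model, rank
    # the coefficients, and crown the largest one as the culprit.
    rng = np.random.default_rng(2)
    n = 10_000
    predictors = {
        "screen_time": rng.normal(size=n),
        "sleep_hours": rng.normal(size=n),
        "family_stress": rng.normal(size=n),
    }
    X = np.column_stack(list(predictors.values()))
    true_betas = np.array([0.40, -0.25, 0.30])  # invented effect sizes
    y = X @ true_betas + rng.normal(scale=1.0, size=n)

    # Ordinary least squares fit.
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

    for name, b in sorted(zip(predictors, beta_hat), key=lambda t: -abs(t[1])):
        print(f"{name:>14}: {b:+.3f}")
    # screen_time ranks first, and the press release writes itself. What no
    # coefficient can carry is what the screen time *means* to the teenager:
    # the content, the relationships, the interpretation.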

Even this is plausible enough and not without its uses. But the smartphone here is treated as causal in and of itself, with some hand-waving at social psychology and cognitive science. Something about screen time and sociality, about what we’re evolved to do and about what we do when our evolution drives us towards too much of something. What’s left out is the hermeneutics of social media, the meaning of what we say on it and in it. Because that’s too hard to understand, to package and graph, to prescribe and make policy about.

And yet, I think that’s a big part of what’s going on. It is not that we can say things to each other, so many others, so easily and so constantly. It is the content and meaning of what we say. Consider the structures of feeling that follow from reading a stranger with no standing in your own life pronouncing authoritatively, in the genre of a social-justice-oriented “explainer”, that you are commanded to do something or feel something, compared to a person with great standing in your own life providing delicately threaded advice about a recent experience that you’ve had. Those are hugely divergent emotional and social experiences; they produce different loops and architectures of sentiment. Reading people who hate you, threaten you, express a false intimacy with you, who decide to amplify or redirect something you’ve said? Those experiences have an impact on a reader (and on the capacity to speak) that rests on how their content (and authors) have meaning to the reader, often in minutely divergent and rapidly shifting ways.

We blunder not in our diagnosis of a problem (teen mental health is more fragile) or even in roughly understanding an important cause. We blunder in our proposed solution: take away the smartphones! (Or restrict their use.) Because that shows how little we understand of what exactly is making people feel that their online sociality is a source of vulnerability and fragility and yet precious and important all the same. It’s not the device, it’s the content. Or in a more well-known formulation, not the medium but the message. That requires semantic understanding, it requires literary interpretation, it requires history and ethnography, to understand and engage. And perhaps change–but that takes also a different set of instruments for coordinating shared or collective action than the conventional apparatus of government and policy.

Home to Roost
https://blogs.swarthmore.edu/burke/blog/2017/04/17/home-to-roost/
Mon, 17 Apr 2017

Formal argument in the classic style has real limits. Sometimes when we try to rule some sentiment or response in an argument or dialogue as “out of bounds” by classing it as a logical fallacy or as some other form of argumentative sin, we box out some important kinds of truth. Not all contentious discussion between two or more people is an exchange of if-then statements that draw upon bodies of standard empirical evidence. Sometimes, for example, it’s actually important to talk about matters marked off-limits by formalists as ad hominem: there are plenty of real-world moments where the motivations of the person you’re arguing with matter a great deal in terms of deciding whether the argument is worth having and whether it’s worth the labor time or emotional effort to assess what’s been said.

Equally, there is a sort of casual hand-waving manner of dismissing something that’s been said as an invalid “slippery slope argument” as if any argument that says, “A recent event might have long-term cumulative consequences that are more severe” is always invalid, always lacking in evidence. Typically the hand-waver says, “Come, come, this event is a minor thing, where’s the evidence that it will lead to something worse, that’s a fallacy because you can’t prove that it will.”

I find this especially frustrating as a historian, because often what I’m doing is comparing something in the present to a wide number of examples of change over time in the past. And in many cases, people in the past who have noted the incremental or cumulative dangers of an event or trend and been correct have had to endure finger-wagging galore from mainstream pundits who try to stay deeply buried in the vaults of consensus. When someone says, “Eventually this will undermine the legitimacy of something important”, that’s a slippery-slope argument of a kind, but it’s a completely legitimate one. Eventually it will. Now it has.

For almost the entire lifespan of this now more-than-a-decade-old blog, one of the things I’ve been warning about is the danger posed by a failing sense of connection between citizens and the formal political institutions of many nation-states, including the United States–and that one of the foremost dangers would be that a kind of populist anger that might be potentially indeterminate or plastic in its ideological loyalties would be captured by reactionary nationalism. Well, here we are: the slide down that slope is nearly complete. One of the reasons I’m not sure what to blog about any longer is that I don’t think there’s any way back up that slope. There are no do-overs. I don’t know what to do next, nor do I have any kind of clear insight about what may come of the moment we’re in.

The one thing I do know is that we cannot form anything like a coherent political or intellectual response if we refuse to understand how we got to this moment, and how the history of the present looks to the people who have registered their alienation from and unhappiness with conventional political elites and their favored institutions in a series of votes over the last five years in the United Kingdom, in Colombia, in Austria, in the United States, in India, in Turkey and elsewhere, including in the imminent French elections. Even when we are intensely critical of what they’ve done, and even when we say with complete accuracy that one of the major motivations for what they’ve done is deep-seated racism, xenophobia or other form of desire to discriminate against a class or group of their fellow citizens, we still have to see when and how some of what they think makes a kind of sense–and where people tried to warn long ago that if things kept going as they were going, the eventual consequence might be an indiscriminate feeling of popular cynicism or despair, a kind of blanket dismissal of the powers that be and an embrace of a kind of flat form of “fake news”.

Some examples.

First, let’s take the deranged fake stories about a pizza restaurant in Washington DC being a center of sex trafficking. What makes it possible to believe in obvious nonsense about this particular establishment? In short, this: that the last fifty years of global cultural life has revealed that public innocence and virtue are not infrequently a mask for sexual predation by powerful men. Bill Cosby. Jimmy Savile. Numerous Catholic priests. On and on the list goes. Add to that the fact that one form of feminist critique of Freud has long since been validated: that what Freud classed as hysteria or imagination was in many cases straightforward testimony by women about what went on within domestic life as well as within the workplace lives of women. Add to that the other sins that we now know economic and political power have concealed and forgiven: financial misdoings. Murder. Violence. We may argue about how much, how often, how many. We may argue about typicality and aberration. But whether you’re working at it from memorable anecdotal testimony or systematic inquiry, it’s easy to see how people who came to adulthood in the 1950s and 1960s all over the world might feel as if we live on after the fall, even if they know in their hearts that it was always thus. Just as we fear crime far more than we ought to, we may overestimate dramatically how much corruption is hidden behind a facade of innocence. We should understand why it is easy to believe that anybody powerful, especially any powerful man, might be engaged in sexual misconduct. Think of how many male celebrities and political figures marketed as dedicated to “family values” have turned out to be serial philanderers. Cultural conservatives sometimes try to blame this series of revelations on the permissiveness of post-1970 popular culture, but the problem is with the gap between what people pretend to be doing and what they are doing–and it is the kind of gap that readily appears in the rear-view mirror of the past once you see it clearly in the present, as a persistent consequence of male power. The slippery slope here is this: that at some point, people come to accept that this is what all powerful men do, and that any powerful man–or perhaps even powerful woman–who professes innocence is lying. All accusations sound credible, all power comes pre-accused, because at some point, all the Cosbys and teachers at Choate Rosemary Hall and Catholic priests have made it plausible to see rape, assault, molestation everywhere. And by making all of that into that kind of banality, we make it harder to accuse any given individual, like our current President, of some distinctively awful behavior, even though he’s plainly guilty of that. We have to reckon with where we’re at. There’s no way out of where we are without some change in the entanglement of gender, power and sex. Yes, of course it doesn’t mean that every accusation is by definition true, but we should understand why any accusation can make a kind of sense, no matter what other ideological overtones come along with it.

Second, let’s talk about wiretapping. Again, mainstream punditry complains of how President Trump accuses the Obama White House of having had him tapped, and they ask: where’s the evidence? And they’re right: the evidence is laughably absent. What they don’t reckon with is that once again, we’re on the bottom of a long-since-slid slope. How many times did Americans and other citizens in other countries have to warn of the consequences of ubiquitous surveillance by intelligence services in terms of the faith and trust that democratic citizens might put in their institutions–and in the degree to which those citizens might believe their own privacy to be safely respected? With each revelation, with each disclosure, with each accusation, sensible liberals and conservatives alike have insisted that this case was necessary, that that practice was prudent, that this example was a minor misstep or judgmental error. That the world is a dangerous place. That the safeguards were in place: secret courts, hidden judges, prudent spies, classified oversight. That citizens just had to trust in the prerogatives of the executive branch, or the prudence of the legislators, or the professionalism of the generals and spies. And so many times that trust has been breached: we have heard, many years later, that surveillance that was crudely political was approved, that signals were intercepted without a care in the world for restraint or rights, and that what intelligence was gathered was ignored, distorted or misused. So are we surprised that today the current occupant of the White House can indulge in bad conspiracy theory and evidence-less speculation and strike a chord with some listeners? We shouldn’t be surprised–and we should recognize that this is what happens when you misuse surveillance decade after decade.

I could go on. Corruption: despite a brief spasm of reform after Nixon, pretty soon we were back to numerous elected officials who thought little of lying and covering up, or saying one thing while grossly doing another behind closed doors. Crony capitalism–having one law for the rich and another for the poor–all the current material that Trump likes to preach to his favored audiences. People were warned that if something didn’t change, if some acts weren’t cleaned up, if folks didn’t think about what happens when mistrust grows to an epidemic, if there wasn’t some urgency about a more transparent and honest government, then the public would grow accustomed to it all, would come to believe in the ubiquity of those sins. They would stop listening to cries of wolf, because they would falsely believe all the world to be a world of wolves. Some of what Trump throws at the wall sticks because there’s a truth to it, however woefully he may stink of the worst of what he hurls.

Undoing that will take something like a revolution, or at least a cleansing. If we still hope to avoid that being Steve Bannon’s “unravelling of the administrative state”, then it will take something quite the opposite of what Bannon has in mind. It will take a new generation of public officials, political leaders, and prominent citizens who understand that even small ditches will deepen eventually into bottomless pits. Who live up to what they profess, who build something new. So far I see almost no sign that the mainstream of the Democratic Party understands this at all.

Trumpism and Expertise
https://blogs.swarthmore.edu/burke/blog/2016/12/15/trumpism-and-expertise/
Thu, 15 Dec 2016

The conventional wisdom was that the Cold War ended when the Soviet Union fell and its satellite states became independent once again.

I think actually that the Cold War just ended right now in 2016. What is it that has ended? Basically an interstate system built to systematically offload volatility and risk onto Western Europe’s former colonies while reducing uncertainty and volatility in interstate relations within the core, whether that was within Europe, between the West and the East, or between the major economic hubs of the global system. In my own current research, I’m thinking about the way that interstate relations were ritualized and formalized to express this sort of predictability between the major Cold War powers and the new states of independent Africa. I recently heard a fantastic talk by the historian Nikhil Singh that added to my thinking on this point, in which he observed that another part of this infrastructure of relations involved assertions about global collaborations towards modernity and progress, that the new temporality of the world-system stressed the relative simultaneity of modernity between states and within states, that the developing world was only just “behind”, that systems of governance and management were all at once just now modernizing, rather than the indefinitely deferred maybe-someday modernity imagined by the architects of indirect rule in modern European empires.

That’s what is ending now, after a long sickly period of invalidism since 1992 or so. All over the world it’s ending. Some places never got to see that less-risk, less-uncertainty world, because they were always tagged as the sites where proxy war would happen or state failure would be tolerated. By the early 2000s, nowhere seemed to be the site of a managed, controlled form of methodical progress. But the elaborate protocols and hierarchies of the infrastructure of Cold War relationships, with their managerial certainties about the importance of expertise and experience, survived more or less intact past the fall of the Berlin Wall. The world that area studies was meant to service, a world where dominant states had to shepherd their flocks with well-trained men and women who spoke languages, knew histories and cultures, understood the particular protocols for each state, that’s the world that’s grinding to a halt. We are now fully in what Ziauddin Sardar calls a “post-normal” world, shaped by complex feedback loops of causality and outcomes that our traditional modes of management and expertise are ill-prepared to deal with or understand.

—————-

Do you actually need to be an expert to head an executive department of the United States government (or its counterparts)? It is plain that for the last three decades you have not needed to be, in the sense that a lack of direct expert knowledge of your area of responsibility has not by itself meant you would not be appointed or confirmed.

Have the executive departments of the United States government operated better when their top official is a well-trained subject specialist with direct prior experience in that area of administration? I’m not sure that this holds up either. In some cases, I think too much expertise for the Cabinet officer has been a problem, in fact: the policies that get put forward in that circumstance are sometimes too circumscribed, too technocratic, too narrowly conceptualized.

So where is the real domain of expertise? Two places, I think: the undersecretaries who do the real work of leading on particular policies and specific administration, and the “deep state” that executes the will of the appointees below the level of the Cabinet (who in turn are trying to follow the direction of the Cabinet appointees, the President, and to a lesser extent Congress).

I think it’s fair to say that the Administration now taking shape is showing an unprecedented degree of hostility towards the standard post-1945 relationship between expertise and executive administration in this respect. Many of the Cabinet and non-Cabinet heads proposed so far by Trump actively disdain their own department and argue that sources of information and policy insight are better found away from any system of authenticated or trained expertise, regardless of the ideological predisposition of said experts. Given the strength of this view so far, I think we can expect that Trump’s appointees will seek to have all their immediate subordinates align with this overall distaste for the standard markers and sources of expert knowledge.

The “deep state” is another matter. Not only are many civil servants legally protected and standard systems of appointment and seniority insulated from direct political control, many of them also do work where the expertise they possess is opaque to appointed-level authorities but also required by dense interlocking bodies of statute and regulation. Reaching into the worlds where visas are granted, borders are patrolled, inspections are conducted and so on is more than the work of four or eight years. I suspect much of this work, with the requisite expertise required to carry it out, will go significantly unperturbed unless or until it is subject to a strong and persistent directive from the top. (Say, for example, to massively restrict certain kinds of visas or to aggressively deport undocumented residents in new ways, and so on.)

So here’s the question: will an active hostility to expertise in the top three or four layers of executive authority produce bad outcomes at a novel and consistent scale in the coming years?

The answer, I think, is yes, but not all at once, and not as consistently as we might be inclined to presuppose. Let’s start with one of the first issues to arise out of Trump’s approach to government, namely, his lack of interest in diplomatic protocols in calls to heads of state and in receiving his daily intelligence briefing. Here are two cases where he has announced as a matter of policy that he will not be guided in the same way as past chief executives by expert advice. What will come of that?

Why, for example, does a head of state (or his immediate executive underlings) follow the advice of protocol experts and the diplomatic corps in speaking with counterparts? Three reasons, principally. First, as part of that Cold War system of reducing net uncertainty and risk, by making sure that no miscommunication of intent takes place. Second, as part of an overall system of standardization of communication that performs a certain kind of notional equality between states as a marker of progress towards global modernity. Third, as a persuasive strategy, wherein the rhetorical, cultural and political expertise of diplomatic staff allows the head of state to produce favored outcomes through a form of knowledge arbitrage or information asymmetry, wherein the most expertly informed leader most adroitly matches or confounds the agenda of his conversational partner.

1) On uncertainty and risk. Trump has already communicated his view that better deals are made by a negotiator who is unpredictable, and his Cabinet generally seems to believe similarly that the United States should no longer be seen as a reliable, predictable partner with a persistent long-term agenda that favors shared interests and overall stability, but instead as a highly contingent actor who will seek maximum national advantage in all interactions, even if that destabilizes existing agreements and frameworks. He seems to believe this approach is best carried out with a minimum of prior expert knowledge, treating all negotiating partners as similarly pursuing maximum national advantage.

Is he right or wrong about expertise here? Well, first, this is not so much about expertise as it is about philosophy, ethics and morality. It’s a view of human life. But it is also about expertise: it’s a damn fool negotiator who spurns useful information about the person he’s bargaining with, and at least some of that information is not available to intuition, no matter how good the intuition might be. Trump reads the room intuitively in only three ways, though I’ll give him credit for some real skills in this respect: he knows what ramps or riles a crowd up and how to keep adjusting to changes in the crowd’s mood, he knows instinctively how to emasculate or frighten weak men like his primary rivals, and he knows how to bluster when he’s up against someone who isn’t going to back down. He is the equivalent of the poker player Phil Hellmuth. But that style can be beaten, and it can be beaten by someone with more information who also knows that the intuitive negotiator can’t turn his style off when necessary. (I suspect this is why a lot of Trump’s actual deals have been pretty bad for him in their specifics: he can be outplayed by someone who knows the specifics better and understands Trump’s personality well enough to play at him rather than be played. His supposed unpredictability is actually pretty predictable.)

Trump may be right that the desire for stability and risk management, managed by conventional systems of academically-vetted expertise, has made the United States in particular a lumbering colossus that can be exploited, targeted, predicted, and manipulated. Much as I think academic disciplinarity in general often prefers predictability and incrementalism over idiosyncrasy and invention, despite much rhetoric to the contrary. But I suspect he will be wrong that expertise is of little importance to the negotiator, and I know that a more unpredictable and uncertain world is a more dangerous one by far. The plutocrats who make up a significant percentage of his Cabinet should be as scared of that as anyone else: “disruption” has a different meaning when there are no rules or limits on interstate relations and international institutions.

2) On the notional equality of states and the belief in progress. Experts were an important part of how we maintained both visions in the Cold War: the proposition that you had to recognize the equal-but-different character of each nation, its defining cultural practices, ways of thought, and so on, was the only equality that an unequal world could offer. Everybody got their own CIA Factbook listing, every country got its own briefing in the same format, every nation had its own scholarly literature. And every expert could produce an account–even a left-wing or dissenting account–of what progress in each notionally equal national unit might look like. Nations in this sense functioned as proxy individuals in a basically liberal framework; just as each individual was notionally entitled to have their distinctiveness recognized by psychologists, by teachers, by doctors, by civil servants, by colleagues, by law enforcement, so too was each nation attended to.

Do we need that? Well, there are other visions of progress, other possible worlds–and other discourses of equality and justice that do not rest on giving everyone their own seat at the United Nations. Some of those other visions require expertise, perhaps of a kind other than what most of the present infrastructure of expertise stands ready to supply.

The Trump Administration is not gearing up for an opposite vision of progress, however, but for its abandonment. Since the end of the Cold War, most leaders have become sheepish about progress-talk. It’s best saved for bland, vague ceremonial speeches or as part of an outraged denunciation of the enemies of progress, say, following a terrorist attack. The Trump Administration and its counterparts rising around the world aren’t interested in even that much, though I would expect a few muttered gestures of this sort at the usual times to persist.

Do we need progress and a system of notional equality between nations or societies? Hell yeah. Are experts important to it? Yes, but not as important as rethinking some of the vision underpinning progress, which experts have been strikingly bad at doing for the entire post-1945 era. Walt Rostow and his heirs, of varying ideologies, can go ahead and sit down and wait until the infrastructure gets rebuilt. The deep ideas and feelings that can sustain a vision of a better world need attention from ordinary people in their everyday lives, from philosophers and hermits, from novelists and dreamers, from tillers of the soil and computer programmers. What Trump is doing here is not first and foremost about a vulgarian assault on expertise, it’s far more fundamental and disastrous than that.

3) On the need for expertise to achieve known objectives and aims.

Here I think it’s unmistakeable: hostility to expertise is stupid. That’s not hypothetical. You did not have to be an expert on the Middle East to know that the American invasion of Iraq was a dumb idea: occupations are almost always dumb ideas, and the people who claimed otherwise in 2002 by citing the US occupation of Germany and Japan after World War II were obviously dumb and/or dishonest in making that point before we ever got to their lack of expert knowledge of history. But the Bush Administration made what was always going to be something of a mess into a catastrophe by insisting that people who had expert knowledge of the Middle East, about Iraq, or even about counterinsurgency, be kept out of the planning of the invasion and the occupation. They got played again and again by unreliable allies, they provoked and motivated Iraqi resistance largely through blundering and incompetence, they wasted both blood and treasure due to inexpert fecklessness.

There are innumerable examples like this in the last sixty years of international relations, and more in the larger swath of world history. It is true enough that expertise alone does not guarantee better outcomes. Left to their own devices, without common sense or wisdom, experts will do things that are very nearly as catastrophic as what non-experts do. But the solution to the fallibility of experts is not to rubbish them altogether.

Here I think it is safe to say: bad things are going to happen if the Trump Administration is as serious as it appears to be about doing without expert advice in international and domestic policy.

————–

However, there is also this: experts of all kinds have some housecleaning to do in the wake of this election.

First, I’ll return to a point I’ve made many times on this blog. Professionals cannot claim that only they are capable of securing the quality of their services if they don’t actually self-police. Expertise lost some of its legitimacy as a force in public culture and governance through a long period of tolerance for ill-considered or badly supported guidance to policy makers and the public by some experts. I’m not talking here about research fraud, which I think we do well enough with given the difficulty of detecting it consistently, or about extremist outliers who provide patently unbalanced or unsupported advice, but instead the kind of mainstream social science and some natural science that makes overly strong claims about policy or action based on narrowly significant research findings, or is too constrained by over-specialization and so misses the forests for the trees. We have led a lot of people astray, or we have allowed poor-quality journalism or self-interested clients (like industries or particular ideologically-driven policy communities) to distort and misuse what we produce. We need to publish less and polish more, and to abandon narrow single-variable modes of explanation and intervention in dealing with genuinely complex problems. If we’re actually confident that expertise is necessary for governance and for institutional action more generally, then we should be thinking harder about how we make sure that what we deliver is of the highest quality (much as surgeons might generally see that they have a collective interest in preventing poorly-trained surgeons from killing or maiming patients). As much as possible, the dumb kind of cherrypicking favored by pundits like Ezra Klein or slick non-fiction writers like Malcolm Gladwell needs to be contested at every turn. If you buy expertise, we should force clients to buy the whole of it, and relentlessly challenge people who just cite the one thing from our work or guidance that they find flattering, sellable or instrumentally useful. We need to look at Philip Tetlock’s critique of expert political judgment and his accompanying analysis of “superforecasting” and take a lot of the diagnostic there to heart.

Second, in light of this, we have to see some portion of Trumpism’s vision of expertise as rooted in that history of exaggeration and misuse. And part of the problem is that our horrified reaction to Trumpism in turn at least can look like (and might actually be) another kind of “economic anxiety”, namely, a fear of losing one of our major markets for what we have trained to do, and thus a customer base of students looking to be trained similarly.

It would paradoxically help our shared reputation and perhaps rebuild public trust if we could acknowledge the degree of self-interest we have in the system operating as it has operated. Technocrats are as disliked as they are in part because they cast themselves as neutral arbiters who simply are providing information and knowledge without self-interest in either the service or the outcomes. The economies which support their work are frequently opaque even within insider circles, let alone to wider publics. Experts, whether they are pundits or staff members of large organizations or academics or public intellectuals, should have to disclose more clearly where they make their money, and how much getting paid depends on delivering specific kinds of counsel or research outcomes to specific clients.

Third, we should also be more confident in a sense that Trumpism is going to be a shitshow if it actually goes ahead and cuts expertise out of the loop as it is seeming to do thus far. I understand that it’s hard to watch bad things happen to our common, shared interests as a people and a world, but it is important in some sense that this horrific experiment be run without intervention. To whatever extent possible, real experts should withhold their guidance if the people now in charge show no respect for the entire idea of expert guidance, even if the consequences are serious, and document every case where the advice of experts was not sought or was superseded if provided. No one will thank us for confronting them with such an archive later on, any more than gravely ill patients welcome being scolded by a doctor who is exasperated by a patient’s systematic failure to follow medical advice, but this is precisely the kind of documentation we’re going to need in the future to re-establish the place of expertise in public life.

The Room Where It Happens
https://blogs.swarthmore.edu/burke/blog/2016/12/08/the-room-where-it-happens/
Thu, 08 Dec 2016

It would be in a way a comfort–and also a terror–to think, “Well, that’s those people, it’s the way they think, we cannot stop them and there is no way to engage them.”

It’s true, there is no way to engage them–that is what this article shows about Lenny Pozner’s efforts to confront conspiracy theorists who deny that his child died at Sandy Hook. And there is no way to stop them through some force or power that we can muster.

What I think we could do is start to recognize our connections to conspiratorial readings as well as our alienation from them. I know some of my close colleagues are less enamored than I am with some recent scholarly writing about the dangers of the “hermeneutics of suspicion”, and I take some of their points seriously.

But I do think that we have for almost fifty years been walking ourselves into a series of practices of reading the textual and cultural worlds around us as a series of visible clues to invisible processes. In some measure because that is the truth of those cultural worlds, in multiple ways. Texts have meanings that they do not yield up to an initial reading. They affect us in ways that are deferred, delayed, or mysterious. So we are right to pursue interpretations that look for how what is visible both produces invisible outcomes and is a sign of invisible circulations in the world.

It is also the truth that we are not witness to many of the moments that control our lives, and some of those are found in “the room where it happens”: in the private chambers of political and social power. But many more are nowhere to be found, produced out of the operations of complex systems that no one controls, in the arcs that fire between sociocultural synapses. We want desperately to see into both kinds of invisibility, and so we pore over the visible as a map to them.

We know that things persist which our society says we no longer profess. Racism, sexism, bias of many kinds, are visible, but you can’t trace them easily back to the visible text of political structure or even to deliberate professions of ideology, to intentional statements made willfully by individuals about how they will dispense the powers at their command. Steve Bannon is not Bull Connor, even if they have inside of them the same awful invisible edifice.

What this leads to–leads *us* to, as well as alt-right conspiracy theorists–is an assertion from the visible of the inevitability of the invisible, of a description of invisible specificity. I have listened to colleagues tell me with a straight face what happened in the room that I was in and they were not in, and have told them that what they’ve said is not even a permissible interpretation, it’s just wrong. To no avail: the people in question just kept telling the story of non-events as fact. I have listened at a full faculty meeting to one faculty member offer a description of what happened in a process of decision-making which she was not part of, only to be contradicted by five other faculty members who were part of it, and to the describer insisting that what she said was true while also insisting that she wasn’t saying that what her colleagues had said was untrue. What she said had happened while they were not in that room–but there was no room that they had not been in.

I think we could all compile examples, and we’re tempted to just say: that’s just that person being silly. Or it’s just minor. Or it’s an aberrant result of psychological imbalance.

This is letting ourselves off too lightly. It’s deep in our bones: we have battered ourselves against the shell that hides the invisible, we have produced an escalating tower of knowledge that stretches ever further into the sky without ever finding the heaven of truth, and we’re tired. We know still that there are rooms and entire worlds where it happens and we’re tired of being happened to. So we search for a crack, a clue, a fragment, a trail. We detect, we investigate. We deduce, believing in Holmesian fashion that the remaining impossibilities must be the truth. We describe things that never happened in the belief that they must have, and we attribute things that happened in immanence, in the air that surrounds us and chokes us, to specific agents and specific locations, to the devils we can name.

We, we, we. And them. Not all invisibilities are alike, and the work of inventing some of them is, as Pozner puts it beautifully in working through his own trauma, smothering everything human. It is the same paradox of witchcraft-finding in southern Africa: the quest to locate and confront evil becomes the evil it sets out to fight. But we are not homo evidentius, fighting an alien subspecies of homo conspiratorius. This is another strain of an illness that we also suffer from.

On the Arrival of Rough Beasts
https://blogs.swarthmore.edu/burke/blog/2016/05/05/on-the-arrival-of-rough-beasts/
Thu, 05 May 2016

One of the things I find most interesting about the history of advertising is the long-running conflict between the “creatives” and their more quantitative, data-driven opponents within ad agencies. It’s a widespread, durable opposition between a more humanistic, intuitive, interpretative style of decision-making and professional practice and a more rules-driven, empirical, formalistic approach.

The methodical researchers are generally going to have to create advertisements and construct marketing campaigns by looking at the recent past and assuming that the near-term future will be the same. In an odd way, I think their practices have been the analog equivalent to much of the algorithmic operations of digital culture, trained through the methodical tracking of observable behavior and the collection of very large amounts of sociological data. If you know enough about what people in particular social structures have done in response to similar opportunities, stimuli or messages, the idea goes, you’ll know what they will do the next time.
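
To make the retrospective premise concrete, here is a minimal sketch in Python. The data and the response rates are invented; no actual agency practice is being described. The point is only that a model which is nothing but the recent past is structurally blind to a shift it has never observed:

    import numpy as np

    # Toy illustration of a purely retrospective predictor (all data invented).
    # The "model" is just the historical response rate: the assumption that
    # the near-term future will repeat the recent past.
    rng = np.random.default_rng(0)

    # Past campaigns: audience response to a given message hovered around 30%.
    past_responses = rng.binomial(1, 0.30, size=5_000)
    predicted_rate = past_responses.mean()  # ~0.30

    # A cultural shift the training data cannot anticipate: the true
    # response rate quietly drops to 10%.
    future_responses = rng.binomial(1, 0.10, size=5_000)
    actual_rate = future_responses.mean()  # ~0.10

    print(f"predicted: {predicted_rate:.3f}  actual: {actual_rate:.3f}")
    # The prediction is confident and wrong; more historical data would
    # only make it more confident, not more right.

A fancier model (more covariates, deeper history) changes nothing essential here; the failure is in the retrospective premise, not in the fitting.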

My natural sympathies, however, are with the creatives. The creatives are able to do two things that the social science-driven researchers can’t. They can see the presence of change, novelty and possibility, even from very fragmentary or implied signs. And they can produce change, novelty and possibility. The creatives understand how meaning works, and how to make meaning. They’re much more fallible than the researchers: they can miss a clue or become intoxicated with a beautiful interpretation that’s wrong-headed. They’re restricted by their personal cultural literacy in a way that the methodical researchers aren’t, and absolutely crippled when they become too addicted to telling the story about the audience that they wish was true. Creatives usually try to cover mistakes with clever rhetoric, so they can be credited for their successes while their failures are forgotten. However, when there’s a change in the air, only a creative will see it in time to profit from it. And when the wind is blowing in a stupendously unfavorable direction, only a creative has a chance to ride out the storm. Moreover, creatives know that the data that the researchers hold is often a bluff, a cover story, a performance: poke it hard enough and its authoritative veneer collapses, revealing a huge hollow space of uncertainty and speculation hiding inside of the confident empiricism. Parse it hard enough and you’ll see the ways in which small effect sizes and selective models are being used to tell a story, just as the creatives do. But the creative knows it’s about storytelling and interpretation. The researchers are often even fooling themselves, acting as if their leaps of faith are simply walking down a flight of stairs.
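
That claim about small effect sizes is easy to demonstrate. Here is a short sketch, again with invented numbers rather than any real study, of how a trivially small effect clears the conventional significance bar once the sample is large enough, which is exactly the kind of result that then gets narrated with an authoritative veneer:

    import numpy as np
    from scipy import stats

    # Invented data: two groups differing by a negligible amount (0.01
    # standard deviations). At a large enough n, the difference is
    # "statistically significant" even though it explains almost nothing.
    rng = np.random.default_rng(1)
    n = 1_000_000
    control = rng.normal(loc=0.00, scale=1.0, size=n)
    treated = rng.normal(loc=0.01, scale=1.0, size=n)

    t_stat, p_value = stats.ttest_ind(treated, control)
    cohens_d = (treated.mean() - control.mean()) / 1.0  # population sd is 1

    print(f"p-value:   {p_value:.2e}")   # effectively zero at this n
    print(f"Cohen's d: {cohens_d:.4f}")  # about 0.01, a negligible effect
    # "Significant" and "matters" are different claims; the story told
    # from the first often borrows the authority of the second.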

This is only one manifestation of a division that stretches through academia and society. I think it’s a much more momentous case of “two cultures” than an opposition between the natural sciences and everything else. If you want to see this fault line somewhere besides advertising, how about in media-published social analysis of this year’s presidential election in the United States? Glenn Greenwald and Zaid Jilani are absolutely right that not only have the vast majority of analysts palpably misunderstood what was happening and what was going to happen, but most of them are now unconvincingly trying to bluff once again: the data makes sense, the models are still working, and the predictions are once again reliable.

The campaign analysts and political scientists who claim to be working from rock-solid empirical data will never see a change coming until it is well behind them. Up to the point of its arrival, it will always be impossible, because their models and information are all retrospective. Even the equivalent of the creatives in this arena are usually wrong, because most of them are not really trying to understand what’s out there in the world. They’re trying to make the world behave the way they want it to behave, and they’re trying to do that by convincing the world that it’s already doing exactly what the pundit wants the world to do.

The rise of Donald Trump is only the most visible sign of the things that pundits and professors alike do not understand about which way the wind is blowing. For one, Trump’s rise has frequently been predicted by one set of intuitive readers of American political life. Trump is consequence given flesh, the consequence that some observers have said would inevitably follow from a relentless disregard for truth and evidence that’s been thirty years in the making, from a reckless embrace of avowedly instrumental and short-term pursuit of self-interest, from a sneering contempt for consensus and shared interests. He’s the consequence of engineering districts where swing votes don’t matter and of allowing big money to flood the system without restraint. He’s what many intuitive and data-driven commentators have warned might happen if all that continued. But the election analysts can’t think in these terms: the formal and understood rules of the game are taken to be unchanging. The analysts know what they know. The warning barks from the guard-dogs are just an overreaction to a rustle in the leaves or a cloud over the moon.

But it’s more than that. The pundits and professors who got it wrong on Trump (and who are, I think, still wrong in understanding what might yet happen) get it wrong because the vote for Trump is a vote against the pundits and professors. The political class, including most of the Republican Party but also a great many progressives, has gotten too used to the idea that it knows how to frame the narrative, how to spin the story, how to massage the polls, how to astroturf or hashtag. So many mainstream press commentators are now trying to understand why Trump’s alleged gaffes weren’t fatal to his candidacy, and they’re stupidly attributing that to some kind of unique genius on Trump’s part. The only genius that Trump has in this respect is understanding what was going on when his poll numbers grew rather than dropped after those putative gaffes. The content of those remarks was and remains secondary to his appeal. The real appeal is that he doesn’t give a shit what the media says, what the educated elite say, what the political class says. This is a revolt against us–against both conservative and progressive members of the political class. So of course most of the political class can’t understand what’s going on and keeps trying to massage this all back into a familiar shape that allows it to once again imagine being in control.

Even if Trump loses, and I am willing to think he likely will by a huge margin, that will happen only because the insurgency against being polled, predicted, dog-whistled, manipulated and managed into the kill-chutes that suit the interests of various powers-that-be has not yet coalesced into a majority, and moreover is riven internally by its own sociological divisions and divergences. But even as Trump was in some sense long predicted by the gifted creatives who sift the tea leaves of American life, let me also predict another thing: if the political class remains unable to understand the circumstances of its own being, and if it is not able to abandon its fortresses and silos, the next revolt will not be so easily contained.

Opt Out
https://blogs.swarthmore.edu/burke/blog/2016/02/23/opt-out/
Tue, 23 Feb 2016 19:22:45 +0000

There is a particular kind of left position, a habitus that is sociologically and emotionally local to intellectuals, that amounts in its way to an anti-politics machine. It’s a perspective that ends up with its nose pressed against the glass, looking in at actually-existing political struggles with a mixture of regret, desire and resignation. Inasmuch as there is any hope of a mass movement in a leftward direction in the United States, Western Europe or anywhere else on the planet, electoral or otherwise, I think it’s a loop to break, a trap to escape. Maybe this is a good time for that to happen.

Just one small example: Adam Kotsko on whether the Internet has made things worse. It’s a short piece, and consciously intended as a provocation, as much of his writing is, and full of careful qualifiers and acknowledgements to boot. But I think it’s a snapshot of this particular set of discursive moves that I am thinking of as a trap, moves that are more serious and more of a leaden weight in hands other than Kotsko’s. And to be sure, in an echo of the point I’m about to critique, this is not a new problem: to some extent this is a continuous pattern that stretches back deep into the history of Western Marxism and postmodernism.

Move #1: Things are worse now. But they were always worse.

Kotsko says this about the Internet. It seems worse but it’s also just the same. Amazon is just the Sears catalogue in a new form. Whatever is bad about the Internet is an extension, maybe an intensification, of what was systematically bad and corrupt about liberalism, modernity, capitalism, and so on. It’s neoliberal turtles all the way down. It’s not worse than a prior culture and it’s not better than a prior culture. (Kotsko has gone on to say something of the same about Trump: he seems worse but he’s just the same. The worst has already happened. But the worst is still happening.)

I noted over a decade ago the way that this move handicapped some forms of left response to the Bush Administration after 9/11. For the three decades before 9/11, especially during the Cold War, many left intellectuals in the West practiced a kind of High Chomskyianism when it came to analyzing the role of the United States in the world, viewing the United States as an imperial actor that sanctified torture, promoted illiberalism and authoritarianism, and acted only for base and corrupt motives. Which meant in some sense that the post-9/11 actions of the Bush Administration were only more of the same. Meet the new boss, same as the old boss. But many left intellectuals wanted to frame those actions as a new kind of threat, as a break or betrayal of the old order. Which required saying that there was a difference between Bush’s unilateralism and open sanction of violent imperial action on one hand and the conduct of the United States during the Cold War and the 1990s on the other, and that the difference was between something better and something worse. Not between something ideal and something awful, mind you: just substantively or structurally better and substantively or structurally worse.

This same loop pops up sometimes in discussions of the politics of income inequality. To argue that income inequality is so much worse today in the United States almost inevitably requires seeing the rise of the middle-class in postwar America as a vastly preferable alternative to our present neoliberal circumstances. But that middle-class was dominated by white straight men and organized around nuclear-family domesticity, which no progressive wants to see as a preferable past.

It’s a cycle visible in the structure of Howard Zinn’s famous account of American history: in almost all of Zinn’s chapters, the marginalized and the masses rise in reaction to oppression, briefly achieve some success, and then are crushed by dominant elites, again and again and again, with nothing ever really changing.

It’s not as if any of these negative views of the past are outright incorrect. The U.S. in the Cold War frequently behaved in an illiberal, undemocratic and imperial fashion, particularly in the 1980s. Middle-class life in the 1950s and 1960s was dominated by white, straight men. The problems of culture and economy that we identify with the Internet are not without predicate or precedent. But there is a difference between equivalence (“worse now, worse then”) and seeing the present as worse (or better) in some highly particular or specific way. Because the latter actually gives us something to advocate for. “Torture is bad, and because it’s bad, it is so very very bad to be trying to legitimate or legalize it.” “A security state that spies on its own people and subverts democracy is bad, and because it’s bad, it’s so much worse when it is extended and empowered by law and technology.”

When everything has always been worst, it is fairly hard to mobilize others–or even oneself–in the present. Because nothing is really any different now. It is in a funny kind of way a close pairing to the ahistoricism of some neoliberalism: that the system is the system is the system. That nothing ever really changes dramatically, that there have been in the lives and times that matter no real cleavages or breaks.

Move #2: No specific thing is good now, because the whole system is bad.

In Kotsko’s piece on the Internet, this adds up to saying that there is no single thing, no site or practice or resource, which stands as relatively better (or even meaningfully different) apart from the general badness of the Internet. Totality stands always against particularity, system stands against any of its nodes. Wikipedia is not better than Amazon, not really: they’re all connected. Relatively flat hierarchies of access to online publication or speech are not meaningful because elsewhere writers and artists are being paid nothing.

This is an even more dispiriting evacuation of any political possibility, because it moves pre-emptively against any specific project of political making, or any specific declaration of affinity or affection for a specific reform, for any institution, for any locality. Sure, something that exists already or that could exist might seem admirable or useful or generative, but what does it matter?

Move #3: It’s not fair to ask people how to get from here to a totalizing transformation of the systems we live under, because this is just a strategy used to belittle particular reforms or strategies in the present.

I find the sometimes-simultaneity of #2 and #3 the most frustrating of all the positions I see taken up by left intellectuals. I can see #2 (depressing as it is) and I can see #3 (even when it’s used to defend a really bad specific tactical or strategic move made by some group of leftists) but #2 and #3 combined are a form of turtling up against any possibility of being criticized while also reserving the right to criticize everything that anyone else is doing.

I think it’s important to have some idea about what the systematic goals are. That’s not about painting a perfect map between right now and utopia, but the lack of some consistent systematic ideas that make connections between the specific campaigns or reforms or issues that draw attention on the left is one reason why we end up in “circular firing squads”. But I also agree that it’s unfair to argue that a specific reform or ideal is not worth taking up if it can’t explain how that effort will fix everything that’s broken.

Move #4: It’s futile to do anything, but why are you just sitting around?

This is, in other words, another form of justifying a kind of supine posture for left intellectuals: a certainty that there is no good answer to the question “What is to be done?” but that the doing of nothing by others (or their preoccupation with anything but the general systematic brokenness of late capitalism) is always worth complaining about. Indeed, that the complaint against the doing-nothingness of others is a form of doing-something that exempts the complainer from the complaint.

——-

The answer, it seems to me, is to opt out of these traps wherever and whenever possible.

We should historicize always and with specificity. No, everything is not worse or was not worse. Things change, and sometimes neither for better nor worse. Take the Internet. There’s no reason to get stuck in the trap of trying to categorize or assess its totality. There are plenty of very good, rich, complex histories of digital culture and information technology that refuse to do anything of the sort. We can talk about Wikipedia or Linux, Amazon or Arpanet, Usenet or Tumblr, without having to melt them into a giant slurry that we then weigh on some abstracted scale of wretchedness or messianism.

If you flip the combination of #2 and #3 on its head so that it’s a positive rather than a negative assertion, that we need systematic change and that individual initiatives are valid, then it’s an enabling rather than disabling combination. It reminds progressives to look for underlying reasons and commitments that connect struggles and ideals, but it also appreciates the least spreading motion of a rhizome as something worth undertaking.

If you reverse #4, maybe that could allow left intellectuals to work towards a more modest and forgiving sense of their own responsibilities, and a more appreciative understanding of the myriad ways that other people seek pleasure and possibility. That not everything around us is a fallen world, and that not every waking minute of every waking day needs to be judged in terms of whether it moves towards salvation.

We can’t keep saying that everything is so terrible that people have got to do something urgently, right now, but also that it’s always been terrible and that we have always failed to do something urgently, or that the urgent things we have done never amount to anything of importance. We disregard both the things that really have changed–Zinn was wrong about his cyclical vision–and the things that might become worse in a way we’ve never heretofore experienced. At those moments, we set ourselves against what people know in their bones about the lives they lived and the futures they fear. And we can’t keep setting ourselves in the center of some web of critique, ready to spin traps whenever a thread quivers with movement. Politics happens at conjunctures that magnify and intensify what we do as human beings–and offer both reward and danger as a result. It does not hover with equal anxiety and import around the buttering of toast and the gathering of angry crowds at a Trump rally.

Inchworm
https://blogs.swarthmore.edu/burke/blog/2015/10/02/inchworm/
Fri, 02 Oct 2015 22:02:32 +0000

Over the last decade, I’ve found my institutional work as a faculty member squeezed into a kind of pressure gradient. On one side, our administration has been requesting or requiring more and more data, reporting and procedures, either needed to document some form of adherence to the standards of external institutions or wanted in order to further professionalize and standardize our operations. On the other side, I have colleagues who either ignore such requests (both specific ones and the entire issue of administrative process) to the maximum extent possible or reject them entirely on grounds that I find either ill-informed or breathtakingly sweeping.

That pressurized space forms from wanting to be helpful but also wanting to actually take governance seriously. I think stewardship doesn’t conform well to a hierarchical structure, but it also should come with some sense of responsibility to the reality of institutions and their relationship to the wider world. The strongest critics of administrative power that I see among faculty, both here at Swarthmore and in the wider world of public discourse by academics, don’t seem very discriminating in how they pick apart and engage various dictates or initiatives and, more importantly, rarely seem to have a self-critical perspective on faculty life and faculty practices. At the same time, there’s a lot going on in academia that comes to faculty through administrative structures and projects, and quite a lot of that activity is ill-advised or troubling in its potential consequences.

A good example of this confined space perennially forms for me around assessment, which I’ve written about before. Sympathy for my colleagues charged with administrative responsibilities around assessment means I should take what they ask me to produce seriously, both because there are consequences for the institution if faculty fail to do it in the specified manner and because I value them and even value the concepts embedded in assessment.

On the most basic human level, I agree that the unexamined life is not worth living. I agree that professional practices which are not subject to constant examination and re-evaluation have a tendency to drift towards sloppiness and smug self-regard. I acknowledge that given the high costs of a college education, potential students and their families are entitled to the best information we can provide about what our standards are and how we achieve them. I think our various publics are entitled to similar information. It’s not good enough to say, “Trust us, we’re great”. That’s not even healthy if we’re just talking to ourselves.

So yes, we need something that might as well be called “assessment”. There is some reason to think that faculty (or any other group of professionals) cannot necessarily be trusted to engage in that kind of self-examination without some form of institutional support and attention to doing so. And what we need is not just introspective but also expressive: we have to be able to share it, show it, talk about it.

On the other hand, throughout my career, I’ve noticed that a lot of faculty do that kind of reflection and adjustment without being monitored, measured, poked or prodded. Professionalization is a powerful psychological and intellectual force through the life cycle of anyone who has passed through it, for good and ill. The most powerfully useful forms of professional assessment or evaluation that I can think of are naturally embedded in the workflow of professional life. Atul Gawande’s checklists were a great idea because they could be inserted into existing processes of preparation and procedure, because they are compatible with the existing values of professionals. A surgeon might grouse at the implication that they needed to be reminded about which leg to cut off in an amputation but that same surgeon would agree that it’s absolutely essential to get that right.

So assessment that exists outside of what faculty already do anyway to evaluate student learning during a course (and between courses) often feels superfluous, like busywork. It’s worse than that, however. Not only do many assessment regimes add procedures like baroque adornments and barnacles, they attach to the wrong objects and measure the wrong things. The amazing thing about Gawande’s checklists is that they spread because of evidence of their very large effect size. But the proponents of strong assessment regimes, whether that’s agencies like Middle States or Arne Duncan’s troubled bureaucratic regime at the U.S. Department of Education, habitually ignore evidence about assessment that suggests it is mostly measuring the wrong things at the wrong time in the wrong ways.

The evidence suggests, especially for liberal arts curricula, that you don’t measure learning course by course, and you don’t measure it ten minutes after the end of each semester’s work. Instead you ought to be measuring it over the range of a student’s time at a college or university, and measuring it well afterwards. You ought to be measuring it by the totality of the guidance and teaching a faculty member provides to individual students, and by moments as granular as a single class assignment. And you shouldn’t chunk learning down into a series of discrete outcomes chosen largely because they’re the most measurable; you should work instead through the assemblage of a series of complex narratives and reflections, through conversations and commentaries.

In a given semester, what assessment am I doing whether I am asked to do it or not? In any given semester, I’m always trying some new ways to teach a familiar subject, and I’m always trying to teach some new subjects in some familiar ways. I am asking myself in the moment of teaching, in the hours after it, at the end of a semester and at the beginning of the next: did that work? What did I hope would work about it? What are the signs of its working: in the faces of students, in the things they say then and there in the class, in the writing and assignments they do afterwards, in the things they say during office hours, in the evaluations they provide me. What are the signs of success or failure? I adjust sometimes in the moment: I see something bombing. I see it succeeding! I hold tight in the moment: I don’t know yet. I hold tight in the months that follow: I don’t know yet. I look for new signs. I try it again in another class. I try something else. I talk with other faculty. I write about it on my blog. I read what other academics say in online discussion. I read scholarship on pedagogy.

I assess, I assess, I assess, in all those moments. I improve, I think. But also I evolve, which is sometimes neither improvement nor decline, simply change. I change as my students change, as my world changes, as my colleagues change. I improvise as the music changes. I assess.

Why is that not enough for the agencies, for the federal bureaucrats, for the skeptical world? Two reasons, namely. The first is that we have learned not to trust the humanity of professionals when they assure us, “Don’t worry, I’m on it.” For good reasons sometimes. Because professionals say that right up to the moment that their manifest unprofessionalism is laid screamingly bare in some awful rupture or failure. But also because we are in a great war between knowing that most of the time people have what my colleagues Barry Schwartz and Ken Sharpe call “practical wisdom” and knowing that some of the time they also have an innocent kind of cognitive blindness about their work and life. Without any intent to deceive, I can nevertheless think confidently that all is well, that I am teaching just as I should, that I am always above average and getting better all the time, and be quite wrong. I might not know that I’m not seeing or serving some group of students as they deserve. I might not know that a technique that I think delivers great education only appears to because I design tests or assignments that evaluate only whether students do what I want them to do, not whether they’ve learned or become more generally capable. I might not know that my subject doesn’t make any sense any longer to most students. Any number of things.

So that’s the part that I’ll concede to the assessors: it’s not enough for me to be thoughtful, to be practically wise, to work hard to sharpen my professionalism. We need something outside ourselves: an observer, a coach, a reader, an archive, a checklist.

I will not concede, however, that their total lack of interest in this vital but unmeasurable, unnumbered information is acceptable. This should be the first thing they want: our stories, our experiences, our aspirations, our conversation. A transcript of the lived experience of teaching. This is the second reason the assessors treat what we think about our teaching as unwanted and unneeded. They don’t want it because they believe that all rhetoric is a lie, all stories are told only to conceal, all narrative is a disguise. They think that the work of interpretation is the work of making smoke from fog, of making lies from untruths. They think that because stories belong at least somewhat to the teller, because narratives inscribe the authority of the author. They don’t want to know how I assess the act of teaching as I perform it because they want a product, not a process. They want data that belongs to them, not information that creates a relationship between the interpreter and the interpreted. They want to scrub evidence clean, to make an antiseptic knowledge. They want bricks and mortar, and to be left alone to build with them as they will.

——————

I get tired of the overly casual use of “neoliberal” as a descriptive epithet. Here however I will use it. This is what neoliberalism does to rework institutions and societies into its preferred environment. This is neoliberalism’s enclosure, its fencing off of commons, its redrawing of the lines. The first thing that gets done with data that has had its narrative and experiential contaminants scrubbed clean is that the data is fed back into the experience of the laborers who first produced it. This was done even before we lived in an algorithmically-mediated world, and has only intensified since.

The data is fed back in to tell us what our procedures actually are, what our standards have always been. (Among those procedures will always be the production of the next generation of antiseptic data for future feedback loops.) It becomes the whip hand: next year you must be .05% better at the following objectives. If you have objectives not in the data, they must be abandoned. If you have indeterminacies in what you think “better” is, that’s inadmissible: rarely is this looping even subject to something like a Bayesian fuzziness. This is not some exaggerated dystopic nightmare at the end of an alarmist slippery slope: what I’m describing has already happened to higher education in the United Kingdom, largely accomplishing nothing besides sustaining a class of transfer-seeking technocratic parasites who have settled into the veins of British universities.

It’s not just faculty who end up caught in the loop, and like frogs boiling slowly to death, we often don’t see it happening as it happens. We just did our annual fire drill here in my building, and this year the count that we did of the evacuees seemed more precise and drawn-out than last year, and this year we had a mini-lecture about the different scenarios and locations for emergency assembly and it occurred to me: this is so we can report that we did .05% better than last year.

We always have to improve just a little, just as everything has to be “growth-based”, a little bigger next year than last year. It’s never good enough to maintain ground, to defend a center, to sustain a tradition, to keep a body healthy, happy and well. Nor is it ever good enough to be different next year. Not a bit bigger, not a bit better, but different. New. Strange. We are neither to be new nor are we to maintain. We are to incrementally approach a preset vision of a slightly better but never perfect world. We are never to change or become different, only to be disrupted. Never to commune or collaborate, always to be architected and built.

———————

So here I am in the gradient again, bowed down by the push on all sides. I find it so hard when I talk to faculty and they believe that their teaching is already wholly and infinitely sufficient. Or that it’s nobody’s business but their own how they teach, what they teach, and what comes of their teaching. Or that the results of their teaching are so sublime, ineffable and phenomenologically intricate that they can say nothing of outcomes or consequences. All these things get said, at Swarthmore and in the wider world of academia. An unexamined life.

Surely we can examine and share, express and create. Surely we can provide evidence and intent. Assess and be assessed in those ways. Surely we don’t have to bury that underneath fathoms of tacit knowledge and inexpressible wisdom. We can have our checklists, our artifacts.

But surely too we can expect from administrations that want to be partners that we will not cooperate in building the Great Machine out of the bones of our humane work. That we’re not interested in being .05% better next year, but instead in wild improvisations and foundational maintenance, in becoming strange to ourselves and familiar once again, in a month, a moment or a lifetime. Surely that’s what it means to educate and become educated in an uncertain world: not .05% more measured comprehension of the impact of the Atlantic slave trade on Sao Tome, but thinking about how a semester of historical study of the Atlantic slave trade might help a poet forty years hence to write poems, might sharpen an analytic mind, might complicate what was simple or simplify what was complex. Might inform a diplomat ten years from now, might shape a conservative’s certainty that liberals have no answers when he votes in next year’s Presidential race. Might inspire a semester abroad, might be an analogy for an experience already had. I can talk about what I do to build ramps to all those possibilities and even to the unknown unknowns in a classroom. I can talk about how I think it’s working and why I think it’s working. But don’t do anything that will lead to me or my successors having to forgo all of that thought in favor of .05% improvements onward into the dreary night of an incremental future.

Is There a Desert or a Garden Underneath the Kudzu of Nuance?
https://blogs.swarthmore.edu/burke/blog/2015/08/31/is-there-a-desert-or-a-garden-underneath-the-kudzu-of-nuance/
Mon, 31 Aug 2015 17:52:21 +0000

I like this essay by Kieran Healy a lot, even though I am probably the kind of person who habitually calls for nuance. What it helps me understand is what I am doing when I make that nearly instinctive move. I suppose in part I am doing what E.P. Thompson did in writing against theory as abstraction: believing that the important things to understand about human life are always descriptive, always in the details, always in what is (or was) lived, real, and tangible. There are days when I would take more persuasion, both as scholar and person, from the truths found in a novel or a deep work of narrative journalism than from social theory. But it is stupid to act as if one can be a microhistorian in a naive and unstructured fashion: there’s tons of theory in there somewhere, from the selection of the stories that we find worth our time to what we choose to represent them as saying. I do not read about human beings and then insist that the only thing I can do is just read to you what I read. I describe, I compress, I abstract. That’s what Kieran is arguing that theory is, and what the demand for “nuance” prevents us from doing in a conscious and creative way.

I suppose I lately have a theory of theory, which is that it is usually a prelude to doing something to human beings wherein the abstractions that make theory ‘good to think’ will become round holes through which real human square pegs are to be pounded. But this is in some sense no better (or worse) than any other abstraction–to really stick to my preferences, I should take every theory (and its application or lack thereof) on its particulars.

I also think that there is something of a puzzle that Kieran works around in the piece, most clearly in his discussion of aesthetics. (Hopefully this is not an objection about the need for nuance by some other name.) But it is this: on what grounds should we prefer a given body of theory if not for its descriptive power? Because that’s what causes the kudzu of nuance to grow so fast and thoroughly: academics read each other’s work evaluatively, even antagonistically. What are we to value between theories if not their descriptive accuracy? (If that’s what we are to value, that will fertilize the kudzu, because that’s what leads to ‘your theory ignores’ and ‘your theory is missing…’) We could value the usefulness of theory: the numbers of circumstances to which it can apply. Or the ease-of-use of theory: its memorability, its simplicity, its familiarity. Or the generativity of theory, tested by the numbers of people who actually do use it, the amount of work that is catalyzed by it.

The problem with all or any of those is that I don’t know that it leaves me with much when I don’t like a theory. Rational choice/homo economicus fits all of these: it is universal in scope, it’s relatively easy to remember and apply as a way to read many many episodes and phenomena, and it has been hugely generative. I don’t like it because I think for one it isn’t true. Why do I think that? Because I don’t think it fits the actual detailed evidence of actual human life in any actually existing human society. Or the actual evidence of how human cognition operates. But I also don’t like it because of what is done in the name of such theory. That would always have to be a post-facto kind of judgment, though, if I were prohibited from a complaint about the mismatch between a theory and the reality of human life, or it would have to be about ad hominem: do I dislike or mistrust the politics of the theorists?

I think this is why we so often fall back into the kudzu of nuance, because if we clear away the overgrowth, we will face one another naked and undisguised. We’d either have to say, “I find your theory (and perhaps you) aesthetically unpleasing or annoying” or “I don’t like the politics of your theory (and perhaps you) and so to war we will go”. The kudzu of nuance may be ugly and confusing, but it at least lets us continue to talk at and past one another without arriving at a moment of stark incommensurability.

Hearts and Minds
https://blogs.swarthmore.edu/burke/blog/2015/04/21/hearts-and-minds-2/
Tue, 21 Apr 2015 19:13:09 +0000

Much as I disliked Jonathan Haidt’s recent book The Righteous Mind overall, I’m quite interested in many of the basic propositions that this strain of cognitive science and social psychology is proposing about mind, consciousness, agency, responsibility and will. What most often frustrates me is not how unsettling the scholars writing in this vein are but how much they domesticate their arguments or avoid thinking through the implications of their findings.

When we read The Righteous Mind together at Swarthmore, for example, one of my chief objections to Haidt’s own analysis was that he simply asserts that what he and others have called WEIRD psychosocial dispositions (Western, Educated, Industrialized, Rich and Democratic) at some point emerged in recent human history (as the acronym suggests) and have never been common or universal at any point since, including now. Haidt essentially leverages that claim into an argument that “conservative” dispositions are the real universal, which I don’t think he even remotely proves, and then gets even deeper into the weeds by suggesting that people with WEIRD-inflected moral dispositions would accomplish more of their social and political objectives if only they acted somewhat less WEIRD. The argument achieves maximum convolution when Haidt seems to suggest that he prefers WEIRD outcomes, because he’s largely stripped away the ground on which he or anyone else could argue for that preference as something other than the byproduct of a cognitive disposition. Why are those outcomes preferable? If they are preferable in terms of some kind of fitness, that they produce either better individual or species-level outcomes in terms of reproduction and survival, presumably that will take care of itself over time. If they are preferable because of some other normative rationale, then where are we getting the capacity for reason that allows us to recognize that? Is it WEIRD to think of WEIRD, in fact? Is The Righteous Mind itself just a product of WEIRD cognitive dispositions? (E.g., the proposition that one should write a book which is based on research which argues that the writing of books based on research should persuade us to sometimes make moral arguments that do not derive their force from the writing of books based on research.)

————

Many newer cognitivist, evolutionary-psychological and memetics-themed arguments get themselves into the same swamp. Is memetics itself just a meme? What kind of meme reproduces itself more readily by revealing its own character? Is “science” or “rationality” just a fitness landscape for memes? Daniel Kahneman at least leaves room for “thinking slow”, which is potentially the space inhabited by science, but the general thrust of scholarly work in these domains makes it harder and harder to account for “thinking slow”, for a self-aware, self-reflective form of consciousness that is capable of accurately or truthfully understanding some of its own conditions of being.

But it isn’t just cognitive science that is making that space harder and harder to inhabit. Various forms of postmodern and poststructuralist thought have arrived at some similar rebukes to various forms of Cartesian thinking via some different routes. So here we are: the autonomous self driven by a rational mind with its own distinctive individual character and drives is at the very least a post-1600 invention. This to my mind need not mean that the full package of legal, institutional and psychological structures bound up in that invention are either fake impositions on top of some other “real” kind of consciousness or sociality, nor that this invention is always to be understood as and limited to a Eurocentric imposition. “Invention” is a useful concept here: technologies do not drift free of the circumstances of their creation and dissemination, but they can be powerfully reworked and reinterpreted as they spread to other places and other circumstances.

Still, if you believe the new findings of cognitivists, we may be at the real end of that way of thinking about the nature of personhood and identity, and thus maybe at the cusp of experiencing our sense of selfhood differently as well. I think this is where I really find the new cognitivists lacking in imagination, to the point that I end up thinking that they don’t really believe what their own research supposedly shows. If they’re right (and this might apply to some flavors of poststructuralist conceptions of subjectivity and personhood, too), then most of our social structures are profoundly misaligned with how our minds, bodies and socialities actually work. What makes me most queasy about a lot of contemporary political and social discourse in the US in this respect is how unevenly we invoke psychologically or cognitively inflected understandings of responsibility, morality, and capacity. Often we seem to invoke them when they suit our existing political and social commitments or prejudices and forget them when they don’t. About which Haidt, Kahneman and others would doubtless say, “Of course, that’s our point”–except that if you believe that’s true, then it would apply to their own research and the arguments they make about its implications: cognitivism is itself evidence of “moral intuitions”.

————-

Think for example about the strange mix of foundational assertions that now often govern the way we talk about the guilt or innocence of individuals who are accused of crimes or of acting immorally. There has always been some room for debating both nature and nurture in public disputes over criminality and immorality in the US in the 19th and 20th centuries, but the mix now is strikingly different. If you take much of the new work in cognitive science seriously, its implications for criminal justice systems ought to be breathtakingly broad and comprehensive. It’s not clear that anyone is ever guilty in the sense that our current systems assume we can be, e.g., that as rational individuals, we have chosen to do something wrong and should be held accountable. It’s equally unclear whether we can ever be expected to accurately witness a crime, or whether we are ever capable of accurately judging the guilt or innocence of individuals accused of crimes without being subject both to cognitive bias and to large-scale structures of power.

But even among the true believers in the new cognitive science, claims this sweeping are made at best fitfully, and equally many of us in other contexts deploy cognitive views of guilt, responsibility and evidence only when they reinforce political or social ideologies that we support. Many of us (including myself) argue for the diminished (or even absent) responsibility of at least some individuals for behaving criminally or unethically when we believe that they are otherwise the victims of structural oppression or that they are suffering from the aftermath of traumatic experience. But some of us then (including myself) argue for the undiminished personal-individual-rational responsibility of individuals who possess structural power, regardless of whether they have cognitive conditions that might seem to diminish responsibility or have suffered from some form of social or experiential trauma.

Our existing maps of power don’t overlay very well in some cases onto what the evidence of the new cognitive science might try to tell us, or even sometimes onto other vocabularies that try to escape a Cartesian vision of the rational, self-ruling individual. A lot of cultural anthropology describes bounded, local forms of reason or subjectivity and argues against expecting human beings operating within those bounds to work within some other form of reason. We try to localize or provincialize any form of reason, all modes of subjectivity, but then we often don’t treat the social worlds of the powerful as yet another locality. We don’t try for an emic understanding of how particular social worlds of power see and imagine the world, but instead treat many social actors in those worlds as if they are the Cartesian, universal subjects that they claim to be, and thus hold them responsible for what they do as if they could have seen and done better from some point of near-universal scrutiny of the rational and moral landscape of human possibility.

———–

From whatever perspective–cognitive science, poststructuralism, cultural anthropology, and more–we keep reanimating the Cartesian subject and the social and political structures that were made in its name even when we otherwise believe that minds, selves, consciousness and subjectivity don’t work that way and ought not to work that way. I think at least to some extent this is because we either cannot really imagine the social and political structures that our alternative understandings imply (and thus resort to metaphors: rhizomes, etc.) or because we can imagine them quite well and are terrified by them.

The new cognitivism or evolutionary psychology, if we took it wholly seriously, would either have to tolerate a much broader range of behaviors now commonly defined as crimes and ethical violations as being natural (because where could norms that argue against nature possibly come from, save perhaps from some countervailing cognitive or evolutionary operation) or alternatively would have to approach crime and ethical misbehavior through diagnosis rather than democracy.

The degree to which poststructuralism of various kinds averts its anticipatory gaze when actually confronted by institutionalizations of fragmented, partial or intersectional subjectivity (as opposed to pastward re-readings of subjects and systems now safely dead or antiquated) is well-established. We hover perpetually on the edge of provincializing Europe or seeing the particularity of whiteness, because to actually do it is to establish the boundedness, partiality and fragility of subjects that we otherwise rely upon to be totalizing and masterful, even in our imagination of how that center might eventually be dispersed or dissolved.

I’m convinced that the sovereign liberal individual with a capacity (however limited) for a sort of Cartesian rationalism was and remains an invention of a very particular time and place and thus was and remains something of a fiction. What I’m not convinced of is whether any of the very different projects that either know or believe in alternative ways of imagining personhood and mind really want what they say they want.
