Oath for Experts – Easily Distracted
Culture, Politics, Academia and Other Shiny Objects

Masking and the Self-Inflicted Wounds of Expertise

A broken clock tells the time accurately twice a day, but Donald Trump tells the truth even less often than that. Never on purpose and rarely even by accident. And yet he told an accidental truth recently, one that doesn’t reflect well on him, in saying that some Americans wear masks consistently today because masks have become a symbol of opposition to Trump.

Almost everything that involves the actions of the federal government has been like that since the fall of 2016. What the government does signifies Trump, what it doesn’t do or pointedly refuses to do signifies resistance to his authority. It isn’t instantly true: some policies and actions of the government just continue to signify ordinary operations and the provision of expected services. But the moment Trump becomes even slightly aware of any given policy or action and addresses it even once, the 60-40 divide that now structures two cultural and imaginative sovereignties instantly manifests and the signifiers fall rapidly into Trump’s devouring gravitational pull.

It’s likely true that in any other administration, typical public health discourse about covid-19, including advice on masks, would have been met with some paranoia or resistance, all the more so if masks and the constriction of economic activity were co-identified. It’s also true that Trump is the explosive, catastrophic culmination of thirty years of deliberate Republican subversion of the authority of scientific expertise and the cultivation of the logics of conspiracy theory. Some degree of partisan division in the reaction to various suggestions and orders would have been inevitable even were the President a competent, reasonable adult who believed that the Presidency must at least rhetorically and conceptually be devoted to the leadership of the entire body politic, not an inward-turning constituency of far-right Americans trying to preserve their racial and cultural privileges. No matter what, we would have had a surplus of the sort of fragility, weakness, incoherence and malice that has been on display in public hearings in Florida, California and elsewhere over masking policies. But without Trump, I think that would have been more clearly a fringe sentiment with relatively little weight on the body politic. With him, it is a crushing burden.

But if we hope to eventually emerge from this catastrophic meltdown into a better, more democratic, more just, and more commonsensical nation–perhaps even just into a country that possesses a much larger supply of the adult maturity required to just wear a mask for a year or so in order to safeguard both our own personal health and the health of our fellow citizens–then we have other kinds of work to do as well. One of the major tasks is that experts and educated professionals have got to learn to give up some of their own bad habits. If Republicans have worked to sabotage science and expertise in order to protect their own interests from regulation or constraint, then experts have frequently amplified those ill-meant efforts through their own ineptitude, their own attraction to quack social science, and their own wariness about democratic openness.

—————

This is an old theme for me at this blog, but the masking debacle provides a fresh example of how deep-seated the problem really is.

The last fifteen years have been replete with examples of how many common assumptions we make about medical therapies, sociological and economic phenomena, drivers of psychological behavior and experience and much else besides rest on very thin foundations of basic research and on early much-cited work that turns out to be a mixture of conjecture and the manipulation of data. We know much less than we often suppose, and we tend to find that out at very inopportune moments.

In the present moment, for example, it turns out that we perhaps know much less than we assume about just how long a virus like covid-19 can be infectious in human respiration, how far it travels, and precisely how much wearing a fabric mask with some form of non-woven filter inside might protect a person who was wearing it properly, in relationship to a variety of atmospheric conditions (indoors, outdoors; strong air movement, little air movement; rapid athletic respiration, ordinary at-rest respiration), and so on. There are very legitimate reasons why these are not things we can study well right now in the middle of this situation, and why they are a hard set of variables to measure accurately even when the situation is not urgent.

And yet. It has seemed likely from the very first news of a novel coronavirus spreading rapidly in China that wearing a mask, even a simple fabric or surgical mask, might help slow the spread of the virus and offer some form of protection to the wearer, however humbly or partially so.

The early response of various offices within the US government will likely receive considerable critical attention for the next decade and beyond. Not only did the unspeakably self-centered political imperatives of the Trump Administration intervene at a very early juncture, but also there seem to have been some basic breakdowns in competence and leadership at the CDC and elsewhere.

The question of masks, however, was bungled in a more complicated and diffuse way. It’s now clear that most public health officials and medical experts knew full well from the very first news about covid-19 that even surgical or fabric masks, but especially N95 or other rated masks, would provide some measure of personal and collective protection for any wearer. And yet many voices stressed until late March 2020 that masks weren’t useful to the general public, that social isolation was the only effective counter-measure, that no one but medical workers or people in close contact with covid-19 patients should be wearing masks. Why not tell people to wear masks from the outset?

The answer seems to be only very slightly about any degree of uncertainty about the empirical evidence for mask-wearing. What really seems to have driven the reluctance to recommend mask-wearing are three basic propositions:

1) That if the benefits of mask-wearing were acknowledged, this would spur a massive amount of panic buying and hoarding of rated masks, which were after all a commonly available commodity, less for protection against infectious disease and more for protection against inhaling minute particulate matter in woodworking, drywalling and other projects.

2) That the general public would not know how to properly wear any mask, whether a simple fabric mask with non-woven filters or a rated mask, in order to ensure actual protection from infection–that the masks only conferred meaningful protection if fitted correctly, if not touched constantly by hands during a period of exposure, if the mask-wearer did not touch their face otherwise, if rigorous hand-washing preceded and followed mask-wearing, and if some form of protective eyewear were also worn–and would hence not receive the expected protection from even non-rated masks.

3) That wearing masks might give people a false sense of security and prompt them to circumvent the more critical and impactful forms of social distancing and isolation that were (accurately) seen as more critical to mitigating the damage of the pandemic.

—————-

There are two basic problems with the line of reasoning embedded in those propositions. The first is that they reflect how profoundly unwilling educated professionals are to speak to democratic publics in a way that notionally imagines them as capable of understanding more complicated procedures and more complicated facts.

I know what you are saying: well, have you watched the YouTube videos of people testifying angrily about masks, in which they appear to be barely capable of understanding how to tie their own shoes, let alone how to deal with a public health emergency like this pandemic? Yes, and yes, those folks are appalling and yes they seem to represent a larger group of Americans.

The problem in part is that their behavior and the public culture of educated professionals have evolved in relational tandem with one another–and have become caught up in the expression and enforcement of social stratification. Because we expect people to be irrational and incapable of understanding, we offer partial explanations, exaggerations and half-true representations of research findings and recommended procedures and justify doing so on the grounds that it is urgent to get the outcomes we need to prevent some greater harm–to get people to behave properly, to get funds allocated, to get policies enacted. But it is not a secret that we are doing so. The news gets out that we amplified early reports of famine in order to get the aid allocated in time to make a difference, that we amplified the impact of one variable in the causation of a complicated social problem because it’s the only one we can meaningfully act upon, and so on. The people we’re trying to nudge or change or move know they’re being nudged. They know it from our affect, they know it from their own fractured understandings of the information flowing around them, they know it because it’s a habit with a long history. So they amplify their resistance in turn, even before the Republicans manipulate them or Donald Trump malevolently encourages them.

And in turn what this does is also commit experts to an increasingly unreal or inaccurate understanding of social outcomes in a way that corrodes their own expertise. The experts start to be vulnerable to manipulation by other experts who provide convenient justifying explanations for nudging or manipulation. “Make the plates half as big and it’s like magic! People eat less, obesity falls, the republic is saved! You don’t have to actually talk to people any more or try to understand them in complex terms!” Most of that thinking rests on junk modelling and Malcolm Gladwell-level simplifications once you peel it back and take a close look.

Even when the causes of behavior are in some sense simple, so many experts look away if it turns out the causes are in the wrong domain or are something they themselves are ideologically forbidden to speak to with any clarity. Take for example the fear of hoarding in the early reluctance to clearly recommend mask usage. It’s true that hoarding was a problem and it’s clear it could have been far worse still had the general public come to believe that owning a package of N95 masks was as important as stocking up on toilet paper or making a sourdough starter.

But what’s the problem there? It’s not in the least bit irrational under our present capitalist dispensation to buy up as much as you can of a commodity that you suspect is about to gain dramatically in value. Buy low, sell high is a commandment under capitalism. In our present crisis, we’ve all felt outrage at the men who fill storage units full of hand sanitizer and PPE and called them hoarders. But they’re just the down-market proles that the nightly news feels comfortable mocking. There’s been just as much up-market hoarding, but there we call it business. The President of the United States has helped fill the troughs for various hogfests with his promotion of hydroxychloroquine and so on, but beyond that, organized profiteering has unfolded on more spectacular and yet sanctified scales.

At whatever scales, if the problem is hoarding rather than altruism in a public health crisis, if the problem is someone pursuing profit instead of saving lives, then name the problem for what it is: capitalism as we know it and live it. That’s not ideology or philosophy, it’s plain empirical fact. It’s fine to say that you are facing a problem whose cause is utterly beyond your capacity to address and beyond your expertise to understand. It is not fine to avoid doing that in order to launder the problem so that it comes out being something you know how to describe and feel you can do something to affect. In this case that “something” is to offer a half-truth (masks aren’t useful) in the thought that it might impede or slow down a basically rational response that threatens your capacity to act in a crisis.

I keep saying that expertise needs to respect and emulate the basic idea of the Hippocratic Oath, most centrally: first, do no harm. It is less harmful to name a problem for what it is, even when you cannot deal with it as such and your expertise does not really extend to it. It is less harmful to tell democratic publics what you know to the extent that you know it than to try and amplify, exaggerate or truncate what you know because you’re sure (with some justification) that they will not understand the full story if you lay it out. I understand the impulses that drive expert engagements with publics, but those impulses, even with the best of intentions, end up fueling a fire that far more malicious actors have been building for decades.

Helpful Hints for Skeptics

I suppose I knew in some way that there were people whose primary self-identification was “skeptic”, and even that there were people who saw themselves as part of the “skeptic community”. But it’s been interesting to encounter the kinds of conversations that self-identified members of the skeptic community have been having with one another, and especially the self-congratulatory chortling of some such over something like the lame “hoax” of gender studies.

Skepticism is really just a broad property of many forms of intellectual inquiry and a generalized way to be in the world. Most scholars are in some respect or another skeptics, or they employ skepticism as a rhetorical mode and as a motivation for their research. Lots of writers, public figures, and so on at least partake of skepticism in some fashion. I’m a bit depressed that people who identify so thoroughly with skepticism that they see that as their primary community and regard the word as a personal identifier don’t seem to be very good at being skeptical.

So a bit of advice for anyone who aspires to not just use skepticism as a tool but to be a skeptic through-and-through.

1) Read Montaigne. Be Montaigne. He’s the role model for skepticism. And take note of his defining statement: What do I know? If you haven’t read Montaigne, you’re missing out.

2) Regard everything you think you know as provisional. Be sure of nothing. When you wake up in the morning, decide to argue that what you were sure of yesterday must be wrong. Just to see what shakes loose when you do it.

3) Never, ever, think your shit doesn’t stink. If you’re spending most of your time attacking others, regarding other people as untrue or unscientific or irrational and in need of your withering skeptical gaze, you’re not a skeptic. Skepticism is first and last introspective. You are the best focus of your own skepticism. Skepticism that is relentlessly other-directed is just assholery with a self-flattering label. Skepticism requires humility.

4) Always doubt your first impulses. Always regard your initial feelings as suspect.

5) Always read past the headline. Always read the fine print. Always read the details. Never be easy to manipulate.

6) Never subcontract your skepticism. “Skeptical community” is in that sense already a mistake. No one else’s skepticism can substitute for your own. Yes, no person is an island, and yes, you too stand on the shoulders of giants. But when it comes to thinking a problem through from as many perspectives as possible, when it comes to asking the unasked questions, every skeptic has to stand on their own two feet.

7) Never give yourself excuses. If you don’t have the time to think something through, to explore it, to look at all the perspectives possible, to ask the counter-intuitive questions, then fine: you don’t have the time. Don’t decide that you already know all the answers without having to do any of the work. Don’t start flapping your gums about the results of your skepticism if you never did the work of thinking skeptically about something.

8) Never be obsessive in your interest in a single domain or argument. If you have something that is so precious to you that you can’t afford to subject it to skepticism, if you have an idée fixe, if you’re on a crusade, you’re not a skeptic.

9) Never resist changing sides. Always be willing to walk a mile in other shoes. Skepticism should be mobile. If you have a white whale you’re chasing, you’re not a good skeptic. A good skeptic should be chasing Ahab as often as the other way round–and sometimes should just be carving scrimshaw and watching while the whale and the captain chase each other.

10) Be curious. A skeptic is a wanderer. If you’re using skepticism as a reason not to read something, not to think about something, not to learn something new, you’re not a good skeptic.

Some Work Is Hard

Dear friends, have you ever felt after reading an academic article that annoyed you, hearing a scholarly talk that seemed like nonsense to you, enduring a grant proposal that seemed like a waste of money to you, that you’d like to expose that entire field or discipline as a load of worthless gibberish and see it kicked out of the academy?

You probably didn’t do anything about it, because you’re not an asshole. You realized that a single data point doesn’t mean anything, and besides, you realized that your own tastes and preferences aren’t really defensible as a rigorous basis for constructing hierarchies of value within academia. You probably realized that you don’t really know that much about the field that you disdain, that you couldn’t seriously defend your irritation as an actual proposition in a room full of your colleagues. You realized that if lots of people do that kind of work, there must be something important about it.

Or maybe you are an asshole, and you decided to do something about your feelings. Maybe you even convinced yourself that you’re some kind of heroic crusader trying to save academia from an insidious menace to its professionalism. So what do you have to do next?

Here’s what you don’t do: generate a “hoax” that you think shows that the field or discipline that you loathe is without value and then publish it in a near-vanity open-access press that isn’t even connected to the discipline or field you disdain. This proves nothing except that you are in fact an asshole. It actually proves more: that you’re a lazy asshole. At a minimum, if you think a “hoax” paper shows low standards in an entire field of study, standards that are lower than other disciplines or fields of study, you need to publish your hoax in what that field regards as its most prestigious, carefully-reviewed, field-defining journal. If, for example, you can write an entire article that is not only dependent upon fraudulent citations but is deliberate word salad gibberish (and you carefully indicate your intentions as such to an objective third party prior to beginning the effort) and publish it in Nature or the Journal of the American Medical Association or the American Historical Review or American Ethnologist, etcetera etcetera, you may have demonstrated something, though most likely it would be that something’s gone wrong with the editors or editorial board of that prestigious, discipline-defining journal. If you publish it in a three-year-old open-access journal with no reputation that publishes an indifferent array of interdisciplinary work across a huge range of subjects and disciplines, you’ve demonstrated that your check cleared. That’s it. Oh, also that you’re an asshole. And lazy.

Let me put it this way: if there are a lot of people in your profession who have undergone the same basic tests of professional capability that you have–they have the same degree, they have functioned as teachers and as scholars in their home institutions, they have undergone tenure review and promotion review (which includes an institution-wide evaluation), they sit alongside you in committees, and so on–then if you want to deem everything they do as completely lacking in value, as programmatically valueless, you have a hard job ahead of you. Because you’re not just arguing against one or two practitioners whose ethics or capabilities you question, you’re not even just arguing against a whole field, you’re arguing that there is something deeply systematically wrong with the entirety of your profession, with all of academia.

That hard job entails being deeply and systematically informed about the field you are attacking. You have to show an expertise that qualifies you to understand what that field is and to show how and when it established its (to you, illegitimate) place in the profession. This is important both because it is a demonstration of the professionalism you are trying to preserve and because it is a sign of your ethical relationship to other professionals. You don’t just trash people because you have a flip opinion or you always do an eyeroll when that guy down the hall says something that you personally think is silly or risible. You don’t just trash an entire field because you read a bad article once or heard a dumb talk once. You don’t cherry-pick, especially if you’re allegedly a scientist or otherwise committed to rigorous standards of proof. You read and think about the most highly-cited, most field-defining, most respected and assigned, work in the field you dislike. If you’re going to do something like this, you have to do it right.

I’m not wild about evolutionary psychology as a field, for example. I’ve heard some work presented in that field that seems horribly weak by common social science standards. I have serious questions about the work of many of its most prominent representatives. I worry a lot about the bad uses that evolutionary psychological arguments are put to by activists, politicians and the general public. But if I set out to argue that the field should be in no way represented in academia, or that it is a fraud? I would spend a year or more reading evolutionary psychology carefully, I would think hard about the history and development of the field, I would examine its connections and affinities within its own discipline and other disciplines, I’d assure myself that there is almost no one who calls himself or herself an evolutionary psychologist who would pass muster for me, and then and only then would I go after the field as a scholarly act. Otherwise, I’d confine myself to some mild sniping and some targeted critique of specific published works that are relevant to some other claim I’m making. Because I can tell you already, knowing something about the field, that it’s got plenty of legitimacy inside of it. I may be critical of it, but it deserves its place at the table. It exists as a real and serious attempt to answer a series of important questions using a series of legitimate methods. It connects to many other subdisciplines like behavioral economics. If I did all that work, I’d find that at best I have a critical engagement with evolutionary psychology, not the right to argue for its expulsion from the profession. Because I know this, I value its presence and I’m content if my colleagues in psychology decide that it is a field they would like to invest resources in. If I worry sufficiently about it, I will do more work so that I earn the right to have that worry become a constitutive force in arguments about legitimacy and about resources.

That’s what being a scholar is about: knowing your shit, and treating knowledge responsibly. What’s that? It’s hard to do, and you’re busy? Then shut the fuck up and get back to work. Save it for beer talk at your next professional association meeting. If you’re going to step into the public sphere, if you’re going to make judgments of value in a faculty meeting, then it’s work. It has to be done with rigor and craft like any other scholarly work, in direct proportion to how seriously you want to be taken and how serious the critique you’re offering might be.

Trumpism and Expertise

The conventional wisdom was that the Cold War ended when the Soviet Union fell and its satellite states became independent once again.

I think actually that the Cold War just ended right now in 2016. What is it that has ended? Basically an interstate system built to systematically offload volatility and risk onto Western Europe’s former colonies while reducing uncertainty and volatility in interstate relations within the core, whether that was within Europe, between the West and the East, or between the major economic hubs of the global system. In my own current research, I’m thinking about the way that interstate relations were ritualized and formalized to express this sort of predictability between the major Cold War powers and the new states of independent Africa. I recently heard a fantastic talk by the historian Nikhil Singh that added to my thinking on this point, in which he observed that another part of this infrastructure of relations involved assertions about global collaborations towards modernity and progress, that the new temporality of the world-system stressed the relative simultaneity of modernity between states and within states, that the developing world was only just “behind”, that systems of governance and management were all at once just now modernizing, rather than the indefinitely deferred maybe-someday modernity imagined by the architects of indirect rule in modern European empires.

That’s what is ending now, after a long sickly period of invalidism since 1992 or so. All over the world it’s ending. Some places never got to see that less-risk, less-uncertainty world, because they were always tagged as the sites where proxy war would happen or state failure would be tolerated. By the early 2000s, nowhere seemed to be the site of a managed, controlled form of methodical progress. But the elaborate protocols and hierarchies of the infrastructure of Cold War relationships, with their managerial certainties about the importance of expertise and experience, survived more or less intact past the fall of the Berlin Wall. The world that area studies was meant to service, a world where dominant states had to shepherd their flocks with well-trained men and women who spoke languages, knew histories and cultures, understood the particular protocols for each state, that’s the world that’s grinding to a halt. We are now fully in what Ziauddin Sardar calls a “post-normal” world, shaped by complex feedback loops of causality and outcomes that our traditional modes of management and expertise are ill-prepared to deal with or understand.

—————-

Do you actually need to be an expert to head an executive department of the United States government (or its counterparts)? It is plain that for the last three decades, you have not needed to be, in the sense that a lack of direct expert knowledge of your area of responsibility has not meant you would not be appointed or confirmed.

Have the executive departments of the United States government operated better when their top official is a well-trained subject specialist with direct prior experience in that area of administration? I’m not sure that this holds up either. In some cases, I think too much expertise for the Cabinet officer has been a problem, in fact: the policies that get put forward in that circumstance are sometimes too circumscribed, too technocratic, too narrowly conceptualized.

So where is the real domain of expertise? Two places, I think: the undersecretaries who do the real work of leading on particular policies and specific administration, and the “deep state” that executes the will of the appointees below the level of the Cabinet (who in turn are trying to follow the direction of the Cabinet appointees, the President, and to a lesser extent Congress).

I think it’s fair to say that the Administration now taking shape is showing an unprecedented degree of hostility towards the standard post-1945 relationship between expertise and executive administration in this respect. Many of the Cabinet and non-Cabinet heads proposed so far by Trump actively disdain their own department and argue that sources of information and policy insight are better found away from any system of authenticated or trained expertise, regardless of the ideological predisposition of said experts. Given the strength of this view so far, I think we can expect that Trump’s appointees will seek to have all their immediate subordinates align with this overall distaste for the standard markers and sources of expert knowledge.

The “deep state” is another matter. Not only are many civil servants legally protected and standard systems of appointment and seniority insulated from direct political control, many of them also do work where the expertise they possess is opaque to appointed-level authorities but also required by dense interlocking bodies of statute and regulation. Reaching into the worlds where visas are granted, borders are patrolled, inspections are conducted and so on is more than the work of four or eight years. I suspect much of this work, with the requisite expertise required to carry it out, will go significantly unperturbed unless or until it is subject to a strong and persistent directive from the top. (Say, for example, to massively restrict certain kinds of visas or to aggressively deport undocumented residents in new ways, and so on.)

So here’s the question: will an active hostility to expertise in the top three or four layers of executive authority produce bad outcomes at a novel and consistent scale in the coming years?

The answer, I think, is yes, but not all at once, and not as consistently as we might be inclined to presuppose. Let’s start with one of the first issues to arise out of Trump’s approach to government, namely, his lack of interest in diplomatic protocols in calls to heads of state and in receiving his daily intelligence briefing. Here are two cases where he has announced as a matter of policy that he will not be guided in the same way as past chief executives by expert advice. What will come of that?

Why, for example, does a head of state (or his immediate executive underlings) follow the advice of protocol experts and the diplomatic corps in speaking with counterparts? Three reasons, principally. First, as part of that Cold War system of reducing net uncertainty and risk, by making sure that no miscommunication of intent takes place. Second, as part of an overall system of standardization of communication that performs a certain kind of notional equality between states as a marker of progress towards global modernity. Third, as a persuasive strategy, wherein the rhetorical, cultural and political expertise of diplomatic staff allows the head of state to produce favored outcomes through a form of knowledge arbitrage or information asymmetry, wherein the most expertly informed leader most adroitly matches or confounds the agenda of his conversational partner.

1) On uncertainty and risk. Trump has already communicated his view that better deals are made by a negotiator who is unpredictable, and his general Cabinet seems to believe similarly that the United States should no longer be seen as a reliable, predictable partner with a persistent long-term agenda that favors shared interests and overall stability, but instead as a highly contingent actor who will seek maximum national advantage in all interactions, even if that destabilizes existing agreements and frameworks. He seems to believe this approach is best carried out with a minimum of prior expert knowledge, treating all negotiating partners as similarly pursuing maximum national advantage.

Is he right or wrong about expertise here? Well, first, this is not so much about expertise as it is about philosophy, ethics and morality. It’s a view of human life. But it is also about expertise: it’s a damn fool negotiator who spurns useful information about the person he’s bargaining with, and at least some of that information is not available to intuition, no matter how good the intuition might be. Trump reads the room intuitively in only three ways, though I’ll give him credit for some real skills in this respect: he knows what ramps up or riles a crowd and how to keep adjusting to changes in the crowd’s mood, he knows instinctively how to emasculate or frighten weak men like his primary rivals, and he knows how to bluster when he’s up against someone who isn’t going to back down. He is the equivalent of the poker player Phil Hellmuth. But that style can be beaten, and it can be beaten by someone with more information who also knows that the intuitive negotiator can’t turn his style off when necessary. (I suspect this is why a lot of Trump’s actual deals have been pretty bad for him in their specifics: he can be outplayed by someone who knows the specifics better and understands Trump’s personality well enough to play at him rather than be played. His supposed unpredictability is actually pretty predictable.)

Trump may be right that the desire for stability and risk management, managed by conventional systems of academically-vetted expertise, has made the United States in particular a lumbering colossus that can be exploited, targeted, predicted, and manipulated. Much as I think academic disciplinarity in general often prefers predictability and incrementalism over idiosyncrasy and invention, despite much rhetoric to the contrary. But I suspect he will be wrong that expertise is of little importance to the negotiator, and I know that a more unpredictable and uncertain world is a more dangerous one by far. The plutocrats who make up a significant percentage of his Cabinet should be as scared of that as anyone else: “disruption” has a different meaning when there are no rules or limits on interstate relations and international institutions.

2) On the notional equality of states and the belief in progress. Experts were an important part of how we maintained both visions in the Cold War: the proposition that you had to recognize the equal-but-different character of each nation, its defining cultural practices, ways of thought, and so on, was the only equality that an unequal world could offer. Everybody got their own CIA Factbook listing, every country got its own briefing in the same format, every nation had its own scholarly literature. And every expert could produce an account–even a left-wing or dissenting account–of what progress in each notionally equal national unit might look like. Nations in this sense functioned as proxy individuals in a basically liberal framework; just as each individual was notionally entitled to have their distinctiveness recognized by psychologists, by teachers, by doctors, by civil servants, by colleagues, by law enforcement, so too was each nation attended to.

Do we need that? Well, there are other visions of progress, other possible worlds–and other discourses of equality and justice that do not rest on giving everyone their own seat at the United Nations. Some of those other visions require expertise, perhaps of a kind other than what most of the present infrastructure of expertise stands ready to supply.

The Trump Administration is not gearing up for an opposite vision of progress, however, but for its abandonment. Since the end of the Cold War, most leaders have become sheepish about progress-talk. It’s best saved for bland, vague ceremonial speeches or as part of an outraged denunciation of the enemies of progress, say, following a terrorist attack. The Trump Administration and its counterparts rising around the world aren’t interested in even that much, though I would expect a few muttered gestures of this sort at the usual times to persist.

Do we need progress and a system of notional equality between nations or societies? Hell yeah. Are experts important to it? Yes, but not as important as rethinking some of the vision underpinning progress, which experts have been strikingly bad at doing for the entire post-1945 era. Walt Rostow and his heirs, of varying ideologies, can go ahead and sit down and wait until the infrastructure gets rebuilt. The deep ideas and feelings that can sustain a vision of a better world need attention from ordinary people in their everyday lives, from philosophers and hermits, from novelists and dreamers, from tillers of the soil and computer programmers. What Trump is doing here is not first and foremost about a vulgarian assault on expertise, it’s far more fundamental and disastrous than that.

3) On the need for expertise to achieve known objectives and aims.

Here I think it’s unmistakable: hostility to expertise is stupid. That’s not hypothetical. You did not have to be an expert on the Middle East to know that the American invasion of Iraq was a dumb idea: occupations are almost always dumb ideas, and the people who claimed otherwise in 2002 by citing the US occupation of Germany and Japan after World War II were obviously dumb and/or dishonest in making that point before we ever got to their lack of expert knowledge of history. But the Bush Administration made what was always going to be something of a mess into a catastrophe by insisting that people who had expert knowledge of the Middle East, about Iraq, or even about counterinsurgency, be kept out of the planning of the invasion and the occupation. They got played again and again by unreliable allies, they provoked and motivated Iraqi resistance largely through blundering and incompetence, they wasted both blood and treasure due to inexpert fecklessness.

There are innumerable examples like this in the last sixty years of international relations, and more in the larger swath of world history. It is true enough that expertise alone does not guarantee better outcomes. Left to their own devices, without common sense or wisdom, experts will do things that are very nearly as catastrophic as what non-experts do. But the solution to the fallibility of experts is not to rubbish them altogether.

Here I think it is safe to say: bad things are going to happen if the Trump Administration is as serious as it appears to be about doing without expert advice in international and domestic policy.

————–

However, there is also this: experts of all kinds have some housecleaning to do in the wake of this election.

First, I’ll return to a point I’ve made many times on this blog. Professionals cannot claim that only they are capable of securing the quality of their services if they don’t actually self-police. Expertise lost some of its legitimacy as a force in public culture and governance through a long period of tolerance for ill-considered or badly supported guidance to policy makers and the public by some experts. I’m not talking here about research fraud, which I think we do well enough with given the difficulty of detecting it consistently, or about extremist outliers who provide patently unbalanced or unsupported advice, but instead the kind of mainstream social science and some natural science that makes overly strong claims about policy or action based on narrowly significant research findings, or is too constrained by over-specialization and so misses the forest for the trees. We have led a lot of people astray, or we have allowed poor-quality journalism or self-interested clients (like industries or particular ideologically-driven policy communities) to distort and misuse what we produce. We need to publish less and polish more, and to abandon narrow single-variable modes of explanation and intervention in dealing with genuinely complex problems. If we’re actually confident that expertise is necessary for governance and for institutional action more generally, then we should be thinking harder about how we make sure that what we deliver is of the highest quality (much as surgeons might generally see that they have a collective interest in preventing poorly-trained surgeons from killing or maiming patients). As much as possible, the dumb kind of cherrypicking favored by pundits like Ezra Klein or slick non-fiction writers like Malcolm Gladwell needs to be contested at every turn. If clients buy expertise, we should force them to buy the whole of it, and relentlessly challenge people who just cite the one thing from our work or guidance that they find flattering, sellable or instrumentally useful. We need to look at Philip Tetlock’s critique of expert political judgment and his accompanying analysis of “superforecasting” and take a lot of the diagnostic there to heart.

Second, in light of this, we have to see some portion of Trumpism’s vision of expertise as rooted in that history of exaggeration and misuse. And part of the problem is that our horrified reaction to Trumpism in turn at least can look like (and might actually be) another kind of “economic anxiety”, namely, a fear of losing one of our major markets for what we have trained to do, and thus a customer base of students looking to be trained similarly.

It would paradoxically help our shared reputation and perhaps rebuild public trust if we could acknowledge the degree of self-interest we have in the system operating as it has operated. Technocrats are as disliked as they are in part because they cast themselves as neutral arbiters who simply are providing information and knowledge without self-interest in either the service or the outcomes. The economies which support their work are frequently opaque even within insider circles, let alone to wider publics. Experts, whether they are pundits or staff members of large organizations or academics or public intellectuals, should have to disclose more clearly where they make their money, and how much the delivery of specific kinds of counsel or research outcomes to specific clients is required in order for them to get paid.

Third, we should also be more confident in a sense that Trumpism is going to be a shitshow if it actually goes ahead and cuts expertise out of the loop as it is seeming to do thus far. I understand that it’s hard to watch bad things happen to our common, shared interests as a people and a world, but it is important in some sense that this horrific experiment be run without intervention. To whatever extent possible, real experts should withhold their guidance if the people now in charge show no respect for the entire idea of expert guidance, even if the consequences are serious, and document every case where the advice of experts was not sought or was superseded if provided. No one will thank us for confronting them with such an archive later on, any more than gravely ill patients welcome being scolded by a doctor who is exasperated by a patient’s systematic failure to follow medical advice, but this is precisely the kind of documentation we’re going to need in the future to re-establish the place of expertise in public life.

The Vision Thing

We’re having a “visioning exercise” here at Swarthmore this fall. I couldn’t attend an early gathering for this purpose, and I’m teaching during the next one. This might be just as well, as I’m having to fight back a certain amount of skepticism about the effort even as I feel that the people who’ve organized this deserve a chance to achieve whatever goals they had in mind. I’ve been a part of past strategic planning and we did some of our own work through meeting with groups of various sizes and trying to find out what their “visions” for Swarthmore might be. I found those efforts to be a moderately useful way to tackle a very difficult problem, which is to get various members of an institutional community to have a meaningful conversation about their aspirations for the short and medium-term future of the organization.

I suppose my mild discomfort is with the proposition that we need a consultant to accomplish this aim. Faculty at a wide range of academic institutions tend to be skeptical about consultants on campus. With some reason. I’ve been in more than ten conversations over the last decade with consultants brought on campus for various reasons. One of them was an unmitigated disaster, from my point of view. A few have been revelatory or profoundly useful. Most have been the equivalent of slipping into lukewarm bathwater: not uncomfortable, not desired, a kind of neutral and inoffensive experience that nevertheless feels like it’s a missed opportunity.

It is too easy for faculty to slip into automatic, knee-jerk negativity about consultants. So I want to think carefully about when I might (and have) found them useful as a part of deliberation or administration in my career.

1. When the consultants have deep knowledge about an issue that has high-stakes implications for academia, where that issue is both technically specific and outside the experience of most or all faculty and existing staff, and yet where there are meaningful decisions to be made that have broad philosophical implications that everyone is qualified to evaluate. There’s no point to hiring a consultant to tell you about an issue that is so technical that no one listening can develop a meaningful understanding of it during a series of short visits. If such an issue is important, you have to hire a permanent administrator who can deal with it. If such an issue is trivial, you ignore it or hire a short-term contractor to deal with it out of sight and mind. If you’re bringing in someone to talk with the community, there has to be something for them to decide upon (eventually).

2. When the community or some proportion of it is openly and unambiguously incapable of making decisions about its future, and acknowledges as much. The classic situation is when an academic department is in “receivership” because of hostility between two or more factions within the department. At that point, someone who is completely outside the situation and who is seen as having no stakes whatsoever in its resolution is tremendously useful. In general, a consultant who is trying to mediate existing disputes can be very helpful. But this takes having concrete disputes that most parties confess have become intractable–you can’t mediate invisible, passive-aggressive disputes, because you can’t even be sure they exist and because the parties to the dispute may contest whether they are in fact involved.

3. When the consultant is using a method to study the campus and its community that by nature is hard to use if you’re an insider. I think primarily this means that if you decide you need an ethnographic examination of your own community, you look for a consultancy that can do that. More generally, any time there’s some thought that your own community is too insular, too prideful, too self-regarding, too limited in its understanding of the big picture, you might legitimately want a consultant to come in. But note that in this case the role of a consultant is more confrontational or even antagonistic: you’re hiring someone to tell you truths that you might not want to hear. This is generally not what consultants do, because they’re usually trying to be soothing and friendly and to not get the people who hired them into trouble by stirring up a hornet’s nest. In a way, you’d need some degree of internal consensus about a need for an “intervention” of some kind for this to work–some agreement that there is an understanding that is possible that is beyond the grasp of people in the community, for some reason. Your consultants would need a skill set and a set of methods suited to this sort of delivery of potentially unwelcome news. I feel as if this is the hardest kind of consultancy to buy in the present market, but maybe the kind that most possible buyers could use most.

4. When hiring the consultant is a bridge to some later group of contractors or partners that you know you’re going to need but don’t presently have any relationship to. Maybe you need a new building, maybe you’re going to create a totally new academic program, maybe you’re going to invest in a completely new infrastructure of some kind. You need the consultant even if you know the technical issues because that’s how you build new collaborative relationships with people who will eventually be service providers or who will recommend service providers to you. This is almost consultant as matchmaker.

5. When many people agree there are “unknown unknowns” surrounding the strategic situation that an institution is facing. The consultant’s job is then to probe for issues that neither the institution nor the consultant is accustomed to thinking about, and to find opportunities that would never occur in the course of everyday thinking about the current situation.

I have a modest problem when consultancy is used to defer responsibility for a decision that administrators and faculty already know they want to make, or when a consultancy is a deliberate red flag waved at some bulls, a distraction. I understand the managerial realpolitik involved here, and if faculty were totally honest about it, they’d probably admit that they have their own ways of shifting responsibility or distracting critics when they make decisions within their own units and departments. This is a minor and basically petty feeling on my part: there are good, pragmatic reasons to pay for a service that provides some protective cover when facing a decision, as long as the consultant doesn’t end up producing something so inauthentic or generic that it ends up being a provocation in its own right.

I have a bigger problem with consultancy being used as a substitute for something an institutional community should be doing on its own. Then it becomes something like an ill-fitting prosthesis being used to avoid undergoing the painful ordeal of physical therapy. A community of intelligent, well-meaning people with a good deal of communicative alignment and shared professional and cultural norms should be able to find a way to talk, think and decide collectively. If a small institution of faculty, staff, students and associated publics need continuous assistance to accomplish those basic functions, then that’s a fairly grim prognosis for the possibility of larger communities and groups that have very great degrees of difference within them being able to do the same.

Experts Say

It’s a small thing, but for me it sums up why I find the mainstream press wearisome, why I find the circle jerk between the conventional wisdom of reporters and the infrastructure of expertise to be another wretched exhibit in the case against the powers that be. The New York Times has a little companion piece today on police shootings. The headline: Why First Aid Is Often Lacking in Critical Moments After a Police Shooting.

The headline promises an explanation, a look into causality, an analysis. The first two paragraphs describe why people might be asking this question: because videos have shown police seemingly indifferent to the ultimately fatal injuries they have inflicted on black men. So we’re already two paragraphs in, hunting a lede that will respond to the headline.

Third paragraph: “Experts in policing have agreed that the way officers respond–or fail to–is often a problem, but they say such failures are not necessarily the fault of the officers, and that law enforcement agencies are starting to address them.”

Fourth paragraph: Quotation from a former police chief who is also the head of a foundation that advises police. Upshot of the quotation: police don’t have a policy on rendering first aid to people they’ve shot. But they’re getting around to it!

—————-

This is not an explanation. This does not fulfill the headline. This is not an analysis. At best one could say that a legitimate headline might be, “Why Police Claim First Aid Is Often Lacking After a Shooting”. You have to get very nearly to the bottom of the inverted pyramid of the story, where reporters are told to bury the least important information, to find another expert questioning whether policy is why people are left to die without even an effort at rendering aid, and suggesting instead that it’s due to the distance between officers and the communities they serve–a polite way to suggest it’s racism. This assertion receives an immediate two-paragraph refutation from the reporter himself, attributed vaguely to more “experts”. It’s human nature! Adrenaline keeps you from helping a person you just shot! The shooter feels traumatized! Oh, and it’s training–you’re looking at the scene as evidence. So, more policy. If only we could get the right policy.

This is a story intended to frame a consensus, to provide nice white people a nice white person thing to say, to curry favor with police, to be part of the establishment. This is putting clothes on the Emperor. This is not analysis. It is not a fulfillment of the headline that sells the story. It’s not even “Some people say and other people say”. It’s “The experts say and say and say and one person says something else and the experts say and say and say that one person is wrong. And so too, you the video-watching public, you are wrong. The experts (who happen to be the people in the videos) say you are wrong.”

But what the experts say is immediately something that can be challenged with common sense. Are police officers robots, who do nothing but what policy commands? Doesn’t that mean, among other things, that policy commands the shooting of black men who have committed no crime, based on nothing more than a feeling of ‘threat’? So here we have policy that says: do what you feel. Because the feeling you have outweighs the need to have evidence that the feeling is justified. But on first aid and its rendering? Don’t do anything unless you’re told to do it. You are a policy robot. And we’re only just now getting around to having a policy, fifty years or more into the era of legal rulings and formal police policies governing the use of deadly force. Rome wasn’t built in a day, you know: policies take time.

It would be laughable if it weren’t unspeakable. The reporter for the NYT should say, “No, really, Mr. Jim Bueermann, president of the Police Foundation, what do you really think is the explanation? Because we both know ‘there is no policy’ is a weak and contemptible answer.” Police, like every other group of working professionals, do many things in their working lives which are not precisely and specifically informed by the writ of policy. They have important formal constraints and important precise procedures, more than most. But at least some of what they have to do is about general training, general outlook, intuition and improvisation. To say that nothing happens in policing but that which policy instructs or forbids is at best the cluelessness of a practiced bureaucrat deep in the bowels of some dank cubicle farm. At worst–and most likely–it is a conscious public relations strategy intended to defer, to divert, to doublespeak and occlude past the plain evidence.

If journalists are going to explain, let them explain, with all their powers of observation and clarity at their command. If they are going to quote and mouthpiece, let them mouthpiece more than just the most favorable view of an unfavorable thing. Say the truths, all of them, hard or unsettling. If you can’t do that, don’t just dump the least establishment-friendly voice down there at the bottom, a buried lede left to bleed out alongside dying men on roadways.

On the Arrival of Rough Beasts
https://blogs.swarthmore.edu/burke/blog/2016/05/05/on-the-arrival-of-rough-beasts/
Thu, 05 May 2016 16:08:54 +0000

One of the things I find most interesting about the history of advertising is the long-running conflict within ad agencies between the “creatives” and their more quantitative, data-driven opponents. It’s a widespread opposition between a more humanistic, intuitive, interpretive style of decision-making and professional practice on one side and a more rules-driven, empirical, formalistic approach on the other.

The methodical researchers are almost always going to create advertisements and construct marketing campaigns by looking at the recent past and assuming that the near-term future will be the same. In an odd way, I think their practices have been the analog equivalent of the algorithmic operations of digital culture, built on the methodical tracking of observable behavior and the collection of very large amounts of sociological data. If you know enough about what people in particular social structures have done in response to similar opportunities, stimuli or messages, the idea goes, you’ll know what they will do the next time.

My natural sympathies, however, are with the creatives. The creatives can do two things that the social science-driven researchers can’t. They can see the presence of change, novelty and possibility, even from very fragmentary or implied signs. And they can produce change, novelty and possibility. The creatives understand how meaning works, and how to make meaning. They’re much more fallible than the researchers: they can miss a clue or become intoxicated with a beautiful interpretation that’s wrong-headed. They’re restricted by their personal cultural literacy in a way that the methodical researchers aren’t, and they’re absolutely crippled when they become too addicted to telling the story about the audience that they wish were true. Creatives usually try to cover their mistakes with clever rhetoric, so they can be credited for their successes while their failures are forgotten. But when there’s a change in the air, only a creative will see it in time to profit from it. And when the wind is blowing in a stupendously unfavorable direction, only a creative has a chance to ride out the storm. Moreover, creatives know that the data the researchers hold is often a bluff, a cover story, a performance: poke it hard enough and its authoritative veneer collapses, revealing a huge hollow space of uncertainty and speculation hiding inside the confident empiricism. Parse it hard enough and you’ll see the ways in which small effect sizes and selective models are being used to tell a story, just as the creatives do. But the creative knows it’s about storytelling and interpretation. The researchers are often fooling even themselves, acting as if their leaps of faith are simply walking down a flight of stairs.

This is only one manifestation of a division that stretches through academia and society. I think it’s a much more momentous case of “two cultures” than any opposition between the natural sciences and everything else. If you want to see this fault line somewhere besides advertising, how about in the media’s social analysis of this year’s presidential election in the United States? Glenn Greenwald and Zaid Jilani are absolutely right not only that the vast majority of analysts palpably misunderstood what was happening and what was going to happen, but that most of them are now unconvincingly trying to bluff once again at how the data makes sense, the models are still working, and the predictions are once again reliable.

The campaign analysts and political scientists who claim to be working from rock-solid empirical data will never see a change coming until it is well behind them. Up to the point of its arrival, it will always be impossible, because their models and information are all retrospective. Even the equivalent of the creatives in this arena are usually wrong, because most of them are not really trying to understand what’s out there in the world. They’re trying to make the world behave the way they want it to behave, and they’re trying to do that by convincing the world that it’s already doing exactly what the pundit wants the world to do.

The rise of Donald Trump is only the most visible sign of the things that pundits and professors alike do not understand about which way the wind is blowing. For one thing, Trump’s rise has frequently been predicted by a certain set of intuitive readers of American political life. Trump is consequence given flesh: the consequence that some observers have said would inevitably follow from a relentless disregard for truth and evidence thirty years in the making, from a reckless embrace of an avowedly instrumental and short-term pursuit of self-interest, from a sneering contempt for consensus and shared interests. He’s the consequence of engineering districts where swing votes don’t matter and of allowing big money to flood the system without restraint. He’s what many intuitive and data-driven commenters alike have warned might happen if all that continued. But the election analysts can’t think in these terms: the formal and understood rules of the game are taken to be unchanging. The analysts know what they know. The warning barks from the guard-dogs are just an overreaction to a rustle in the leaves or a cloud over the moon.

But it’s more than that. The pundits and professors who got it wrong on Trump (and who, I think, are still wrong about what might yet happen) get it wrong because the vote for Trump is a vote against the pundits and professors. The political class, including most of the Republican Party but also a great many progressives, has gotten too used to the idea that it knows how to frame the narrative, how to spin the story, how to massage the polls, how to astroturf or hashtag. So many mainstream press commenters are now trying to understand why Trump’s alleged gaffes weren’t fatal to his candidacy, and they’re stupidly attributing that to some kind of unique genius on Trump’s part. The only genius Trump has in this respect is understanding why his poll numbers grew rather than dropped after those putative gaffes. The content of those remarks was and remains secondary to his appeal. The real appeal is that he doesn’t give a shit what the media says, what the educated elite say, what the political class says. This is a revolt against us–against both conservative and progressive members of the political class. So of course most of the political class can’t understand what’s going on and keeps trying to massage this all back into a familiar shape that allows it to once again imagine being in control.

Even if Trump loses, and I am willing to think he likely will by a huge margin, that will happen only because the insurgency against being polled, predicted, dog-whistled, manipulated and managed into the kill-chutes that suit the interests of various powers-that-be has not yet coalesced into a majority, and moreover is riven internally by its own sociological divisions and divergences. But even as Trump was in some sense long predicted by the gifted creatives who sift the tea leaves of American life, let me also predict another thing: if the political class remains unable to understand the circumstances of its own being, and if it is not able to abandon its fortresses and silos, the next revolt will not be so easily contained.

Oath for Experts Revisited
https://blogs.swarthmore.edu/burke/blog/2015/09/22/oath-for-experts-revisited/
Tue, 22 Sep 2015 20:44:49 +0000

I was just reminded by Maarja Krustein of a concept I was messing around with a while back: getting people together to draft a new “oath for experts”. I had great ambitions for this idea a few years ago: to renovate our sense of what an expert ought to act like, to describe a shared professional ethic for experts that would help us explain what our value still might be in a crowdsourced, neoliberal moment. The Hippocratic Oath is at least one of the reasons why many people still trust the professionalism of doctors (and are so pointedly scandalized when it is unambiguously violated).

We live in a moment when many people believe either that they can get “good enough” expertise from crowdsourced knowledge online, or that experts are all for sale to the highest bidder and will narrowly conform their expertise to fit the needs of a particular ideology or belief system.

I think both assumptions are still more untrue than true. Genuine experts, people who have spent a lifetime studying particular issues or questions, still know a great deal of value that cannot be generated by crowdsourced systems–in fact, most crowdsourcing consists of locating and highlighting such expertise rather than spontaneously generating a comparable form of knowledge in response to a query. I still think a great many experts, academic and otherwise, remain committed to providing a fair, judicious accounting of what they know even when that knowledge is discomforting to their own political or economic interests.

Mind you, crowdsourcing and other forms of networked knowledge are nevertheless immensely valuable, and sometimes a major improvement over the slow, expensive or fragile delivery of authoritative knowledge that experts could provide in the past. Constructing accessible sources of everyday reference in the pre-Internet world was a difficult, laborious process.

It’s also undoubtedly true that there are experts who sell their services in a crass way, without much regard for the craft of research or study, to whoever is willing to pay. But this is why something like an oath is necessary, and why I think everyone who depends upon being viewed as a legitimate expert has a practical reason to join a large-scale professional alliance designed to reinvigorate the legitimacy of expertise. This is why professionalization happened during the 20th Century: groups of experts who shared a common training and craft tried to delegitimate unscrupulous, predatory or dangerous forms of pseudo-expertise and to insist on rigorous forms of licensing. I don’t think you can ever create a licensing system for something as broad as expertise, but I do think you could expect a common ethic.

The last time I tried to put forward one plank of a plausible oath, I made the mistake of picking an example that created more heat than light. I might end up doing that again, perhaps by underestimating just how many meal tickets this proposed oath might cancel. But let’s try a few items that I personally would be glad to pledge, in the simplest and most direct form that I can think of:

1) An expert should continuously disclose all organizations, groups and companies to which they have provided direct advice or counsel, regardless of whether the provision of this advice was compensated or freely given. All experts should maintain a permanent, public transcript of such disclosures.

2) An expert should publicly disclose all income received from providing expert advice to clients other than their main employer. All experts should insist that their main employer (university, think tank, political action committee, research institute) disclose its major sources of funding as well. The public should always know whether an expert is paid significantly by an organization, committee, company or group that directly benefits from that person’s expert knowledge.

3) Any expert providing testimony at a criminal or civil trial should do so for free. No expert should receive compensation, directly or indirectly, for providing expert testimony. Any expert who serves as a paid consultant for a plaintiff or a defendant should not provide expert testimony at a trial involving that client.

4) All experts should disclose findings, information or knowledge that contradicts or challenges their own previous conclusions or interpretations when that information becomes known to them in the course of their own research or analysis. Much as newspapers are expected to publish corrections, experts should be prepared to do the same.

Yes, We Have “No Irish Need Apply”
https://blogs.swarthmore.edu/burke/blog/2015/07/29/yes-we-have-no-irish-need-apply/
Wed, 29 Jul 2015 15:48:46 +0000

Just came across news of the publication of Rebecca Fried’s excellent article “No Irish Need Deny: Evidence for the Historicity of NINA Restrictions in Advertisements and Signs”, Journal of Social History (2015), via @seth_denbo on Twitter.

First, the background to this article. Fried’s essay is a refutation of a 2002 article by the historian Richard Jensen that claimed that “No Irish Need Apply” signs were rare to nonexistent in 19th Century America, that Irish-American collective memory of such signs (and the employment discrimination they documented) was largely an invented tradition tied to more recent ideological and intersubjective needs, and that the Know-Nothings were not really nativists who advocated employment (and other) discrimination against Irish (or other) immigrants.

Fried is a high school student at Sidwell Friends. And her essay is as comprehensive a refutation of Jensen’s original as you could ever hope to see. History may be subject to a much wider range of interpretation than physics, but some claims about the past are just as subject to indisputable falsification.

So my thoughts on Fried’s article.

1) Dear Rebecca Fried: PLEASE APPLY TO SWARTHMORE.

2) This really does raise questions, yet again, about peer review. 2002 and 2015 were different kinds of research environments, I concede. Checking Jensen’s arguments then would have required much more work from a peer reviewer than it would more recently, but I feel as if someone should have been able to buck the contrarian force of Jensen’s essay and poke around a bit to see whether the starkness of his arguments held up against the evidence.

3) Whether as a peer reviewer or as a scholar in the field, I think two conceptual red flags in Jensen’s essay would have made me wary on first encounter. The first is the relative instrumentalism of his reading of popular memory, subjectivity and identity politics. Most of the discipline has long since moved past relatively crude cries of “invented tradition” as a rebuke to contemporary politics or expressions of identity, and toward an assumption that if communities “remember” something about themselves, those beliefs are not arbitrary or based on nothing more than the exigencies of the recent past.

4) The second red flag, and the one that Fried targets very precisely and with great presence of mind in her exchanges with Jensen, is his understanding of what constitutes evidence of presence and the intensity of his claims about commonality. In the Long Island Wins column linked to above, Jensen is quoted as defending himself against Fried by moving the goalposts a bit from “there is no evidence of ‘No Irish Need Apply'” to “The signs were more rare than later Irish-Americans believed they were”. The second claim is the more typical sort of qualified scholarly interpretation that most academic historians offer–easy to modify on further evidence, and even possible to concede in the face of further research. But when you stake yourself on “there was nothing or almost nothing of this kind”, that’s a claim that is only going to hold up if you’ve looked at almost everything.

I often tell students who are preparing grant proposals to never ever claim that there is “no scholarship” on a particular subject, or that there are “no attempts” to address a particular policy issue in a particular community or country. They’re almost certainly wrong when they claim it, and at this point in time, it takes only a casual attempt by an evaluator to prove that they’re wrong.

But it’s not just that Jensen is making what amounts to an extraordinary claim of absence; it’s that his understanding of what presence would mean or not mean, and the crudity of his attempt to quantify presence, are also at issue. There may be many sentiments in circulation in a given cultural moment that leave few formal textual or material signs for historians to find later on. Perhaps I’m more sensitive to this methodological point because my primary field is modern Africa, where the relative absence of how Africans thought, felt and practiced from colonial archives is so much of a given that everyone in that field knows not to overread what is in the archive and not to overread what is not in the archive. But I can only excuse Jensen so far on this point, given how many Americanists are subtle and sensitive in their readings of archives. Meaning: even if Jensen had been right that “No Irish Need Apply” signs (in ads, in doors, or wherever) were very rare, a later collective memory that they were common might simply have been a transposition of things commonly said or even done into something more compressed and concrete. Histories of racism and discrimination are often histories of “things not seen”.

But of course, as Fried demonstrates comprehensively, that’s not the case here: the signage and the sentiment were in fact common at a particular moment in American history. Jensen’s rear-guard defense that an Irish immigrant male might only have seen such a sentiment once or twice a year isn’t just wrong, it raises real questions about his understanding of what an argument about “commonality” in any field of history should entail. As Fried beautifully says in her response, “The surprise is that there are so many surviving examples of ephemeral postings rather than so few”. She understands what he doesn’t: that what you find in an archive, any archive, is only a subset of what was once seen and read and said–a sample. A comparison might be to how you do population surveys of organisms in a particular area: you sample from smaller areas and multiply up. If even a small number of ads with “No Irish Need Apply” appeared in newspapers in a particular decade, the normal assumption for a historian would be that the sentiment was found in many other contexts, some of which left no archival trace. To argue otherwise–that the sentiment was unique to particular newspapers in highly particular contexts–is itself an extraordinary argument requiring very careful attention to the history of print culture, the history of popular expression, the history of cultural circulation, and so on.
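To make the multiply-up logic concrete, here is a deliberately toy sketch of the quadrat-style estimate I have in mind. Every number in it is invented for illustration; nothing here comes from Fried’s or Jensen’s actual data.

    # Quadrat-style survey estimate: count sightings in a few sampled
    # patches, then scale by the fraction of the whole that was sampled.
    # All values below are invented for illustration only.
    patches_surveyed = 10    # small areas actually examined
    patches_total = 1000     # small areas in the whole field (or archive)
    sightings = 3            # organisms (or surviving NINA postings) found

    estimate = sightings * (patches_total / patches_surveyed)
    print(estimate)          # 300.0: a handful of survivals implies many more originals

The same logic cuts against a strong claim of absence: to defend “almost none ever existed”, you would have to show that your sample covered nearly the whole field.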

Short version: commonality arguments are hard and need to be approached with care. They’re much harder when they’re made as arguments about rarity or absence.

5) I think this whole exchange is, on one hand, tremendously encouraging as a case of how historical scholarship really can have a progressive tendency, getting closer to the truth over time–and it’s encouraging that our structures of participation in scholarship remain porous enough that a confident and intelligent 9th grader can take part in that progress as an equal.

On the other hand, it shows why we all have to think really carefully about professional standards if we want to maintain any status at all for scholarly expertise in a crowdsourced world. I’ve said before that contemporary scholars sometimes pine for the world before the Internet because they could feel safe that any mistakes they made in their scholarship would have limited impact. If your work was read only by the fifty or so specialists in your own field, and over a period of twenty or thirty years was slowly modified, altered or overturned, that was a stately and respectable sort of process, and it limited the harm (if also the benefit) of any bolder or more striking claims you might make. But Jensen’s 2002 article has been cited and used heavily by online sources, most persistently in debates at Snopes.com, but also at sites like History Myths Debunked.

For all the negativity directed at academia in contemporary public debate, some surveys still show that the public at large trusts and admires professors. That’s an important asset in our lives, and we have a serious collective interest in preserving it. This is the flip side of academic freedom: it really does require some kind of responsibility, much as that requirement has been subject to abuse by unscrupulous administrations in the last two years or so. We do need to think about how our work circulates and how it invites use, and we do need to be consistently better than “the crowd” when we are making strong claims based on research that we supposedly used our professional craft to pursue. It’s good that our craft is sufficiently transparent and transferable that an exceptional and intelligent young person can use it better than a professional of long standing; that happens in science, in mathematics, and in other disciplines. It’s maybe not so good that for more than ten years, Jensen’s original claims were cited confidently as the last word of an authenticated expert by people who relied on that expertise.

The Potential Condescension of “Informed Consent”
https://blogs.swarthmore.edu/burke/blog/2014/08/13/the-potential-condescension-of-informed-consent/
Wed, 13 Aug 2014 20:11:06 +0000

Many years ago, I was involved in judging an interdisciplinary grant competition. At one point, there was an intense discussion about a proposal in which part of the research involved ethnographic work concerning illegal activity in a developing country. We were all convinced of the researcher’s skill and sensitivity, and the topic itself was unquestionably important. We were also convinced that the project was feasible and that the researcher could handle the immediate issues of safety for the researcher and the people being studied. The disagreement was about whether the subjects could ever give “informed consent” to being studied in a project that might ultimately reveal enough about how they conducted their activities to put them at risk, no matter how carefully the researcher disguised the identities of the informants.

I had to acknowledge that there was potential risk. When I teach Ellen Hellman’s classic sociological study of African urban life, Rooiyard, I point out to the students that she learned (and disclosed) enough about how women carried out illegal brewing to potentially help authorities disrupt those activities, which is one of the reasons (though surely not the only one) for the degree of suspicion that Hellman herself says she was regarded with.

But I thought this shouldn’t be an issue for the group because I believed (and still believe) that the men being studied could make up their own minds about whether to participate and about the risks of disclosure. Several of the anthropologists on the panel disagreed strongly: they felt that there was no circumstance under which these non-Western men in this impoverished society could accurately assess the dangers of speaking further with this researcher (who already knew the men and had done work with them on other aspects of their social and cultural lives). The disparities in power and knowledge, they felt, made something like “informed consent” impossible. Quite explicitly, my colleagues were saying that even if the men in the study felt like it was ok to be studied, they were wrong.

Now this was long enough ago that on many campuses, Institutional Review Boards were only just getting around to asserting their authority over qualitative and humanistic research, so in many ways our committee was providing that kind of oversight in its absence on individual campuses. Over multiple years of participating, I saw this kind of question come up only three or four times, and this was the most “IRB-like” of those conversations.

I was alarmed then and have remained alarmed at the potential for unintended consequences from this perspective. Much as we might like to blame those consequences on bureaucratic overreach or administrative managerialism, which today often functions as an all-purpose get-out-of-jail-free card for faculty, the story at least starts with wholly good intentions and a generative critique of social power.

From a great many directions, academics began to understand about forty years ago that asymmetries of power and wealth didn’t simply disappear once someone said, “Hey, I’m just doing some research”. There were a great many critical differences between an ethnographic conversation between an American professor and an African villager on one hand and a police interrogation room on the other, but those differences didn’t mean that the former situation was a frictionless meeting between totally equal people who just decided to have a nice conversation about a topic of mutual interest.

The problem with proceeding from a more self-aware, self-reflexive sense of how power pervades all social relations and interactions to a sense that everyone with less power must be protected from everyone with more power is that this very rapidly becomes a form of racism or discrimination vastly more objectionable than the harm it purports to prevent. What it leads to is a categorical assertion that entire groups of people are systematically less able to understand and assess their self-interest, less able to understand the consequences of their actions, less able to be trusted with their own agency as human beings. The difference between this view and the imperial, racist view of colonial subjects is small to nonexistent. Yes, there may be contexts like prisons or the aforementioned interrogation room where it takes specific attention to protect and recognize moments of real consent and communication, but it is important that we see those contexts as highly specific and bounded. There are moments when it is strategically, ethically, and even empirically important to defend universals, and this is one of them. Subjectivity has its differences, but the rights and prerogatives of modern personhood should be assumed to apply to everyone.

A good researcher, in my experience, knows when something’s been said in a conversation that it’s best not to translate into scholarship. Much as a good colleague knows when to keep a confidence that they weren’t directly asked to keep. We’re all sitting on things that were said to us in trust, sometimes by people who were trying to impress us or worried about what we might think, that we never use and often consciously try to forget that we heard. The problem occurs when this kind of sensitive, quintessentially situational judgment call gets translated into a rule, a committee, a structure, a dictum because we’re afraid of, and occasionally encounter, a bad researcher (or a good one who makes a bad judgment call).

I accepted my colleagues’ call in that long-ago conversation, though I thought and still think they were wrong, because it was one project being evaluated in one discussion for one organization. I don’t accept it when I think the call is being made categorically, in whatever context. If you want an example of what can happen when that sort of view of human subjects settles in to stay and becomes a dictum, consider the distinction between an American doctor being judged capable of giving informed consent to taking an experimental drug for Ebola and a Sierra Leonean doctor being judged incapable of giving that same consent.
