Swarthmore – Easily Distracted
Culture, Politics, Academia and Other Shiny Objects
https://blogs.swarthmore.edu/burke

An Actual Trolley Problem
Fri, 20 Mar 2020
https://blogs.swarthmore.edu/burke/blog/2020/03/20/an-actual-trolley-problem/

I’ve always seen a certain style of thought experiment in analytic philosophy and psychology as having limited value–say, for example, the famous “trolley problem,” which asks participants to make an ethical choice about whose life to save in a situation where an observer can make a single intervention in an ongoing event that directs inevitable harm in one of two directions.

The problem with thought experiments (and associated attempts to make them into actual psychological experiments) is that to some extent all they do is clarify what our post-facto ethical narrative will be about an action that was not genuinely controlled by that ethical reasoning. Life almost never presents us with these kinds of simultaneous, near-equal choices, and we almost never have the opportunity to reason clearly in advance of a decision about such choices. Drama and fiction, as well as philosophy, sometimes hope to stage or present these scenarios for us, either to help us understand something we did (or that was done to us) in the confusion of events, or perhaps to re-engineer our intuitions for the next time. What this sometimes leads to is a post-facto phony ideological grandiloquence about decisions that were never considered, in their actual practice and conception, as difficult, competing ethical problems. Arthur Harris wasn’t weighing difficult principles about just war and civilian deaths in firebombing Dresden; he was wreaking vengeance, plain and simple. Neoliberal institutions today frequently act as if they’re trying to balance competing ethical imperatives in a purely performative way en route to decisions that they were always going to make, that were always going to deliver predictable harms to pre-ordained targets.

But at this moment in late March 2020, humanity and its various leaders and institutions are in fact looking at an honest-to-god trolley problem, and it is crucial that we have a global and democratic discussion about how to resolve it. This is too important to leave to the meritocratic leaders of civil institutions and businesses, too important to be left to the various elected officials and authoritarian bureaucracies, too important to be deferred to just one kind of expertise.

The terms of the problem are as follows:

Strong national quarantines, lockdowns, and closure of nonessential businesses and potential gathering places in order to inhibit the rapid spread of the novel coronavirus COVID-19 will save lives in all countries, whether they have poorly developed health infrastructures, a hodgepodge of privately-insured health networks of varying quality and coherence, or high-quality national health systems. These measures will save lives not by containing the coronavirus entirely but simply by slowing the rapidity of its spread and distributing its impact on health care systems, which would be overloaded even if they had large amounts of surplus capacity. The overloading of health care facilities is deadly not just to people with severe symptomatic coronavirus infections but to many others who require urgent intensive care: at this same moment, there are still people having heart attacks, life-threatening accidental injuries, poisonings, overdoses, burns from fires, flare-ups of serious chronic conditions, and so on. There are still patients with new diagnoses of cancer or undergoing therapy for cancer. There are still people with non-COVID-19 pneumonias and influenza, still people with malaria and yellow fever and a host of other dangerous illnesses. When a sudden new pandemic overwhelms the global medical infrastructure, some of the people who die or are badly disabled–people who could have been saved–are not people with the new disease. Make no mistake: by the time this is all said and done, perhaps seventy percent of the present population of the planet or more will likely have been exposed to and been carriers of the virus, and it’s clear that some percentage of that number will die regardless of whether advanced technology and expertise are available to care for them. Let’s say it’s two percent if we can space out the rate of infection: that is still a lot of people. But let’s say it’s eight percent, including non-COVID-19 people who were denied access to medical intervention, if we don’t have strong enforced quarantines at least through the first three months in which the rate of infection in any given locale starts to rise rapidly. That’s a lot more people. Let’s say that a relatively short period of quarantine at that level–three months–followed by moderate social distancing splits the difference. A lot of people, but fewer than in a totally laissez-faire approach.

Against that, there is this: in the present global economy, with all its manifest injustices and contradictions, the longer the period of strongly enforced quarantine, the more another catastrophe will intensify, one that will destroy and deform even more lives. There are jobs that must continue to be done through any quarantine. Police, fire and emergency medical technicians must work. Most medical personnel in emergency care or hospitals must work. Critical infrastructure maintenance, all the way down to individual homes and dwellings, still has to be done–you can’t leave a leaking pipe in the basement alone for four months. Banks must still dispense money to account holders, collect interest on loans, and so on. And, as we’re all discovering, there are jobs which can be done remotely in a way that was impossible in 1965 or 1985. Not optimally from anyone’s perspective, but a good deal of work can go on in that way for some months. But there are many jobs which require physical presence and yet are not regarded as essential and quarantine-proof. No one is getting routine tooth cleaning. The barber shops are closed. Restaurants and bars are closed. Ordinary retail is closed. Amusement parks and concert halls are closed. All the people whose lives depend on those businesses will have no money coming in the door. Three months of that might be barely survivable. Ten months of that is not. Countries with strong social-democratic safety nets have some insulation against the damage that this sudden enforced unemployment of a quarter to a half of the population will do. Countries like the United States with almost no safety nets are especially exposed to that damage. But the world can’t go on that way for the full length of time it might take to save the most lives from the coronavirus pandemic. And make no mistake, this will cost lives as well. Quite literally from suicide, from sudden loss of access to shelter and health care, from sudden inability to afford the basic necessities of everyday life. But also from the loss of any future: the spiralling catastrophe of an economic downturn as grave as the Great Depression will deform and destroy a great deal, and throw the world into terrifying new disequilibrium.

It cannot be that saving the most lives imaginable from the impact of the pandemic is of such ethical importance that the destructiveness of the sudden collapse of the world economy is unimportant. It cannot be that business as usual–already deformed by inequality and injustice–must march forward over the deaths caused by the unconstrained, unmanaged spread of COVID-19. Like many people, I find this problem not at all abstract. I’m 55, I have high blood pressure, I have a history of asthma, I’m severely overweight, and when I contract the disease, I may well die. I have a mother whom I love who is almost 80, aunts and uncles whom I love who are vulnerable, and valued colleagues and friends who are vulnerable; and of course some who may die in this have no pre-existing vulnerabilities but just draw a bad card for whatever reason. But there has to be a point where protecting us to the maximum degree possible does more harm to others in a longer-lasting and more devastating way.

And this trolley problem cannot be left to the civic institutions and businesses that in the US were the first to act forcefully in the face of an ineffective and diffident national leadership. Because they will decide it on the wrong basis and they will decide it in a way that leaves all of us out of the decision. They will decide it with lawyers in closed rooms, with liability and insurance as their first concerns. They will decide it following neoliberal principles that let them use the decision as a pretext to accomplish other long-standing objectives–streamlining workforces, establishing efficiencies, strengthening centralized control.

It cannot be left to political authorities alone. Even in the best-case scenario, they will decide it in closed rooms, following the technocratic advice of experts who will themselves stick to their specialized epistemic networks in offering counsel: the epidemiologists will see an epidemic to be managed, the economists will see a depression to be prevented. In the worst-case scenario, as in the United States, corrupt leaders will favor their self-interest, and likely split differences not out of some transparent democratic reasoning but as a way to avoid responsibility.

This has to be something that people decide, and that people are part of deciding. For myself, I think that we will have to put a limit on lockdowns and quarantines, and that limit is likely to be something like June or July in many parts of the United States and Europe. We can’t do this through December, and that is not about any personal frustration with having to stay at home for that length of time. It’s about the consequences that duration will wreak on the entirety of our social and economic systems. But it is not anything that any one of us can decide for ourselves as a matter of personal conscience. We the people have to decide this now, clearly, and not leave it to CEOs and administrators and epidemiologists and Congressional representatives and well-meaning governors and untrustworthy Presidents. This needs not to be a stampede led by risk-averse technocrats and managers towards the path of least resistance, because there’s a cliff at the end of all such paths. This is, for once, an actual trolley problem: no matter what we do, some people are going to die as a result of what we decide.

Dialogue and Demand
Thu, 01 Aug 2019
https://blogs.swarthmore.edu/burke/blog/2019/08/01/dialogue-and-demand/

Why is a call for conversation or dialogue met so often with indifference or hostility?

It might seem that I’m thinking about this question because of something particular to Swarthmore, but I could just as readily be addressing Johns Hopkins (the scene of protest against the creation of a private police force on campus this past spring), Wesleyan when I was an undergraduate in the 1980s, or really higher education all the way back to the mid-1960s. It may also seem that I’m talking about a challenge that is peculiar to academia, but in fact I think this is an issue for most contemporary civic and corporate institutions.

So what am I thinking about? Roughly speaking, the kind of impasse in the life of an institution where some group of people within the institution or reliant upon it are demanding concrete, specific changes in how the institution operates, and the people with authority over the institution respond to that demand by calling for dialogue and conversation. This in turn usually infuriates or provokes the constituencies demanding changes and leads them to escalate or amplify their demands, which then antagonizes, alienates or worries other groups who might have supported the initial demands but not the intensified or more militant requests, which leads to more people calling for some form of dialogue or deliberation, which then intensifies the us-or-them divide within the institution about the way forward.

I think this general dynamic has been described very well by Moises Naim in his book The End of Power. Naim starts by asking why people who are at the top of the hierarchy in many organizations and institutions–CEOs, college and university presidents, heads of executive agencies in government, leaders of non-profit community groups, and so on–frequently report that they feel powerless to act within their organizations beyond vague, broad or gestural kinds of leadership. The former president of the University of Virginia, Teresa Sullivan, described this view well in the midst of a controversial attempt by her board to displace her when she said that she and her peers invariably had to lead towards change slowly, through “incremental buy-in”. Even that is more active than many leaders of institutions, academic and otherwise, might put it–more typical perhaps is a description of leadership as custodial, as stewardship, on behalf of collectively-determined values or a mission that derives from the inchoate massing of all ‘stakeholders’ in the institution.

Naim observes that in private, leaders and their closest advisors are often not so sanguine. Instead, they express intense frustration about what they feel they can’t do. They can’t admonish or discipline people who are technically subordinate to them but too far away in the hierarchy for that admonishment to feel proportionate or fair. They can’t instruct a division or office within their organization to straightforwardly execute a policy that the leadership wants but the division opposes. They cannot quickly dispense with rules, regulations or even “traditions” that the leader and their close associates deem to be impediments to their vision of progress. They cannot undertake new initiatives unilaterally, no matter how sound they believe their own judgment to be. They can’t reveal the truth as they understand it from facts that are private or confidential.

Naim argues that the contemporary world is being compressed between two simultaneous developments. The first is that power has gotten “big”: that it is increasingly attached to large-scale, centralized and increasingly hierarchical institutions. The second is that power is “decaying”: that it is harder and harder to wield at scale, through a centralized apparatus, and from the top of hierarchies downward as a command exercise. It is harder in part because organizations now have internal structures as well as external constraints that cause this decay. What Naim observes is that people within institutions or dependent upon their actions are being consulted, included and brought into dialogue and deliberation at the same time that they feel it is increasingly impossible for their suggestions, advice or observations to actually inform what their institutions do with power.

People know that these institutions are “big”: that the institutions do in fact routinely wield power. A college like Swarthmore year in and year out determines the academic outcomes of 1600 students; it hires, disciplines, tenures (or not) employees; it undertakes expensive construction projects with substantial economic implications; it participates in numerous collective or shared decisions across academia; it buys services and commodities; it invests and accumulates. But if you ask, it’s very hard to find anyone within the institution who ascribes the power to do any of those things directly and unilaterally to themselves or to their offices. The “big” capacity of an institution’s power comes from everywhere and nowhere. As a result, Naim suggests, there is only one form of actual influence over institutional action that most stakeholders, community members or citizens have left, what he calls “the veto”–that people can block or impede or frustrate institutional action. Not necessarily because they actually object that intensely to what is being proposed, but because it is the only action they can actually take in which their own agency is visible and important and has actual impact. In every other deliberative or active moment that people are supposedly included in and consulted about, there is no accountable tracing of whether or how their advocacy and their evidence have weighed on institutional power, and there are repeated encounters with decision-making processes that are either occluded or exclusive, and with accounts of decisions that are in no one’s hands, that are made but made from nowhere in particular. Even when you’ve been in “the room where it happens”, present at the scene where a decision was concretely made by people who have the power to decide, you often leave uncertain of what exactly happened and whether it’s going to be done as it was decided. You will also often not be allowed to speak at all about what was said, what was decided, or by whom. When people rise to block or impede decisions–to exercise the veto out of frustration–that further decays power while doing nothing to change its concentrated ‘bigness’.

———

I think the descriptive usefulness of Naim’s analysis is all around us now. The 2019 American discourse about the “deep state” and desires for various forms of authoritarian or direct-rule escape from its supposed clutches seems entirely consistent with the picture that Naim laid out in 2013. The prevalence of what is now being called “cancel culture” across social media is another manifestation of Naim’s veto, arising from people who feel that in some fashion they are being told that they are included in processes that select or identify cultural and political prominence and authority, if only through access to algorithms that rank and rate, but who feel as if the only real power they have is to reject a selection that has been made without real, transparent and accountable structures of representation and consultation.

I suspect that every working professional across several generations both feels this sense of exclusion and is aware of how they have excluded other people within their own institutional worlds. After twenty-five years of working at my present institution, I can cite innumerable examples of processes in which I have been formally included, cases where my opinion has been solicited, and cases where I’ve taken advantage of what are supposed to be always-open channels for communication to offer feedback in which the difference between my participation and my absence is impossible to discern. Sometimes I’ve seen a point I raised emerge almost entirely verbatim from one of the people involved in the earlier consultation two, five or ten years later with no perceptible connection to that earlier process. Mostly, my participation–sometimes about issues or decisions that I think are highly consequential or urgent–disappears without a trace (often simultaneously with confirmation that what I believed to be urgent was in fact urgent). Committees spend a year (or more) working on a policy that disappears into trackless invisibility afterwards–where it’s not even clear whether administrative leadership thought the policy impossible or risible, whether they earnestly meant to implement it but then the person who would have had responsibility left, or whether it was simply forgotten.

This isn’t distinctive to me. We all feel this way. Women feel this way even more. People of color feel this way even more. We all have had the experience of sounding an alarm that no one hears. Of providing advice that rests on decades of experience that seems to be ignored. Of trying to push towards an outcome that would satisfy many only to watch dismayed as an outcome that satisfies almost no one is chosen instead.

If we have power or responsibility within an institution, many of us have been on the other end. We’ve been the void that doesn’t answer, the soothing managerial assurance that all opinions are helpful, the person who absorbs and later appropriates a solution or idea that someone else advocated. And thus most of us know well why participation in a process doesn’t scale smoothly into an impact on a process. Think of job searches where you have been on the inside of the final decision but where many people provided feedback on a candidate. Some of that feedback you ignore because the person providing it didn’t see all the candidates or is missing some critical piece of information (that probably wasn’t available). Some of that you consider very carefully and respectfully but end up simply disagreeing with. Some of that you dismiss out of hand because the person consulted is someone who had to be consulted but who is widely regarded as wrong or irresponsible. Some of it you ignore because it’s expressed in a cryptic or confusing way. Some of it you ignore because you’re just really busy and the decision is already robustly confirmed by other information, so why keep discussing it?

None of which you can tell someone about. The people who made the decision can’t say:

a. You didn’t work hard enough for us to value your input equally.
b. We really did consider what you said, but here’s why we disagreed with you, specifically.
c. We asked your feedback because you’d be insulted if we didn’t but we don’t respect your views at all.
d. We had no idea what you meant and we didn’t have time to sort it out.
e. Our cup overfloweth: thank you for the advice but we turned out to have as much as we needed before we even got to you.

You can’t even say the one thing that would be comforting (we considered your advice, and disagreed) because then you have to provide an external, visible transcript of a conversation that it is unethical (or illegal, even) to transcribe and circulate.

——————-

The number of decisions that power considers impossible to transcribe or even describe has grown along with power itself. Here I think we arrive at the heart of the problem with “conversation” as an alternative to “demands”.

Take my previous example of a job search in academia. Most of the people solicited for opinions understand why there is no account of whether or how their opinion mattered, except perhaps students. Why there will be no “conversation” about the decision after it is made, and why the parties to the conversation will be limited and sequestered. But even in this fairly clear case, academic departments could probably do a better job with students. In one hiring process in the last six years, we chose a candidate who was not consistently the #1 preference of the students that we asked to participate. So I met as department chair with them afterwards to talk about how a decision like this gets made, and to give them a carefully limited version of our reasoning. I knew there was a risk involved that one or more students would indiscreetly repeat what I’d said so that it would get back to the candidate, so I didn’t share anything too private. The important thing for me was to talk frankly about how and why hiring decisions unfold as they do, including pointing out that these are decisions where typically ten to twenty candidates are very nearly evaluatively equal–if nothing else because the students who may be considering academia need to understand that about the labor market at the other end.

I also explained the legal constraints on anything connected to personnel decisions and then why most of us also find it unprofessional to talk about a colleague directly with students, most of the time. And we talked a bit more beyond that about why student impressions of faculty are sometimes perceptive and useful and sometimes simply wrong. I pointed out that I once proudly asserted decades ago that a graduate professor I knew was reticent because of the lingering effects of McCarthyism on older academics, which turned out to be the kind of thing that was ever so vaguely right as a generic guess and ever so completely wrong about the actual person, as I learned on longer acquaintance.

This is what I think a “conversation” as an alternative to a “demand” might look like. It may be that many people have conversations of the kind I just described, as ad hoc, one-off, personal and effectively private conversations that do not become a public fact about power and authority within the institution. But the public or shared or visible spaces within an institution are not routinely alive with this sort of conversation. It isn’t shared.

You could suggest that my approach in this case was managerial: that I chose to talk with the students in order to manage the possibility of their unhappiness in response to a perceived exclusion from decision-making. I think you’d be right that this is how offers of dialogue or conversation are often perceived by stakeholders who want to change the policies or culture of their institutions.

What is missing from these offers, what makes them not-really-conversations that only fuel the movement towards what Naim calls the veto, are three major attributes:

a) Too much of the subject of the conversation is veiled or off-limits.
b) The powerful do not fully disclose or describe both the constraints on their actions AND their own strong philosophical or ethical commitments.
c) When disclosed, the constraints are not up for debate; there is nothing contingent in the conversation.

In effect, what is missing is what defines a democratic public sphere. Which is an absence that nullifies the offer of a conversation or a dialogue as a part of decision-making or life in community. You can’t have a conversation that’s meaningful, trustworthy and part of a process of deliberation and decision-making in the weird kind of fractured “public” that academic institutions, civic institutions and businesses maintain, where information flows in trickles or pools in hidden grottos, in which most of the participants can’t discuss even a small proportion of what they know or disclose the tangible reality behind most decisions that have been made or are being contemplated.

———-

Title IX/sexual assault conversations in higher education are a major example of this issue, not just at Swarthmore but almost everywhere. In the case of Title IX, I am for the most part neither a petitioner nor the powerful, so I can see to some extent both why so many institutions trend towards Naim’s veto and why it is hard to have the conversations that might approach power differently.

Let’s start with what is off-limits. The specifics of the last decade of actual cases can’t be discussed in any kind of public or even private conversation within institutions. That would usually be illegal (several kinds of illegal), it would usually be an invitation to a lawsuit (several kinds of lawsuit), and it would broadly be considered to be unethical by almost everyone with an interest in the issue. And yet the generalities of those specifics are precisely what is at stake. What can the forms of centralized, hierarchical, ‘big’ power within academic institutions plausibly do about what’s in those specifics? How can anybody talk about that question without granular, particular attention to how it would work in specific cases, at the moment of the incident and its aftermaths?

That’s not all that is off-limits. Mostly the people with power over the disposition of cases or the setting of policy cannot fully disclose or discuss what they’re being told within one set of meetings: what the lawyers say about what can or cannot be said. Within another set of meetings: what trustees say about what they think should or should not be done. Within another set of meetings: what the specific managers of specific cases believe or think about those cases at various stages of investigation or judgment or therapy. Again, mostly because they can’t. In most of these cases, the legal constraints are real and specific. But all of those off-limits deliberations and conversations erupt into the public space, sometimes even as quotations that can’t be attributed or even acknowledged as quotations. So legal advice, even if it might be questionable or flawed, can’t be examined or questioned directly–it often can’t even be labeled as such. Practitioner beliefs about best practices in counselling or therapy can only be described in the vaguest ways, shorn of all the specifics that would make them valid or invalid, helpful or questionable.

The fracturing of this not-public runs all the way down to the bottom of this hoped-for conversation. No one–including student advocates–gets to a point of disclosure about the deeper fundamentals of their views on any of the issues at stake–about sexuality, about justice, about gender, about equity, about safety and freedom, about the rights and responsibilities of institutions and of those who work for and study within institutions. There is no incentive or reward to disclose if there is no real possibility of tracing how a dialogue will or will not inform decisions and policies. Nobody wants to start a conversation in which they will lay their deepest convictions out on the table if they have no sense at all of what will be done with or to those exposed beliefs and narratives after everyone leaves the table. Conversation is an intimate word, but the familiarity that even small colleges allow between students, faculty and administration is not intimate familiarity between equals who have consented to mutual exposure. What administrator would ever want to say clearly what they think and know to students who might turn around and demand their termination? What student would ever want to have a genuinely informing, richly descriptive and philosophically open conversation about sexuality, violence and justice with an administrator if the student is the only person obliged to participate in the conversation in that spirit?

The only hope for those kinds of dialogues is the classroom, precisely because the instrumental character of any given discussion is not directly fed back into institutional governance and because classrooms are semi-private and leave little visible trace to anyone who was not a direct participant. When we otherwise offer dialogue as an alternative to demands, we dramatically underimagine what it would take for dialogue to be a meaningful substitute, which is nothing short of redesigning the visibility of decisions and the flow of information in a way that no one is really ready for and perhaps that no one really wants.

Save the Children
Tue, 01 May 2018
https://blogs.swarthmore.edu/burke/blog/2018/05/01/save-the-children/

Jonathan Haidt is consistently unimpressive.

Responding in this Chronicle piece to Jeffrey Adam Sachs’ great essay for the Niskanen Center, Haidt concedes that the speech-related episodes that he and his pals get so agitated about are confined to a relative handful of highly selective institutions. The evidence for a significant shift in attitudes among all college-attending students is thin and contested.

But Haidt says that since students at elite institutions are going to be the leaders of tomorrow, we should be disproportionately worried about how they think.

This is a classic kind of fallacious reasoning in populist social science that seeks to stoke up some form of middlebrow moral panic. I first became familiar with it while researching claims by social scientists during the 1970s about the effects of “violent” cartoons on children.

The argument runs like this: children or young people are being moved away from adults on some kind of important social norm by a lack of institutional vigilance–and that it’s up to the adults to control what children and young people see, say or do so that social norms will be protected. There’s an odd kind of philosophical incoherence somewhere in there–a kind of softly illiberal vision of parenting and education that is invoked in many cases to defend adult liberalism as the social norm worth preserving–but leave that for the moment.

What’s more important in terms of social science is that this is a *prediction*: that if the external stimulus or bad practice is permitted, tomorrow’s adults will have a propensity to behave very differently in relationship to the norm being invoked. The anti-children’s television crusaders said: tomorrow’s kids will be more violent. Haidt is saying: tomorrow’s kids will have less respect for free speech.

There’s a sleight of hand going on here always. Because usually this is being said against a *contemporary* crisis about the issue at hand. The television crusaders were responding to the violence of 1968-75: the Vietnam War, protests on campus, rising rates of violent crime. But the people involved in those forms of violence *didn’t watch cartoons on Saturday morning*. They were the previous generation. The people who are most threatening to free speech in the United States today are not 20-year-old Middlebury students: they’re the President of the United States and his administration, the Congress, the people in charge. People who grew up under the norms that Haidt and Brooks etc. are trying to defend.

So it turns out that past dispensations that were allegedly friendly to the norms being defended actually produced the most serious threat to them.

And of course, it usually turns out that the prediction is wrong as well. Violence has been steadily more and more represented in mass media for children and adults since 1965; rates of violent crime have gone steadily down since the mid-1970s. You can always claim in a particular case that there’s a particular link–a mass shooter who turns out to have played Call of Duty or whatever–but that’s not how a general social scientistic prediction about a variable and a population works. If watching cartoons where bad guys got punched in the face made you more likely to be violent, that’s a prediction that there would be more interpersonal violence overall in the future. It didn’t happen. That’s not how it works. The same thing here: if free speech norms are enduring and important, I guarantee you that a bunch of kids at Middlebury standing up and turning their backs on Charles Murray does not represent a future trend that will affect a generation. Frankly, anything Middlebury or Swarthmore students do will have negligible collective impact–they are not a good marker of generational typicality.

It might even be that actually testing out the propositions embedded in a belief in free speech, rather than dully worshipping them as received orthodoxy, produces a more meaningful lifelong relationship to them. What is certain is that Haidt and others are producing a nostalgic myth about where a commitment to free speech comes from.

A New Year
Tue, 16 Jan 2018
https://blogs.swarthmore.edu/burke/blog/2018/01/16/a-new-year/

This is not the first time I’ve gone quiet on this blog simply because I was busy. Fall 2017 was in many ways the busiest semester I’ve ever had at Swarthmore: I taught two courses, I chaired my department, I became the co-director of the Aydelotte Foundation, and I sold my house and moved.

But I have gone quiet for other reasons as well. I am struggling to understand what the good of writing in public is at a time when I’m prepared to encourage others to do so.

When I began blogging in a pre-WordPress era, I was already a long-time participant in online conversation, all the way back to pre-Usenet BBSs, including the pay service GEnie. So I think I held no illusions about what were already long-standing problems in online culture: trolling, harassment, mobbing, deception, anonymity, and so on.

Nevertheless, I started a blog for two major reasons. First, to have an outlet for my own thinking, as a kind of public diary that would let me express my thinking about professional life, politics, popular culture and other issues as I saw fit, and perhaps in so doing keep myself from talking too much among friends and colleagues. I don’t think I’ve succeeded in that, because I still overwhelm conversations around me if I’m not thoughtful about restraining myself.

The second was to see if I could participate usefully in what I hoped would grow into a new and more democratic public sphere, one that escaped the exclusivity of postwar American public discussion. I think I did a good job at evolving an ethic for myself and then inhabiting it consistently. That had a cost to the quality of my prose, because being more respectful, cautious and responsible in my blogging usually meant being duller and longer in the style of my writing.

In the end, I feel as if both goals have ended up being somewhat pointless. It’s not clear to me any longer what good I can contribute as a public diarist. Much of what I think gets thought and expressed by someone else at a quicker pace, on a faster social media platform. More importantly, the value of my observations, whatever that might be, was secured through combining frankness and introspection, through raising rather than brutally disposing of open questions. This more than anything now seems quaintly out of place in social media. I feel as if it takes extreme curation to find pockets of social media commentary given over to skepticism and exploration, to collectively playful or passionate engagement with uncertainty and ambiguity.

More complicatedly, the more I am tied to my institutional histories and imagined as being a “responsible agent” within them, the harder it gets to talk frankly about what I see. It was comforting to think that almost no one read my blog and almost no one cared about it, in some sense. Now I’m only too aware that if I speak, even if I’m careful to abstract and synthesize what I’m observing, I can’t help but seem as if I am testifying about the much larger archive of real experiences and painful confidences I have been entrusted with. If I abstract too much, I find that friends and colleagues politely gaslight me: I can’t have seen what I think I’ve seen. But I can’t be more direct, and I don’t want to be. Trying to observe real stories and real problems with some degree of honesty can curdle into the settling of scores, and can tempt people–older white men especially–into a narrative of institutional life in which they are always the heroes of the story. Some stories and experiences explored honestly end up with everyone muddling through with good intent; others end up implicating everyone in certain kinds of bad faith or short-sightedness, including the people doing the exploring.

This brings me to the second goal: to be part of a new and more democratic public sphere. I have been for thirty years a person enthusiastic about the possibilities and often the realities of online culture. I am losing that enthusiasm rapidly. It’s not just that all the old problems are now vastly greater in scope and more ominous by far in the threat they can pose to participants in digital culture, but that there are new problems too. The threat to women, to people of color, to LGBTQ people, is bigger by far, but even as someone who has all sorts of protections, I find myself unnerved by online discussion, by its volatility and speed, by the ways that groups settle on intense and combative interpretations and then amplify both. I remember only dimly that for a long time I saw myself as trying to create bridges in conversations to online conservatives. With a blessed few exceptions, those conversations mostly felt like agreeing to trust Lucy to hold the football steady one more time, like being the mark in a long confidence game whose goal was to move the Overton window. What did I think I was doing talking to David Horowitz, for example? Or writing critiques of ACTA reports as if anyone writing them cared remotely about evidence or accuracy? And yet I’m not feeling that much more comfortable about online conversation with people with whom I ostensibly agree or among whom I have allegedly built up long reservoirs of trust. That sense of trust and social groundedness felt very real as recently as five years ago, but now it feels as if the infrastructures of online life could pull any foundation into wreckage in an instant without any individual human beings meaning or wanting to have that happen.

I almost thought to critically engage a recent wave of online attacks on a course being taught by my colleague here at Swarthmore. I even tried one engagement with a real person on Twitter and for a brief moment, I thought at least the points I was making were being read and understood. But the iron curtain of a new kind of cultural formation snapped down hard within three tweets, and it was difficult for me to even grasp who I had been talking to: a provocateur? an eccentric? a true believer? The rest of the social media traffic about the issue was rank with the stink of bots and 8chan-style troublemaking. Even when it was real people talking, even if I might be able to have a meaningful conversation with them in person if I happened to be in their physical presence, nothing good could come of online engagement, and many bad things could instead happen.

So I need to think anew: what is this space for? What’s left to say? Public debate, per se, is dead. Being a diarist might not be, but I will need to find ways to undam the river of my own voice.

Slow Poisons
Wed, 12 Apr 2017
https://blogs.swarthmore.edu/burke/blog/2017/04/12/slow-poisons/

A prologue first to what I’m going to say about “academic bullying”.

Considering that the word is used so broadly to discuss a wide range of procedures, practices, attitudes, and ideological positions, maybe we need a better term than “neoliberalism”. And yet, there’s often a real connection between everything referred to in that wide range, so perhaps no other word will serve us better.

I understand perfectly well, for example, how a whole series of workplace rules, practices and norms that have become common across the economy, including in academia, are connected by some common propositions or principles even when they seem ostensibly to be concerned with different issues. Among the connections are:

1) Get as much labor from workers as you can, in part by decomposing some of the barriers between civic life, home life and work life.
2) Get as much labor for free from workers as you can, in part by taking advantage of older cultures of professionalism and civic obligation.
3) Make transparency a one-way street: encourage (or compel) workers to make as much of their working lives as can be imagined visible to and recorded by management or administration, but strongly restrict the ability of workers to get a transparent accounting of what happens with the information they share or give.
4) Shift workers into contractor positions or other workplace forms that reduce or eliminate the responsibility of employers to provide benefits or any long-term commitments to those workers.
5) Treat employees as psychological/economic models or objects rather than as reasoning citizens; privilege managerial approaches that nudge, manipulate, incentivize, and placate employees rather than engage with them in complex, honest terms.

I could go on, and I have in past blog entries.

Another thing I’ve said before, however, is that the answer to neoliberal reworkings of work practices is not to fight back by reducing professional or other labor participation to the market terms that neoliberalism exalts. Meaning if we think there is such a thing as professionalism, and that we want to defend it (or restore it) in the face of neoliberal reworkings, we shouldn’t get involved in just trying to get neoliberalism to pay people off to a greater degree. It’s ridiculous, for example, that current for-profit academic publishers continue to rely on a massive amount of free labor that is not only provided by academics but is very nearly required of them in order to have a hope of accessing a tenure-track position and then retaining it. But the answer is not to compel those publishers to pay us some small share of the value we’re producing. It is to take all the value we produce and shift it to a non-profit consortial structure that resides within our professional worlds.

I ache sometimes in academic life because this should be joyous work, and for all that we could fulminate about administrations and neoliberalism and public funding, the possibilities of passion and joy, of mission and meaning, still seem graspable. Those possibilities still seem like something that could suffuse academic labor everywhere: there is nothing inevitable or required about the spread of grossly exploitative adjunct teaching in most of academia.

So here we come to the problem: neoliberalism sometimes takes hold because we ourselves, with at least some power over our world, can’t manage to imaginatively and fulsomely inhabit the alternative cultures and processes of academic labor that are at least possible. Our own sociality in faculty communities often compresses that space of better possibility from the other direction–opposite the neoliberal rules and procedures–and almost nothing humane is left in between.

Yes, we can adopt a kind of neo-Stoical response and control what we individually can control: ourselves. To be passionate and joyful and encouraging and supportive ourselves, and let the rest fall as it must. To demonstrate rather than remonstrate. This is the weakness of some calls to get away from the negativity of “critique”–they end up an example of what they hope to proscribe, a critique of critique. We would be better off showing rather than telling, better off doing than complaining about what other people do. The problem is that all professions are very much defined by their shared ethos, their common structures of collaboration and governance. A novelist or artist or entrepreneur or political consultant often operates in a workplace structure that translates individual sensibility into the surrounding environment. An academic who just does their own thing, on the other hand, is likely to feel the strong tug of faculty governance or administrative oversight in formal terms. More importantly, that kind of neo-Stoicism takes a masterful psychological disposition of some kind: a mind armored against the world, a mind with a detached openness to it all, or a kind of blithe self-regard that is undented by any negativity. (In which case, it is probably part of the problem rather than the solution.) Some of us can’t manage it at all, and some of us lose the discipline required over time. Some of us have had the possibility of that insulation stripped from us before we ever started by racial discrimination, by gender discrimination, by other forms of structured bias.

————-

So, prologue over: this is where academic bullying comes in. This research on academic bullying described in the Chronicle of Higher Education will probably surprise no one, but it’s valuable. Bullying may in some sense be almost the wrong word for what I suspect most of the respondents in the study were thinking about. That conjures up images of a tough kid demanding lunch money, or a crowd yelling mockery at a crying child. That may be how it feels at times in academia, but the circumstances and content are different. Incivility is another word the researchers used, for a slightly different range of interactions, and that too may not entirely get at what I suspect people were reporting. This is more about pervasive negativity, about how every process and decision, however minor, is mysteriously made difficult and contentious, about how and when ‘standards’ are enforced or demanded, about how blame gets assigned. About how people get trivialized and discouraged, often through indirect, unreportable interactions. Perhaps not even by things said directly to them, but by an invisible network of statements in the social infrastructure around them.

The research described in the article notes that the most common category of reports involves faculty who are tenured (both victim and perpetrator), usually between a very senior faculty member and an associate or younger full professor. The perpetrators are evenly split between men and women; so are the victims.

We saw some of this at Swarthmore in the faculty-specific results of a campus-climate survey from a while back. The response to the results has largely focused on student life and on the domain of harm that in some sense we know the most about and understand the best, along lines of race, gender and sexuality. But this wider universe is genuinely harder to grapple with. I don’t have any particularly good ideas myself about it.

Still, it sticks with me. I continue to be troubled by what the faculty respondents showed (I think we had about a 40-45% response rate, if I remember correctly, so there’s a small numbers problem here), which is that a very significant number of people said that they had been bullied or treated poorly by faculty colleagues, and that politics, scholarship and faculty governance issues were among the major instigating reasons. But also, very strongly–nearly unanimously, if I’m remembering the results–the faculty respondents said that there’s nothing that can be done about it and that they especially did not want administrative intervention. That we’re resigned to it.

That feels really screwed up to me. But the research reported in the Chronicle suggests we may be typical. I’ve been struck, in formal assessments as well as in informal visits and conversations on social media where I’ve looked into other campus cultures, by the fact that this is what a lot of faculty experience–that sense that there’s a small number of people who are cunningly abusive, who understand perfectly well what the red lines are and avoid them carefully, but who are constantly picking away at colleagues, who make most collective work difficult, who passive-aggress others, and who know how to mobilize a defensive screen if anyone gets upset with it.

I keep coming back myself to a moment from a few years back. It was hearing a senior colleague in another department disparage a tenured but more junior colleague about that person’s scholarly productivity. I realized that if this was being said to me, casually, it was likely being said by this person regularly: I am not particularly a confidant of the disparager, and the remark was as conversational as “hey, nice weather today”. I also realized that not many people would know what I know: that the person doing the disparaging is less productive as a scholar than the person being disparaged; that the person being disparaged does amazing teaching and service work; that the person doing the disparaging has not read, and is not actually interested in, the work of the disparaged person despite the fact that they’re in the same discipline. So here you have someone trying to knock down another person’s reputation over something that they don’t even care about–it’s not as if the complaining person just can’t wait to read more scholarship by the targeted person, or values what that colleague says as a scholar and intellectual.

The longer I’m in academia, the more I am aware of how much of this kind of activity is swirling around me, generated by a small number of people who know they’re never in danger of being confronted about it. It’s never worth picking a fight over in the sense that you can’t stop it–it’s legitimate expression, in some sense–and all you’ll do is become a target of the same sabotage, if you aren’t already. But it kills the joy and excitement that should crackle through our halls, the delight we should be taking in the thinking and teaching of others. That’s the issue, in the end: that we need some signs of that better world in order to stand against the onset of worse and worse ones.

The Room Where It Happens
Thu, 08 Dec 2016
https://blogs.swarthmore.edu/burke/blog/2016/12/08/the-room-where-it-happens/

It would be in a way a comfort–and also a terror–to think, “Well, that’s those people, it’s the way they think, we cannot stop them and there is no way to engage them.”

It’s true, there is no way to engage them–that is what this article shows about Lenny Pozner’s efforts to confront conspiracy theorists who deny that his child died at Sandy Hook. And there is no way to stop them through some force or power that we can muster.

What I think we could do is start to recognize our connections to conspiratorial readings as well as our alienation from them. I know some of my close colleagues are less enamored than I am with some recent scholarly writing about the dangers of the “hermeneutics of suspicion”, and I take some of their points seriously.

But I do think that we have for almost fifty years been walking ourselves into a set of practices of reading the textual and cultural worlds around us as a series of visible clues to invisible processes. In some measure because that is the truth of those cultural worlds, in multiple ways. Texts have meanings that they do not yield up to an initial reading. They affect us in ways that are deferred, delayed, or mysterious. So we are right to pursue interpretations that look for how what is visible both produces invisible outcomes and is a sign of invisible circulations in the world.

It is also the truth that we are not witness to many of the moments that control our lives, and some of those are found in “the room where it happens”: in the private chambers of political and social power. But many more are nowhere to be found, produced out of the operations of complex systems that no one controls, in the arcs that fire between sociocultural synapses. We want desperately to see into both kinds of invisibility, and so we pore over the visible as a map to them.

We know that things persist which our society says we no longer profess. Racism, sexism, bias of many kinds, are visible, but you can’t trace them easily back to the visible text of political structure or even to deliberate professions of ideology, to intentional statements made willfully by individuals about how they will dispense the powers at their command. Steve Bannon is not Bull Connor, even if they have inside of them the same awful invisible edifice.

What this leads to–leads *us* to, as well as alt-right conspiracy theorists–is an assertion from the visible of the inevitability of the invisible, of a description of invisible specificity. I have listened to colleagues tell me with a straight face what happened in the room that I was in and they were not in, and have told them that what they’ve said is not even a permissible interpretation, it’s just wrong. To no avail: the people in question just kept telling the story of non-events as fact. I have listened at full faculty meeting to one faculty member offer a description of what happened in a process of decision-making which she was not part of, only to be contradicted by five other faculty members who were part of it, and to the describer insisting that what she said was true while also insisting that she wasn’t saying that what her colleagues had said was untrue. What she said had happened while they were not in that room–but there was no room that they had not been in.

I think we could all compile examples, and we’re tempted to just say: that’s just that person being silly. Or it’s just minor. Or it’s an aberrant result of psychological imbalance.

This is letting ourselves off too lightly. It’s deep in our bones: we have battered ourselves against the shell that hides the invisible, we have produced an escalating tower of knowledge that stretches ever further into the sky without ever finding the heaven of truth, and we’re tired. We know still that there are rooms and entire worlds where it happens and we’re tired of being happened to. So we search for a crack, a clue, a fragment, a trail. We detect, we investigate. We deduce, believing in Holmesian fashion that the remaining impossibilities must be the truth. We describe things that never happened in the belief that they must have, and we attribute things that happened in immanence, in the air that surrounds us and chokes us, to specific agents and specific locations, to the devils we can name.

We, we, we. And them. Not all invisibilities are alike, and the work of inventing some of them is, as Pozner puts it beautifully in working through his own trauma, smothering everything human. It is the same paradox of witchcraft-finding in southern Africa: the quest to locate and confront evil becomes the evil it sets out to fight. But we are not homo evidentius, fighting an alien subspecies of homo conspiratorius. This is another strain of an illness that we also suffer from.

The Vision Thing https://blogs.swarthmore.edu/burke/blog/2016/10/11/the-vision-thing/ Tue, 11 Oct 2016

We’re having a “visioning exercise” here at Swarthmore this fall. I couldn’t attend an early gathering for this purpose, and I’m teaching during the next one. This might be just as well, as I’m having to fight back a certain amount of skepticism about the effort even as I feel that the people who’ve organized it deserve a chance to achieve whatever goals they had in mind. I’ve been a part of past strategic planning, where we did some of this work ourselves by meeting with groups of various sizes and trying to find out what their “visions” for Swarthmore might be. I found those efforts to be a moderately useful way to tackle a very difficult problem: getting various members of an institutional community to have a meaningful conversation about their aspirations for the short and medium-term future of the organization.

I suppose my mild discomfort is with the proposition that we need a consultant to accomplish this aim. Faculty at a wide range of academic institutions tend to be skeptical about consultants on campus. With some reason. I’ve been in more than ten conversations over the last decade with consultants brought on campus for various reasons. One of them was an unmitigated disaster, from my point of view. A few have been revelatory or profoundly useful. Most have been the equivalent of slipping into lukewarm bathwater: not uncomfortable, not desired, a kind of neutral and inoffensive experience that nevertheless feels like a missed opportunity.

It is too easy for faculty to slip into automatic, knee-jerk negativity about consultants. So I want to think carefully about when I might find (and have found) them useful as a part of deliberation or administration in my career.

1. When the consultants have deep knowledge about an issue that has high-stakes implications for academia, where that issue is both technically specific and outside the experience of most or all faculty and existing staff, and yet where there are meaningful decisions to be made that have broad philosophical implications that everyone is qualified to evaluate. There’s no point to hiring a consultant to tell you about an issue that is so technical that no one listening can develop a meaningful understanding of it during a series of short visits. If such an issue is important, you have to hire a permanent administrator who can deal with it. If such an issue is trivial, you ignore it or hire a short-term contractor to deal with it out of sight and mind. If you’re bringing in someone to talk with the community, there has to be something for the community to decide upon (eventually).

2. When the community or some proportion of it is openly and unambiguously incapable of making decisions about its future, and acknowledges as much. The classic situation is when an academic department is in “receivership” because of hostility between two or more factions within the department. At that point, someone who is completely outside the situation and who is seen as having no stakes whatsoever in its resolution is tremendously useful. In general, a consultant who is trying to mediate existing disputes can be very helpful. But this takes having concrete disputes that most parties confess have become intractable–you can’t mediate invisible, passive-aggressive disputes, because you can’t even be sure they exist and because the parties to the dispute may contest whether they are in fact involved.

3. When the consultant is using a method to study the campus and its community that by nature is hard to use if you’re an insider. I think primarily this means that if you decide you need an ethnographic examination of your own community, you look for a consultancy that can do that. More generally, any time there’s some thought that your own community is too insular, too prideful, too self-regarding, too limited in its understanding of the big picture, you might legitimately want a consultant to come in. But note that in this case the role of a consultant is more confrontational or even antagonistic: you’re hiring someone to tell you truths that you might not want to hear. This is generally not what consultants do, because they’re usually trying to be soothing and friendly and to not get the people who hired them into trouble by stirring up a hornet’s nest. In a way, you’d need some degree of internal consensus about a need for an “intervention” of some kind for this to work–some agreement that there is an understanding possible that is, for some reason, beyond the grasp of people in the community. Your consultants would need a skill set and a set of methods suited to this sort of delivery of potentially unwelcome news. I feel as if this is the hardest kind of consultancy to buy in the present market, but maybe the kind that most potential buyers could use the most.

4. When hiring the consultant is a bridge to some later group of contractors or partners that you know you’re going to need but don’t presently have any relationship to. Maybe you need a new building, maybe you’re going to create a totally new academic program, maybe you’re going to invest in a completely new infrastructure of some kind. You need the consultant even if you know the technical issues because that’s how you build new collaborative relationships with people who will eventually be service providers or who will recommend service providers to you. This is almost consultant as matchmaker.

5. When many people agree there are “unknown unknowns” surrounding the strategic situation that an institution is facing. A consultant can probe for issues that neither the institution nor the consultant is accustomed to thinking about, and can look for opportunities that would never surface in the course of everyday thinking about the current situation.

I have a modest problem when consultancy is used to defer responsibility for a decision that administrators and faculty already know they want to make, or when a consultancy is a deliberate red flag waved at some bulls, a distraction. I understand the managerial realpolitik involved here, and if faculty were totally honest about it, they’d probably admit that they have their own ways of shifting responsibility or distracting critics when they make decisions within their own units and departments. This is a minor and basically petty feeling on my part: there are good, pragmatic reasons to pay for a service that provides some protective cover when facing a decision, as long as the consultant doesn’t end up producing something so inauthentic or generic that it becomes a provocation in its own right.

I have a bigger problem with consultancy being used as a substitute for something an institutional community should be doing on its own. Then it becomes something like an ill-fitting prosthesis used to avoid the painful ordeal of physical therapy. A community of intelligent, well-meaning people with a good deal of communicative alignment and shared professional and cultural norms should be able to find a way to talk, think and decide collectively. If a small institution of faculty, staff, students and associated publics needs continuous assistance to accomplish those basic functions, then that’s a fairly grim prognosis for the possibility that larger communities and groups with far greater degrees of internal difference will be able to do the same.

Dramatic Arc https://blogs.swarthmore.edu/burke/blog/2016/04/20/dramatic-arc/ Wed, 20 Apr 2016

Me at the beginning of a class meeting where I’ve assigned one of my favorite books.

Me realizing that maybe a quarter of the class read it with any real attention despite the fact that I already said it’s going to be an essay question on the final.

Me inside as we wind down the class.

A Chance to Show Quality https://blogs.swarthmore.edu/burke/blog/2016/03/29/a-chance-to-show-quality/ Tue, 29 Mar 2016

Romantic ideals of originality remain deeply embedded in how we recognize, cultivate and reward merit in most of our selective systems of education, reputation and employment. In particular, we read for the signs of that kind of authentic individuality in writing that is meant to stand in for the whole of a person. Whether it’s an essay for admission to college, a cover letter for a job, an essay for the Rhodes or Fulbright, or an application for research funding from the Social Science Research Council or the National Science Foundation, we comb for signs that the opportunity-seeker has new ideas, has a distinct sensibility, has lived a life that no one else has lived. Because how else could they be different enough from all the other worthies seeking the opportunity or honor so as to justify granting them their desires?

Oh, wait, we also want to know, almost all of the time, whether the opportunity-seeker is enough like everyone else that we can relate their talents, ideas, capabilities, plans and previous work to the systems which have produced the applicants. We want assurances that we are not handing resources, recognition and responsibility to a person so wholly a romantic original that they will not ever be accountable or predictable in their uses. We want to know that we are selecting for a greatness that we already know, a merit that we already approve of.

This has always been the seed that grows into the nightmare of institutions, that threatens to lay bare how much impersonality and distance intrudes upon decisions that require a fiction of intimacy. Modern civic institutions and businesses lay trembling hands on their bankrolls when they think, however fleetingly, that there is a chance that they’re getting played for fools. That they are dispensing cheese to mice who have figured out what levers to push. That when they read the words of a distinctive individual, they are really reading the words of committees and advisors, parents and friends. That they are Roxane swooning over Christian rather than Cyrano, or worse, that they are being catfished and conned.

The problem is that when we are making these choices, which in systems of scarcity (deliberately produced or inevitably fated) must be made, we never really decide what it is that we actually value: unlikeness or similarity, uncertainty or predictability, originality or pedigree. That indecision more than anything else is what makes it possible for people to anticipate what the keepers of a selective process will find appealing. Fundamentally, that boils down to: a person with all the qualifications that all other applicants have, and a personal experience that no one else could have had but that has miraculously left the applicant even more affirmed in their qualifications. Different in a way that doesn’t threaten their sameness.

I’ve been involved in a number of processes over the years where those of us doing the selecting worried about the clear convergence in some of the writing that candidates were doing. We took it to be a sign that some candidates had an advantage that others didn’t, whether that was a particularly aware and canny advisor or teacher, or it was some form of organized, institutional advice. I gather that there are other selective institutions, such as the Rhodes Foundation, that are even more worried, and have moved to admonish candidates (and institutions) that they may not accept advice or counsel in crafting their writing.

The thing is, whenever I’ve been in those conversations, it’s clear to me that the answer is not in the design of the prompt or exercise, and not in the constraints placed on candidates. It’s in the contradictions that selective processes hold inside themselves, and in the steering currents that tend to make them predictable in their tastes. When you try to have it all, to find the snowflake in the storm, and yet also prize the snowfall that blankets the trees and ground with an even smoothness, you are writing a human form of algorithm, you are crafting a recipe that it takes little craft to divine and follow. The fault, in this case, lies in us, and in our desires to be just so balanced in our selection, to stage-manage a process year in and year out so that we get what we want and yet also want what we get.

Maybe that was good enough in a time with less tension and anxiety about maintaining mobility and status. But I suspect the time is coming where it will not be. Not because people seek advantage, but because anything that’s predictable will be something relentlessly targeted by genuine algorithms. Unpredictability is never a problem for applicants or advisors, always for the people doing the selection or the grading or the evaluation. If you don’t want students to find a standard essay answer to a standard essay prompt, you have to use non-standard prompts. If you don’t want applicants to tell you the very moving story of the time they performed emergency neurosurgery on a child in the developing world using a sterilized safety pin and a bottle of whisky, you have to stop rewarding applicants who tell you that story in the way that has previously always gotten your approval. If what we want is genuine originality, the next person we choose has to be different from the last one. If what we want is accomplished recitation of training and skills, then we look for the most thorough testing of that training. When we want everything, it seems, we end up with performances that very precisely thread the needle that we insistently hold forth.

Technologies of the Cold War in Africa (History 90I) Syllabus https://blogs.swarthmore.edu/burke/blog/2016/01/13/technologies-of-the-cold-war-in-africa-history-90i-syllabus/ Wed, 13 Jan 2016

I saw last year that some smart academics were using Piktochart to design more graphical, visual syllabi, so I took a stab at it.

[Embedded Piktochart syllabus graphic]
