A Declaration of Dependence

When in the course of human events, it becomes necessary for one orange-faced man to dissolve the legal restrictions which he finds inconvenient and to assume among the powers of the earth, the permanently dominant station to which the Laws of Nature and of Nature’s God entitle the orange-faced man and his political party, an indecent indifference to the opinions of mankind requires that they should incoherently tweet the causes which impel them to ignore, trifle with or mock the law.

We hold these lies to be self-evident, that billionaires are entitled to pay no taxes and the Russians are good guys whom the United States should generally obey, that MAGA-hat wearing white men are endowed by their Creator with certain unalienable Rights, that among these are protection of Mediocrity, Things Being Just the Way They Want Them and A Life Free From Dealing With Brown People, Women and Others They Do Not Like. That to secure these rights, Governments are corrupt among Men, deriving their unjust powers from the inattention of a tweeting scumbag, — That whenever any Form of Government becomes threatening of these ends, it is the Right of the MAGAs and billionaires to ignore it or corrupt it, and to let Government wheeze out its last breaths in some miserable alleyway, undercutting its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Downfall and Misery along with everyone else even though they don’t think it’s going to be a problem for them.

A lack of prudence, indeed, will dictate that Governments long established should be let decay and degenerate for short-term greed and fear; and accordingly all experience hath shewn that mankind are less disposed to suffer when they take the trouble to have good governments, while evils are accumulable by abolishing the governments which have suppressed them. But when a long train of reasonable regulations and social reforms, pursuing invariably the same Object evinces a design to actually make the world better, it is their right, it is their duty, to sabotage and derogate such Government, and to remove all Guards for their future security.

Such has been the patient sufferance of this independent Nation; and such is now the necessity which constrains the MAGA and billionaires to shit all over their former Systems of Government. The history of the first black President is a history of repeated injuries and usurpations due to his highly moral character, generally intelligent approach to governance and centrist moderation, all having in direct object the maintenance of incremental social reform and some degree of constitutional liberty. To mock this, let fake bullshit be submitted to a simultaneously horrified and bemused world.

The former black president has given his Assent to Laws, the most wholesome and necessary for the public good.

He has urged his Governors to pass Laws of immediate and pressing importance, but has also negotiated and offered deals till his Assent should be obtained; and when ignored by do-nothing Republicans, he has mostly accepted the impasse with mild frustration and hope for eventual agreement.

The former black president has refused to pass other Laws for the exclusive accommodation of white men and billionaires, unless those people would accept the right of Representation in the Legislature of everyone else, a right inestimable to them because it is common to tyrants and white supremacists only.

The former black president has called together legislative bodies at places normal and close to the depository of their Public Records, for the sole purpose of reluctantly accepting their lack of compliance with his recommendations and plans.

In every stage of these Oppressions We have Petitioned for Redress in the most arrogant and obstructionist terms: Our repeated Petitions have been answered only by repeated patience and attempts to understand what is motivating us to be such assholes. A Prince, whose character is thus marked by every act which may define a Democratic ruler of a constitutional republic, is unfit to be the ruler of MAGA-hat wearing white men and billionaires.

Nor have We been wanting in attention to our Russian brethren. We have welcomed from time to time attempts by their dictator to extend an unwarrantable jurisdiction over us and our elections. We have reminded them of the circumstances of our desire to build hotels there and of moving on her like a bitch. We have appealed to their kindred greed and lack of interest in freedom or human rights, and we have conjured them by the ties of our common intolerances to totally encourage these usurpations, which would inevitably strengthen our connections and correspondence. They have been totally delighted to hear the voice of injustice and of consanguinity, partly because they used a bunch of trolling operations to make it speak out. We must, therefore, acquiesce in the necessity, which announces our Subjugation, and hold them, as we hold other countries with dictators, Totally Great Friends With Whom We Have Perfect Conversations, and Hereby Abandon All That Shit About Having Our Own Country with a Constitution.

We hereby unseriously and incoherently tweet and ramble that these United States of America are basically over and Committed to Doing Whatever Putin Wants and Hey Whatever Other Dictators Ask For During Phone Calls.


The Concern Troll In Everyone

What we commonly call “concern trolling” in online discussion has far deeper rhetorical roots in the public sphere. In many ways, it’s a style that was honed to near-perfection by centrist liberal intellectuals and academics in the 1960s, the sort that were brilliantly vivisected by Garry Wills in Nixon Agonistes. These were the men and women (mostly men) who were in their own view outside of and beyond ideology. They might favor a particular policy or course of action, but most of them claimed to be making a series of independent, reasoned judgments, taking each issue on its merits, according to its facts. This was not just a personal preference. They argued that this approach was the essence of academic professionalism, of expert participation in public debate, and even the only right way to be a proper citizen in democratic and community deliberation. Journalists (in the U.S., not so much elsewhere) also commonly adopted this basic posture, that they were obliged to have no priors or assumptions, to treat everything they covered in a neutral, dispassionate manner that deferred to the facts of a given event or issue.

Intellectuals, scholars, experts, pundits or journalists who had strong or distinctive points of view were either bracketed off as a splinter movement (“New Journalism”) or disparaged as ideologues who approached issues and problems with a prior politics that led to a selective, biased understanding of the issues at hand and the facts of the matter. Intellectual and cultural historians, literary scholars and others who study the history of public debate in the United States with a long view are aware of just how different the resulting structure of public discourse in American life was after 1960 from the period between 1880 and 1950. It really was a shift: in the prior era of high modernism, intellectuals and experts didn’t hesitate to advocate policies and programs directly, for the most part–for good and ill.

Despite the supposed rejection of “objectivity” as a goal or evaluative marker in the social sciences, many experts and scholars have held on to the performative affect of the no-ideology intellectual continuously since the 1970s. In public writing, that increasingly meant that intellectuals and pundits only advocated ideas and proposals that came from someone else, from research or data or pilot programs etc. that resulted from the direct agency of some other expert, some other civic or institutional leader. Somewhere else and someone else. The pundit became a window to some distant light, transparent to its illumination but having no direct responsibility for incandescence. And increasingly, the light was directed towards those the pundit considered in darkness, in a gesture of olympian magnanimity. “Were I you,” he (almost always a he, almost always white) says, “I would look seriously at this finding, this study, this other expert, this situation. For were I you, I would act differently than you do, once I saw this finding, this study, this other expertise. Indeed, if you only will live in the light I cast in your direction, you might in fact be lucky enough to be me. That’s what I would be, were I you: I’d be me. Unbiased, generous, unhampered by ideology, just making up my own mind about everything rationally and without any priors right until the moment I come across it.”

Once upon a time, that noxious, phony performance was substantially confined to the op-ed pages of the New York Times and other major dailies, to the televisual punditry, to a small sector of public-facing academic social science, to a very particular subset of civic organizations. Bit by bit, however, it slipped into the wild, and now it infects our everyday public discourse across social media, public culture, civic institutions and conventional mainstream journalism.

In practice? Some examples of what this dissemination of deferral means:

1) Most of the public conversation between educated elites about electoral preferences studiously avoids the responsibility of actually having electoral preferences of one’s own that arise out of one’s own values and commitments. Instead, arguments about various candidates and issues are deferred onto some other “them” who have actual preferences who need to be appeased, mobilized or enlisted. So, for example, few people in these public conversations directly advocate for Joe Biden’s candidacy because they really like his positions or his specific qualities as a leader, but instead argue that Biden must be favored because he is the favored candidate of social groups who are not present in the conversation. Their (alleged) reasons for favoring him cannot be debated or discussed, only described, because those who hold them are not present to speak for themselves, and even if they were present, they are presumed to be unwilling to discuss their reasoning.

2) Much of the leadership within civic and academic institutions and often businesses as well advocates for particular policies or changes not because the leadership directly believe in or support such policies, but because the policies are “best practices” or “shared norms” that originated somewhere else. There is a quality of immaculate conception in these kinds of explanations, in fact: the policies adopted often have no specific place of birth and no initial author, but seem instead to have been adopted everywhere at once but nowhere also, with no sense that they are rivalrous to or critical of the norms they are set to displace.

3) Technocratic politicians and Silicon Valley companies (mostly) cleave to policies and products that have been produced almost as if through a competitive bidding process: they describe a problem they have identified and that they seek to provide some sort of ameliorating solution for. The advocated solution doesn’t arise out of the political values or philosophy of the leader and their staff: the root value is to solve problems, what Evgeny Morozov calls “solutionism”. The solution is framed as neutral, as containing no values or agenda. (And hence, opposing the solution is held to be a corrupt investment in the continuation of the initial problem, because why else would you stand in the way of fixing a problem?) Once again, the people responsible for implementing or selling solutions are placed a sanitary step away from the conceptualization of and advocacy for a solution.

I think this is all tied to the much more abstract, multivalent erosion of 19th and 20th Century conceptions of publics and citizenship in the direction of the constellation of ideas and practices that we often call “neoliberalism”. The advantages of this deferral of direct responsibility for advocacy are obvious for individuals and institutions. David Brooks or Bret Stephens can throw up their hands and say that they’re not responsible for gross errors of fact or tendentious constructions of argument, because they’re only serving as a messenger for what is said and claimed by others that they believe their readers should know about. Institutions can shield themselves against risk and liability if they are only conforming to or compliant with decisions and practices adopted elsewhere. The failure of solutions can be blamed on the subcontractor that supplied them or simply on the intractability of the problem itself without putting any values or beliefs in danger.

The costs I think are also plain. Chief among them is that this deepens the isolation of elites from a wider civic culture, because all of these moves position elites and their institutions as the chess players on a board populated by other groups, other people, other communities, while bearing no responsibility either for setting the rules of the game or for the outcomes of play. We can scarcely begin to think successfully about what other worlds are possible when we absent ourselves and the direct power we actually have from the world we actually inhabit.


Homeland Insecurity and American Terrorism

Let’s stop talking about guns and gun control.

Guns and gun control in America function as an intensifying signifier of cultural division. They amplify the idea that this historical moment is a division of rural white ‘traditional’ communities vs. urban diverse educated communities. For every responsible rural gun owner who has a rifle as a tool, for every home owner who keeps a gun out of a belief in self-protection, for every collector who is just interested in guns, using the gun itself as the totemic object that we believe is causing these shootings is a confirmation of an antagonism, of a sociocultural distance, of an us and them. The more we understand, accurately, the cultural depth of America’s attraction to guns, the more helpless we feel–because deep cultural formations are exceptionally hard to instrumentally change from above.

Yes, gun ownership in this country is a fetish; yes, strictly controlling the availability of weapons that are intended to kill many people as fast as possible would be transformative. Yes, guns in the home lead to many tragedies, from toddlers shooting family members by accident to depressed men committing suicide because of the availability of a convenient means for doing it. Yes, we need sane gun laws–licensing and permits, mandatory training, restrictions on manufacture, sale and ownership, and so on.

But guns are a distraction from what’s really going on.

What’s really going on is a slow-motion uprising by white men who feel lost, enraged and confused by the gradual ebbing of their unchallenged social, political and economic power. Most of the men who’ve joined this uprising and committed terrorist acts in its name don’t even quite know or reflexively understand the deeper sources of their rage and alienation. They’ve nearly invariably beaten or hurt women in their lives, they’ve felt confused or wounded by the world around them. They’ve sometimes understood themselves to be mentally unhealthy or lost. They’ve cut themselves off from social ties or never been able to make them in the first place.

But most of them are not mentally ill as we commonly understand it, any more than is an alienated young man from London or Berlin or Detroit or Istanbul or Kano who goes to join ISIS or Boko Haram. Those insurgents are leaving because they feel there is nothing for them where they are, because they feel powerless or lost, because they want to make a new world that is also somehow an old and sanctified world where they are the powerful ones. They often know almost nothing about the Qur’an or Islam.

So it is with America’s mass shooters: virtually all of them white and male, sensing that the safety of a world where mediocre, ordinary white men could still count on being bosses, on being in charge, is fading away.

This goes all the way back to James Huberty in San Ysidro in 1984. We are not in a fight with guns, any more than other societies trying to cope with insurgencies are in a fight with guns. The guns are a necessary but not sufficient condition of those insurgencies, but ordinary people don’t get blown up by an IED while travelling by bus because there are IEDs. They get blown up because insurgents and terrorists are mining the roadways. The IEDs are what allow buses to be blown up and many people to die: if insurgents just had butter knives, they’d kill very few people. But here, in some sense, the mantra of the gun owner is almost right: without the terrorist, the gun sits on the shelf, the IED never gets made.

Ordinary Americans don’t get shot because there are guns, and they don’t get shot because somehow we have the world’s worst mental health crisis. They get shot because there is a decentralized, distributed movement of white men who want their supremacy back. It’s always been visible to us but it is more apparent than ever now because there are open terrorist sympathizers in the White House, the Senate and the House of Representatives, in governors’ mansions and state legislatures. It is time to call them what they are and to understand truthfully that we the people are under attack.


Dialogue and Demand

Why is a call for conversation or dialogue met so often with indifference or hostility?

That I am thinking about this question might feel peculiar to Swarthmore, but I could just as readily be addressing Johns Hopkins (the scene of protest against the creation of a private police force on campus this past spring), Wesleyan when I was an undergraduate in the 1980s, really higher education all the way back to the mid-1960s. It may seem that I’m talking about a challenge that is peculiar to academia, but in fact I think this is an issue for most contemporary civic and corporate institutions.

So what am I thinking about? Roughly speaking, the kind of impasse in the life of an institution where some group of people within the institution or reliant upon it are demanding concrete, specific changes in how the institution operates and the people with authority over the institution respond to that demand by calling for dialogue and conversation. This usually in turn infuriates or provokes the constituencies demanding changes and leads them to escalate or amplify their demands, which then in turn antagonizes, alienates or worries other groups who might have supported the initial demands but not the intensified or more militant requests, which leads to more people calling for some form of dialogue or deliberation, which then intensifies the us-or-them divide within the institution about the way forward.

I think this general dynamic has been described very well by Moises Naim in his book The End of Power. Naim starts by asking why people who are at the top of the hierarchy in many organizations and institutions–CEOs, college and university presidents, heads of executive agencies in government, leaders of non-profit community groups, and so on–frequently report that they feel powerless to act within their organizations beyond vague, broad or gestural kinds of leadership. The former president of the University of Virginia, Teresa Sullivan, described this view well in the midst of a controversial attempt by her board to displace her when she said that she and her peers invariably had to lead towards change slowly, through “incremental buy-in”. Even that is more active than many leaders of institutions, academic and otherwise, might put it–more typical perhaps is a description of leadership as custodial, as stewardship, on behalf of collectively-determined values or a mission that derives from the inchoate massing of all ‘stakeholders’ in the institution.

Naim observes that in private, leaders and their closest advisors are often not so sanguine. Instead, they express intense frustration about what they feel they can’t do. They can’t admonish or discipline people who are technically subordinate to them but too far away in the hierarchy for that admonishment to feel proportionate or fair. They can’t instruct a division or office within their organization to straightforwardly execute a policy that the leadership wants but the division opposes. They cannot quickly dispense with rules, regulations or even “traditions” that the leader and their close associates deem to be impediments to their vision of progress. They cannot undertake new initiatives unilaterally, no matter how sound they believe their own judgment to be. They can’t reveal the truth as they understand it from facts that are private or confidential.

Naim argues that the contemporary world is being compressed between two simultaneous developments. The first is that power has gotten “big”: that it is increasingly attached to large-scale, centralized and increasingly hierarchical institutions. The second is that power is “decaying”: that it is harder and harder to wield at scale, through a centralized apparatus, and from the top of hierarchies downward as a command exercise. It is harder in part because organizations now have internal structures as well as external constraints that cause this decay. What Naim observes is that people within institutions or dependent upon their actions are simultaneously being consulted or included or brought into dialogue and deliberation at the same time that they feel it is increasingly impossible for their suggestions, advice or observations to actually inform what their institutions do with power.

People know that these institutions are “big”: that the institutions do in fact routinely wield power. A college like Swarthmore year in and year out determines the academic outcomes of 1600 students; it hires, disciplines, tenures (or not) employees; it undertakes expensive construction projects with substantial economic implications; it participates in numerous collective or shared decisions across academia; it buys services and commodities; it invests and accumulates. But if you ask, it’s very hard to find anyone within the institution who ascribes the power to do any of those things directly and unilaterally to themselves or to their offices. The “big” capacity of an institution’s power comes from everywhere and nowhere. As a result, Naim suggests, there is only one form of actual influence over institutional action that most stakeholders, community members or citizens have left, what he calls “the veto”–that people can block or impede or frustrate institutional action. Not necessarily because they actually object that intensely to what is being proposed, but because it is the only action they can actually take in which their own agency is visible, important and has actual impact. In every other deliberative or active moment that people are supposedly included in and consulted about, there is no accountable tracing of whether or how their advocacy and their evidence has weighed on institutional power, and there are repeated encounters with decision-making processes that are either occluded or exclusive, and with accounts of decisions that are in no one’s hands, that are made but made from nowhere in particular. Even when you’ve been in “the room where it happens”, present at the scene where a decision was concretely made by people who have the power to decide, you often leave uncertain of what exactly happened and whether it’s going to be done as it was decided. 
You will also often not be allowed to speak at all about what was said, what was decided, or by whom. When people rise to block or impede decisions–to exercise the veto out of frustration–that further decays power while doing nothing to change its concentrated ‘bigness’.

———

I think the descriptive usefulness of Naim’s analysis is all around us now. The 2019 American discourse about the “deep state” and desires for various forms of authoritarian or direct-rule escape from its supposed clutches seems entirely consistent with the picture that Naim laid out in 2013. The prevalence of what is now being called “cancel culture” across social media is another manifestation of Naim’s veto, arising from people who feel that in some fashion they are being told that they are included in processes that select or identify cultural and political prominence and authority, if only through access to algorithms that rank and rate, but who feel as if the only real power they have is to reject a selection that has been made without real, transparent and accountable structures of representation and consultation.

I suspect that every working professional across several generations both feels this sense of exclusion and is aware of how they have excluded other people within their own institutional worlds. After twenty-five years of working at my present institution, I can cite innumerable examples of processes in which I have been formally included, cases where my opinion has been solicited, and cases where I’ve taken advantage of what are supposed to be always-open channels for communication to offer feedback in which the difference between my participation and my absence is impossible to discern. Sometimes I’ve seen a point I raised emerge almost entirely verbatim from one of the people involved in the earlier consultation two, five or ten years later with no perceptible connection to that earlier process. Mostly, my participation–sometimes about issues or decisions that I think are highly consequential or urgent–disappears without a trace (often simultaneously with confirmation that what I believed to be urgent was in fact urgent). Committees spend a year (or more) working on a policy that then disappears into trackless invisibility afterwards–where it’s not clear even whether administrative leadership thought the policy impossible or risible, whether they earnestly meant to implement it but then the person who would have had responsibility left, or whether it was simply forgotten.

This isn’t distinctive to me. We all feel this way. Women feel this way even more. People of color feel this way even more. We all have had the experience of sounding an alarm that no one hears. Of providing advice that rests on decades of experience that seems to be ignored. Of trying to push towards an outcome that would satisfy many only to watch dismayed as an outcome that satisfies almost no one is chosen instead.

If we have power or responsibility within an institution, many of us have been on the other end. We’ve been the void that doesn’t answer, the soothing managerial assurance that all opinions are helpful, the person who absorbs and later appropriates a solution or idea that someone else advocated. And thus most of us know well why participation in a process doesn’t scale smoothly into an impact on a process. Think of job searches where you have been on the inside of the final decision but where many people provided feedback on a candidate. Some of that feedback you ignore because the person providing it didn’t see all the candidates or is missing some critical piece of information (that probably wasn’t available). Some of that you consider very carefully and respectfully but end up simply disagreeing with. Some of that you dismiss out of hand because the person consulted is someone who had to be consulted but who is widely regarded as wrong or irresponsible. Some of it you ignore because it’s expressed in a cryptic or confusing way. Some of it you ignore because you’re just really busy and the decision is already robustly confirmed by other information, so why keep discussing it?

None of which you can tell someone about. The people who made the decision can’t say:

a. You didn’t work hard enough for us to value your input equally.
b. We really did consider what you said, but here’s why we disagreed with you, specifically.
c. We asked your feedback because you’d be insulted if we didn’t but we don’t respect your views at all.
d. We had no idea what you meant and we didn’t have time to sort it out.
e. Our cup overfloweth: thank you for the advice but we turned out to have as much as we needed before we even got to you.

You can’t even say the one thing that would be comforting (we considered your advice, and disagreed) because then you have to provide an external, visible transcript of a conversation that it is unethical (or illegal, even) to transcribe and circulate.

——————-

The number of decisions that power considers impossible to transcribe or even describe has grown along with power itself. Here I think we arrive at the heart of the problem with “conversation” as an alternative to “demands”.

Take my previous example of a job search in academia. Most of the people solicited for opinions understand why there is no account of whether or how their opinion mattered, except perhaps students. Why there will be no “conversation” about the decision after it is made, and why the parties to the conversation will be limited and sequestered. But even in this fairly clear case, academic departments could probably do a better job with students. In one hiring process in the last six years, we chose a candidate who was not consistently the #1 preference of the students that we asked to participate. So I met as department chair with them afterwards to talk about how a decision like this gets made, and to give them a carefully limited version of our reasoning. I knew there was a risk involved that one or more students would indiscreetly repeat what I’d said so that it would get back to the candidate, so I didn’t share anything too private. The important thing for me was to talk frankly about how and why hiring decisions unfold as they do, including pointing out that these are decisions where typically ten to twenty candidates are very nearly evaluatively equal–if nothing else because the students who may be considering academia need to understand that about the labor market at the other end.

I also explained the legal constraints on anything connected to personnel decisions and then why most of us also find it unprofessional to talk about a colleague directly with students, most of the time. And we talked a bit more beyond that about why student impressions of faculty are sometimes perceptive and useful and sometimes simply wrong. I pointed out that I once proudly asserted decades ago that a graduate professor I knew was reticent because of the lingering effects of McCarthyism on older academics, which turned out to be the kind of thing that was ever so vaguely right as a generic guess and ever so completely wrong about the actual person, as I learned on longer acquaintance.

This is what I think a “conversation” as an alternative to a “demand” might look like. It may be that many people have conversations of the kind I just described, as ad hoc, one-off, personal and effectively private conversations that do not become a public fact about power and authority within the institution. The public or shared or visible spaces within an institution are not routinely alive with this sort of conversation. It isn’t shared.

You could suggest that my approach in this case was managerial: that I chose to talk with the students in order to manage the possibility of their unhappiness in response to a perceived exclusion from decision-making. I think you’d be right that this is how offers of dialogue or conversation are often perceived by stakeholders who want to change the policies or culture of their institutions.

What is missing from these offers, what makes them not-really-conversations that only fuel the movement towards what Naim calls the veto, are three major attributes:

a) Too much of the subject of the conversation is veiled or off-limits.
b) The powerful do not fully disclose or describe both the constraints on their actions AND their own strong philosophical or ethical commitments.
c) When disclosed, the constraints are not up for debate; there is nothing contingent in the conversation.

In effect, what is missing is what defines a democratic public sphere. Which is an absence that nullifies the offer of a conversation or a dialogue as a part of decision-making or life in community. You can’t have a conversation that’s meaningful, trustworthy and part of a process of deliberation and decision-making in the weird kind of fractured “public” that academic institutions, civic institutions and businesses maintain, where information flows in trickles or pools in hidden grottos, in which most of the participants can’t discuss even a small proportion of what they know or disclose the tangible reality behind most decisions that have been made or are being contemplated.

———-

Title IX/sexual assault conversations in higher education are a major example of this issue, not just at Swarthmore but almost everywhere. In the case of Title IX, I am for the most part neither a petitioner nor the powerful, so I can see to some extent both why so many institutions trend towards Naim’s veto and why it is hard to have the conversations that might approach power differently.

Let’s start with what is off-limits. The specifics of the last decade of actual cases can’t be discussed in any kind of public or even private conversation within institutions. That would usually be illegal (several kinds of illegal), it would usually be an invitation to a lawsuit (several kinds of lawsuit), and it would broadly be considered to be unethical by almost everyone with an interest in the issue. And yet the generalities of those specifics are precisely what is at stake. What can the forms of centralized, hierarchical, ‘big’ power within academic institutions plausibly do about what’s in those specifics? How can anybody talk about that question without granular, particular attention to how it would work in specific cases, at the moment of the incident and its aftermaths?

That’s not all that is off-limits. Mostly the people with power over the disposition of cases or the setting of policy cannot fully disclose or discuss what they’re being told within one set of meetings: what the lawyers say about what can or cannot be said. Within another set of meetings: what trustees say about what they think should or should not be done. Within another set of meetings: what the specific managers of specific cases believe or think about those cases at various stages of investigation or judgment or therapy. Again, mostly because they can’t. In most of these cases, the legal constraints are real and specific. But all of those off-limits deliberations and conversations erupt into the public space, sometimes even as quotations that can’t be attributed or even acknowledged as quotations. So legal advice, even if it might be questionable or flawed, can’t be examined or questioned directly–it often can’t even be labeled as such. Practitioner beliefs about best practices in counselling or therapy can only be described in the vaguest ways, shorn of all the specifics that would make them valid or invalid, helpful or questionable.

The fracturing of this not-public runs all the way down to the bottom of this hoped-for conversation. No one–including student advocates–gets to a point of disclosure about the deeper fundamentals of their views on any of the issues at stake–about sexuality, about justice, about gender, about equity, about safety and freedom, about the rights and responsibilities of institutions and of those who work for and study within institutions. There is no incentive or reward to disclose if there is no real possibility of tracing how a dialogue will or will not inform decisions and policies. Nobody wants to start a conversation in which they will lay their deepest convictions out on the table if they have no sense at all of what will be done with or to those exposed beliefs and narratives after everyone leaves the table. Conversation is an intimate word, but the familiarity that even small colleges allow between students, faculty and administration is not intimate familiarity between equals who have consented to mutual exposure. What administrator would ever want to say clearly what they think and know to students who might turn around and demand that administrator’s termination? What student would ever want to have a genuinely informing, richly descriptive and philosophically open conversation about sexuality, violence and justice with an administrator if the student is the only person obliged to participate in the conversation in that spirit?

The only hope for those kinds of dialogues is the classroom, precisely because the instrumental character of any given discussion is not directly fed back into institutional governance and because classrooms are semi-private and leave little visible trace to anyone who was not a direct participant. When we otherwise offer dialogue as an alternative to demands, we dramatically underimagine what it would take for dialogue to be a meaningful substitute, which is nothing short of redesigning the visibility of decisions and the flow of information in a way that no one is really ready for and perhaps that no one really wants.

Posted in Academia, Swarthmore | 6 Comments

College of Theseus

Most of us know to be skeptical about the public statements of a person paid to defend a particular organization or corporation. For the same reason, we tend to look askance at a pundit or expert who will derive some particular financial benefit if people heed his or her advice–a biochemist testing a drug who owns shares in the company that will produce it, for example. There are often legal and ethical restrictions that apply in such cases.

You can’t so easily constrain a conventionalized narrative that mainstream reportage and experts collaboratively disseminate that just so happens to advance a strongly vested financial interest that is diffused across a particular business sector or range of organizations. Even if that story leaves out vitally important details, or is simply wrong in some crucial respect.

For example, almost every mainstream story I’ve read or heard about the financial struggles of Sears, Toys R Us, and other brick-and-mortar retailers leaves out the role of private equity, debt and cult-like management strategies employed by neophyte CEOs (often installed by private equity firms). The shorthand instead is always: couldn’t compete with Amazon. Which is a story that benefits Amazon and its shareholders: it is how Amazon survived years and years of continuous losses, because reporters and experts kept describing it as the inevitable future, kept using it as the singular causal explanation for every other event in retail.

Another example: autonomous cars. A ton of big players have a huge bet down on the table on autonomous cars, and virtually everyone writing about the issue is compliantly doing their best to make that bet pay off by describing autonomous cars as inevitable no matter what technical, political and economic challenges might remain in their implementation. Just saying something is inevitable doesn’t overcome fundamental material limitations: flying cars, jetpacks and moonbases were also once represented as inevitable in a near-term future, but all three turned out to be basically impossible within present circumstances. But in a sense the actual money knew that: no one but fringe visionaries put serious investment into those projects. With autonomous cars, there’s real money involved, and so every time an expert or a reporter casually and thoughtlessly treats them as a certainty, they are creating the certainty that they only claim to predict. If it turns out that you can’t simply unleash tens of thousands of perfectly working autonomous vehicles onto the current road network, it will be made to happen by changing the infrastructure. The autonomous car makers will buy out HOV lanes and put guides on them and get manually driven cars banned from them, in the name of safety or experimentation or innovation. Then they’ll argue that any accidents on non-guided roadways are actually human error, not autonomous car error, and push for eliminating manual drivers from all high-speed highways. Inch by inch it will happen–and “prediction” will have played its role.

The example that’s really got my goat this week, however, is the way that much of the press and a particular group of experts report on the closure or threatened closure of colleges and universities. Let’s take three examples that have been reported recently: Newbury College, Green Mountain College, and Hampshire College.

The reporting and prognostication tends to lump these closures together as a single phenomenon, stemming from a singular cause, interpreted within a conventionalized story. That’s usually something like, “College is too expensive, families are no longer certain of the value of traditional higher education, and this is just going to accelerate as we hit the edge of a demographic drop-off”. All of this is true enough in terms of pressures on the entire sector: college is expensive, its consumers are feeling doubtful about its value, and there’s a demographic drop-off coming. But it’s also a story that has a client behind it: various “disruptors” who have a huge bet down on the table that various kinds of for-profit online education will and must replace expensive, inefficient, “traditional” brick-and-mortar education. Those folks are getting impatient–or are starting to worry they’re going to lose their money. They’ve been moving fast but so far not that much has been broken. They’ve been angling to do the usual smash-and-grab theft of public goods but so far all they’ve been able to do is sneak a few bits of bric-a-brac into their pockets. So the story that all colleges are near to failing, about a kind of institutional singularity, is especially important for them to tell–and to urge others to tell for them.

The problem with that story is two-fold. First, even if we’re talking about “all of American higher education”, this is not the first time that the entire sector has been faced with severe economic and sociopolitical pressures and not the first time that these pressures have produced new institutional forms and marketing hooks–and waves of consolidation and failure. It’s not even the first time that people enamored of a new mass medium have specifically sought to use it to replace colleges and universities–it happened with television, it happened with radio, it happened with the postal service. And yet for the most part, the variety and richness of physical institutions of higher learning has remained intact in the United States through all those failures and consolidations and transformations. The current storyline forgets all of that. There is an unbroken clumpy mass of “traditional higher education” and then there is the disrupted, innovated future. Only occasionally does an expert or prognosticator go a bit deeper into the history before breaking out the shill for the brave new innovated future–Kevin Carey, for example, does an actually fair and responsible job of recounting how contemporary research universities in the US took on the shape they now have and understands that this doesn’t extend all that far back.

But it’s at the individual level of institutional closures that the conventionalized narrative is just plain misleading or even false. Because many of the places that have announced closures or crises recently have never been stable or successful institutions in the first place, or have always been outliers in certain respects.

Let’s take Newbury to start.

The United States is known, correctly, for a unique variety and quantity of institutions of higher education. This variety was primarily generated in the 19th century, between 1830 and 1890. Every institution created subsequently in the 20th century was to some extent building on this unique earlier history, trying to fit into the infrastructure created in that era, but there were at least two significant waves of later institutional creation: one in the 1920s that capitalized on the new centrality of higher education to the training of professionals and specialists, and one in the 1960s that responded both to a massive new investment in public education and to the demographic bulge known as the “Baby Boomer generation”.

A lot of those 1960s institutions have lived on the edge of failure for their entire existence. They were responding to a temporary surge in demand. They did not have the benefit of a century or more of alumni who would contribute donations, or an endowment built up over decades. They did not have names to conjure with. They were often founded (like many non-profits) by single strong personalities with a narrow vision or obsession that only held while the strong personality was holding on to the steering wheel. Newbury is a great example of this. It wasn’t founded until 1962, as a college of business, by a local Boston entrepreneur. It relocated multiple times, once into a vacated property formerly identified with a different university. It changed its name and focus multiple times. It acquired other educational institutions and merged them with its main operations, again creating some brand confusion. It started branch campuses. It’s only been something like a standardized liberal-arts institution since 1994. In 2015 it chased yet another trend via expensive construction projects, trying to promise students a new commitment to their economic success.

This is not a college going under suddenly and unexpectedly after a century of stately and “traditional” operations. This is not Coca-Cola suddenly going under because now everyone wants kombucha made by a Juicero. This is Cactus Cooler or Mr. Pibb being discontinued.

Let’s take Hampshire College. It’s a cool place. I’ve always admired it; I considered attending it when I was graduating high school. But it’s also not a venerable traditional liberal arts college. It’s an experiment that was started as a response to an exceptionally 60s-era deliberative process shared between Amherst, Smith, Mount Holyoke and UMass Amherst. It’s always had to work hard to find students who responded to its very distinctive curricular design and identity, especially once the era that led to its founding began to lose some of its moral and political influence. You can think about Hampshire’s struggle to survive in relationship to that very particular history. You should think about it that way in preference to just making it a single data point on a generalized grid.

Let’s take Green Mountain College. “The latest to close”, as Inside Higher Education says–again fitting into a trend as a single data point. At least this time it is actually old, right? Founded in 1834, part of that huge first wave of educational genesis. But hang on. It wasn’t Green Mountain College at the start. It was Troy Conference Academy. Originally coed, then it changed its name to Ripley Female Academy and went single-sex. Then it was back to Troy Conference. Then during the Great Depression it was Green Mountain Junior College, a 2-year preparatory school. Only in 1974 did it become Green Mountain College, with a 4-year liberal arts degree, and only in the 1990s did it decide to emphasize environmental studies.

Is that the same institution, with a single continuous history? Or is it a kind of constellation of semi-related institutions, all of which basically ‘closed’ and were replaced by something completely different?

If you set out to create a list of all the colleges and universities by name which have ever existed in the United States–all the alternate names and curricular structures and admissions approaches of institutions which sometimes have existed on the same site but often have moved–you couldn’t help but see that closures are an utterly normal part of the story of American higher education. Moreover, they are often just a phase–a place closes, another institution moves in or buys the name or uses the facilities. Sure, sometimes a college or university or prep school or boarding school gets abandoned for good, becomes a ruin, is forgotten. That happens too. We are not in the middle of a singular rupture, a thing which has never happened before, an unbroken tradition at last subject to disruption and innovation.

This doesn’t mean that we should be happy when a college or university closes. That’s the livelihood of the people who work there, it’s the life of the students who are still there, it’s a broken tie for its alumni (however short or long its life has been), the loss of all the interesting things that were done there in its time. But when you look at the story of any particular closure, they all have some important particulars. The story being told that flatters the disruptors and innovators would have us thinking that there are these venerable, traditional, basically successful institutions going about their business and then suddenly, ZANG, the future lands on them and they can’t survive. At least some of the institutions closing have been hustling or struggling or rebranding for their entire existence.

Posted in Academia, I'm Annoyed | 4 Comments

Never Gonna Give You Up

It’s been a while. Enough to look like this is over.

It remains important to me to think: today, I might blog. And to think: I have a place to do it in.

So why don’t I more often? That is the thing on my mind today.

————

Reason #1: Because I am storing up some of the thinking that went into this blog for other, as yet unseen, purposes. First, for the Aydelotte Foundation, which I presently co-direct. We’re going to go live in the spring with a new website, and I’ve been writing a lot of content for that, much of which would previously have been grist for my blog mill. Second, I’m working on a long-form manuscript that in some ways arises out of fifteen years of blogging, and that’s absorbing some of the energies that would have gone into this space.

Reason #2: Because the way we’ve come to read in our present public sphere is both boring and terrifying. This N+1 essay in their Fall 2018 issue helped me understand a lot of my own distress. There seems to be almost no appetite now among public readers for interesting, stylistic or exploratory writing. Readers swarm over everything now, stripping any writing down into a series of declarative flags that sort everyone into teams, affinities, objectives. There’s no appetite for difficult problems that can’t be solved or worked, or for testimonies that give us a window into a lived world. No pleasure in the prose itself, and thus none in the writing of it.

Reason #3: Because we seem to have arrived at a point where justice means visiting extreme precarity on everyone who says anything rather than making it possible for previously suppressed voices to speak safely. This is a familiar inflection point in struggles for social justice: we despair of a transformation that emancipates and so we settle for a transformation that at least tries to spread the misery. That might even be the right thing to do, for a variety of reasons. Making the powerful fear the consequences of speech that discriminates or hates or creates fear may be all we can do for now. There have been plenty of opportunities for the powerful to instead take a more hopeful and constructive interest in the voices of people who were long excluded from the public, in sentiments that have been unheard, and that opportunity was long unpursued. But the consequence in our current public discourse is that almost everyone is one day away from having someone paint a bullseye on them, deserving or otherwise. There can’t be any pleasure or joy in public writing in our present mood. Moreover, the kind of provocative hooks that I used to really enjoy setting into my blogging feel risky and I don’t have the same taste for risk that I used to. I feel vulnerable and tentative and melancholy even when the visible sociologies of my life and my writing should suggest otherwise.

Reason #4: For all that amateur blogging has faded, there is still a tremendous volume of online writing, and its speed has accelerated. By the time I have thought through my take on something that’s at least a bit timely, it’s been thoroughly masticated and spit out in online conversations. The last thing we need is more roughage to block up the digestive systems.

Reason #5: As I’ve said before, I know too much now and that is producing some of the expected inhibitions–it feels as if almost anything I might say would be taken to be subtweeting even when it’s not.

Reason #6: Everyone I respect who writes online feels smarter and clearer than I feel I am myself. I feel less confident in what I think I know and more conscious of the vastness of the things I do not know. That I know that this is a common feeling does not particularly relieve me of having it.

Reason #7: It’s hard to feel like there’s a point to public writing at the heart of Trump’s ascendancy. Certainly there’s no point to even trying to speak to self-identified conservatives who have aligned themselves with Trump: the will-to-power mendacity and moral vacuity melt anything like honest engagement like a butterfly tossed in a furnace. But it is not merely Trump and his followers. When is the last time you can recall seeing anyone who was meaningfully persuaded by arguments or evidence that contradicted or challenged a belief or position they had previously articulated? When I see people telling me that the only way to deal with people who hold dangerous, untrue or morally bankrupt views is to engage them in a persistently reasonable way, to have a dialogue, I can’t help but think that this is just another untrue idea. Or at least it is a kind of religious dogma by self-anointed rationalist thinkers. It is not an evidence-based proposition about how people shift their values or come to hold new thoughts or ideas. It doesn’t matter whether we are talking about people with whom one has personal or familial standing or total strangers, whether this is about a neighborhood or a nation. What passes for reason and evidence among educated readers and writers often feels as if it is just a value system local to them, and no more likely even so to lead to thoughtful changes in perspectives or beliefs among them. I feel no more likely to persuade a person who is in every respect a peer to change a view they have committed to, no matter how strong my arguments or evidence might be, than I am to persuade a stranger with completely different values and social location. And yet, I feel I am persuadable: that I change what I think about specific issues and arguments quite frequently in response to what others say and argue. Perhaps I am wrong even about myself; perhaps this is an unearned vanity. If I am right, then it feels as if I have chosen the worst strategy in Prisoner’s Dilemma: vulnerable to persuasion in a world that increasingly sees persuadability as a vulnerability to be exploited.

And yet, I remain hopeful about blogging. I am not sure why. I am not sure when. This remains open for business, nevertheless.

Posted in Blogging | 10 Comments

Save the Children

Jonathan Haidt is consistently unimpressive.

Responding in this Chronicle piece to Jeffrey Adam Sachs’ great essay for the Niskanen Center, Haidt concedes that the speech-related episodes that he and his pals get so agitated about are confined to a relative handful of highly selective institutions. The evidence for a significant shift in attitudes among all college-attending students is thin and contested.

But Haidt says that since students at elite institutions are going to be the leaders of tomorrow, we should be disproportionately worried about how they think.

This is a classic kind of fallacious reasoning in populist social science that seeks to stoke up some form of middlebrow moral panic. I first became familiar with it while researching claims by social scientists during the 1970s about the effects of “violent” cartoons on children.

The argument runs like this: children or young people are being moved away from adults on some kind of important social norm by a lack of institutional vigilance, and it’s up to the adults to control what children and young people see, say or do so that social norms will be protected. There’s an odd kind of philosophical incoherence somewhere in there–a kind of softly illiberal vision of parenting and education invoked in many cases to defend adult liberalism as the social norm worth preserving–but leave that for the moment.

What’s more important in terms of social science is that this is a *prediction*: that if the external stimulus or bad practice is permitted, tomorrow’s adults will have a propensity to behave very differently in relationship to the norm being invoked. The anti-children’s television crusaders said: tomorrow’s kids will be more violent. Haidt is saying: tomorrow’s kids will have less respect for free speech.

There’s a sleight of hand going on here always. Because usually this is being said against a *contemporary* crisis about the issue at hand. The television crusaders were responding to the violence of 1968-75: the Vietnam War, protests on campus, rising rates of violent crime. But the people involved in those forms of violence *didn’t watch cartoons on Saturday morning*. They were the previous generation. The people who are most threatening to free speech in the United States today are not 20-year-old Middlebury students: they’re the President of the United States and his administration, the Congress, the people in charge. People who grew up under the norms that Haidt and Brooks etc. are trying to defend.

So it turns out that past dispensations that were allegedly friendly to the norms being defended actually produced the most serious threat to them.

And of course, it usually turns out that the prediction is wrong as well. Violence has been steadily more and more represented in mass media for children and adults since 1965; rates of violent crime have gone steadily down since the mid-1970s. You can always claim in a particular case that there’s a particular link–a mass shooter who turns out to have played Call of Duty or whatever–but that’s not how a general social scientistic prediction about a variable and a population works. If watching cartoons where bad guys got punched in the face made you more likely to be violent, that’s a prediction that there would be more interpersonal violence overall in the future. It didn’t happen. That’s not how it works. The same thing here: if free speech norms are enduring and important, I guarantee you that a bunch of kids at Middlebury standing up and turning their backs on Charles Murray does not represent a future trend that will affect a generation. Frankly, anything Middlebury or Swarthmore students do will have negligible collective impact–they are not a good marker of generational typicality.

It might even be that actually testing out the propositions embedded in a belief in free speech rather than dully worshipping them as received orthodoxy produces a more meaningful lifelong relationship to them. It is certainly the case that Haidt and others are producing a nostalgic myth about where a commitment to free speech comes from.

Posted in Academia, Politics, Swarthmore | 1 Comment

The Kid With the Hammer

A certain kind of application of social science and social science methods continues to be a really basic limit to our shared ability in modern societies to grapple with and potentially resolve serious problems. For more than a century, a certain conception of policy, government and the public sphere has been determined to banish the need for interpretation, for difficult arguments about values, for attention to questions of meaning, in understanding and addressing anything imagined as a “social problem”. This banishment is performed in order to move a social scientistic mode of thinking into place, to use methods and tools that allow singular causes or variables to be given weight in relation to a named social problem and then to be solved in order of their causal magnitude.

Certainly sometimes that analysis is multivariable. It may even occasionally draw upon systems thinking and resist isolating individual variables as something to resolve individually. But what is always left outside the circle are questions of meaning that require interpretation, that require philosophical or value-driven understanding, that can’t be weighted or measured with precision. Which is why in some sense technocratic governance, whether in liberal societies or more authoritarian ones, feels so emotionally hollow, so unpersuasive to many people, so clumsy. It knocks down the variables as they are identified, often causing new problems that were not predicted or anticipated. But it doesn’t understand in any deeper way what it is trying to grapple with.

I’ve suggested in the past that this is an unappreciated aspect of military suicides since 2001, that the actual content of American wars, the specific experiences of American soldiers, might be different than other wars, other experiences, and that difference in meaning, feeling, values might be a sufficient (and certainly necessary) explanation of suicide. But that conversation never floats up to the level of official engagement with the problem, and not merely because to engage it requires an official acknowledgement of moral problems, problems in meaning and values, with the unending wars that began in 2001. It’s because even if military and political leaders might have a willingness to consider it, they don’t have the tools. It’s not in the PowerPoints, in the graphs, in the charts. It’s in the hearts, the feelings, the things spoken and unspoken in the barracks and the bedrooms. It’s in the gap between the sermons and the town meetings on one hand and the memories of things done and said on the battlefield on the other. No one has to say anything for that gap to yawn wide for a veteran or veteran’s family–it is there nevertheless.

Here’s another example: a report on “teen mental health deteriorating”. It’s a classic bit of social scientistic reason. Show the evidence that there is something happening. That’s fine! It’s useful and true. You cannot use interpretation or philosophy to determine that truth. But then, sort the explanations, weigh the variables, identify the most significant culprit. It’s the smartphones! It’s social media!

Even this is plausible enough and not without its uses. But the smartphone here is treated as causal in and of itself, with some hand-waving at social psychology and cognitive science. Something about screen time and sociality, about what we’re evolved to do and about what we do when our evolution drives us towards too much of something. What’s left out is the hermeneutics of social media, the meaning of what we say on it and in it. Because that’s too hard to understand, to package and graph, to proscribe and make policy about.

And yet, I think that’s a big part of what’s going on. It is not that we can say things to each other, so many others, so easily and so constantly. It is the content and meaning of what we say. Compare the structures of feeling that follow from reading a stranger with no standing in your own life pronouncing authoritatively in the genre of a social-justice-oriented “explainer” that you are commanded to do something, feel something, with those that follow from a person with great standing in your own life providing delicately threaded advice about a recent experience that you’ve had. Those are hugely divergent emotional and social experiences, and they produce different loops and architectures of sentiment. Reading people who hate you, threaten you, express a false intimacy with you, who decide to amplify or redirect something you’ve said? Those experiences have an impact on a reader (and on the capacity to speak) that rests on how their content (and authors) have meaning to the reader, often in minutely divergent and rapidly shifting ways.

We blunder not in our diagnosis of the problem (teen mental health is more fragile) or even in roughly understanding an important cause. We blunder in our proposed solution: take away the smartphones! (Or restrict their use.) Because that shows how little we understand of what exactly is making people feel that their online sociality is a source of vulnerability and fragility and yet precious and important all the same. It’s not the device, it’s the content. Or, in a more well-known formulation, not the medium but the message. To understand and engage that requires semantic attention, literary interpretation, history and ethnography. And perhaps change–but that also takes a different set of instruments for coordinating shared or collective action than the conventional apparatus of government and policy.

Posted in Academia, Oh Not Again He's Going to Tell Us It's a Complex System, Politics | 3 Comments

A Place at the Table, or the Whole Damn Dining Room?

What kind of problem is it if a substantial minority of a community’s citizens are deeply and persistently opposed to a policy that the majority support? It is, among other things, a political problem. I found myself in an argument on Twitter with Damon Linker in which he cast himself as defending that proposition against critics like myself or Daniel Drezner, who, he suggested, are content to ignore this kind of political problem.

Quite the contrary: worrying about this exact problem is a persistent theme for me at this blog. Which is why I don’t think Damon Linker or Ross Douthat or Rod Dreher are in fact being honest in their professed concern with this problem. I think they’re using that concern as a form of opinion-laundering, as a vicarious way to advocate positions that they’d rather not attribute to themselves.

Why should anyone worry when a democratically-constituted body of any size finds that there is a substantial minority opinion that is persistently excluded from decision-making or policy formation? At what point is that a concern?

It is not, for example, an immediate concern when two or more factions disagree with one another on the cusp of an important decision and, ultimately, one faction loses out in a vote. It is not a concern because the concerns of the losing faction may disappear or erode over time if the majority’s preferences are enacted and produce good outcomes. In many all-male colleges in the United States that shifted to co-ed admissions between 1955 and 1975, there was considerable opposition from some faculty and alumni, almost all of which evaporated rapidly after their various predictions of negative consequences turned out to be absurdly untrue or out of touch with the wider society.

It is only a concern when that strong disagreement turns out to be persistent and when it conditions the relationships between different factions across the totality of their political participation and social interactions. People persistently disagree about whether cilantro is delicious, but unless you’re a maniac who puts it on everything you serve to other people or a person who throws a cilantro-covered taco back at your host’s face, it’s a divide that has little meaning.

When there is a strong, persistent and meaningful division of this kind, what does the majority owe to the minority faction? And what is the minority faction entitled to do about it?

This is the juncture where I think there’s bad faith—or at least wild inconsistency—involved in a certain kind of performative swoon about the alienation of white voters who want dramatic restrictions on immigration. Bad faith of several different kinds, in fact. First, because the question of why one should be concerned has both a practical component and an ethical component, each of which needs some degree of consistent attention. Second, because the solution to this concern is by no means, “Give the minority what they want or else”.

Why is this a practical problem? Basically, because we assume that convictions held by a substantial minority that are wholly unrepresented in the policies or decisions of a body politic eventually lead to that minority leaving if they can or an uprising if they can’t.

There are a few cases where schism is a fine if often upsetting outcome, say, in a church or non-profit organization where both groups will be happier under their own banner. There are other cases where no one has ever found a way to manage schism cleanly: nations and states don’t fission easily. If the minority can’t leave, then an uprising or civil war is bad for everyone.

But note that in both cases, the commitment of the minority group to their convictions has to be sufficient that they simply cannot abide life under the policies of the majority, and that they are potentially happy to get their way in their own organization, community or country. They can’t have their cake and eat it too—they can’t insist that not only do they have to have their own way, they have to have it over the majority. Because at that point, the practical problem doesn’t abate. It gets worse, in fact: there is nothing more explosive in practical terms than a minority faction that controls the policies that a majority strongly oppose. Oddly, this doesn’t seem to perturb Damon Linker or Ross Douthat or any of the other people wringing their hands in public right now about immigration policy. If they’re worried about what a minority frustrated by not getting their way might do, they ought to worry doubly about what a majority that doesn’t have their views proportionately represented in policy might do. In practical terms, that’s much more threatening and dangerous.

In ethical terms, what does a majority owe to a minority? Consideration and engagement, at least. Where it is possible to devolve or schism authority to allow a minority faction to do as it wills in some limited or bounded space of authority, that might be an ethical as well as practical gesture. There are also structures for deliberation in democratic communities that do a better job at checking or modifying majoritarian authority than simple decision rules that give 50.01 percent unlimited authority to determine all outcomes. The United States has federalism and it also has a government where authority is divided on purpose between different branches as a gesture in that direction, and that’s by no means the only way to erode majoritarian power. The reason this is an ethical obligation as well as a practical one is pretty easy to come by. If you’ve ever been outvoted persistently in a group to which you belong, you recognize that your membership in that group very quickly stops feeling like a fair, equal and human relationship. At some point that stops feeling like democracy and starts feeling more like domination. It matters little if you get to cast a vote if there is never any chance whatsoever of the majority respecting your views. I actually agree that we’re not making a good and patient case for pluralism to many people around the world now. There is some obligation to make this conversation a better conversation, and to not simply shout people down: that’s another thing I’ve been saying for more than a decade through this blog.

Just as in practice, though, this is a two-edged sword. A minority view that fails to understand itself as a minority view, that thinks of itself as a majority view that has fallen on temporary hard times, is prone to demand consideration beyond what it has any right to. If I show up as an atheist in a church congregation in my small village, and I ask people to consider me as a human being who has arrived at my own spiritual views with great care, I might be entitled to that consideration. I might even ask for an opportunity to address the group once in a while. But if I demand the pulpit for five minutes every Sunday because otherwise I’m not represented in any of the proceedings, I’m asking for something I have no right to have. I’m not even entitled to some fixed share of the decisions that are made in a democracy, because that undercuts the whole idea of the body politic deliberating together. If we make decisions according to a pie chart in which everyone gets a designated percentage of the decision, we’re not one organization or country, we’re a loose association of separate organizations or countries with no right to make demands of one another in the first place. Whether I’m in the majority or minority, I have to be prepared to not have my will enacted sometimes if I’m even remotely serious about democratic decision-making.

This is where I really think Linker and Douthat and others show how little they actually believe in the line they’re slinging about immigration and the views of a faction of white voters. Because the answer to the problem of a persistent minority view is not to always make sure that some aspect of that view is encoded into the end decisions, to ensure that all decisions have something for everyone built into them. The first duty is to ask: who are we dealing with here, and why is it that they’re outvoted? It’s to investigate, and witness. So, let’s say, a population who’ve been systematically excluded from power for profoundly illiberal reasons, because of their ethnicity or race or religion or gender, not because of the content of their views? That requires taking what they say seriously in new ways. A minority who’ve been excluded because they were once the shapers of policy and the majority decided they shouldn’t be? That’s a different consideration. If you’ve actually made policy and you failed or were rejected, then you’re not entitled to the same consideration. If you still make policy, just somewhat less of it, rather than being excluded completely, you’re not entitled to the same consideration. If you’re excluded because what you advocate is the destruction of the existing order in its entirety, you’re not entitled to the same consideration.

Moreover, what on earth do Linker and Douthat and similar writers think is “exclusion”? Even before Donald Trump took office, it was not the case that people who wanted limits on immigration were excluded from the making of immigration policy. The Obama Administration was in fact more aggressive than its predecessors at deporting illegal immigrants. Border controls have been enforced fairly stringently for the last twenty years, and they weren’t exactly porous before that. If you push through, it turns out that what Linker and Douthat really mean is that people who want tight limits on immigration in order to maintain racial and ethnic purity feel as if they’re not welcome to say so in mainstream public discourse. Meaning, it’s not the lack of actual controls on immigration that’s at issue here, it’s the idea that there should be a “place at the table” for the underlying racial and ethnic rationale behind particular limits on particular kinds of immigrants—and that anyone who disagrees should be obligated to be polite in their disagreement.

In a democracy, not every excluded constituency with an opinion has equal status. It’s not a damn equation, it’s a history. Former slaveowners in 1875 still had opinions about slavery that were unreconciled to the new birth of freedom envisioned by Abraham Lincoln in the Gettysburg Address. They were owed nothing, and it is the everlasting shame of the United States that they were given so much as Reconstruction crumbled and failed. It doesn’t matter what their percentages were: the point was that the Republic was, or should have been, on that point committed to a new understanding of its foundations.

This is where the special pleading that Linker and Douthat and Dreher and others are indulging is laid bare. Because they are not equally concerned for every 25% of opinion that is left unfulfilled by majority opinion. They’re not concerned for the many desires of the American majority, let alone various minority factions, that have gone thwarted for forty years and have “no place at the table” in the making of national policy: for campaign finance reform, for gun control, for reproductive rights, for generous funding of public education. They’re not as concerned for making sure that Black Lives Matter or the Green Party has a place at the table and a share of policy. This is not a generalized practical or ethical position that they are taking, in which every thwarted minority faction that has strong, persistent views needs to be incorporated generously into the making and discussing of national policy. It’s only one group that counts: aggrieved white conservatives who want to control the future demography of the United States so that it remains majority white.

Linker has been beating this drum for a few years: that it ought to be possible and legitimate to have a “particularist” preference—to want to live in homogeneous communities. He likes to attribute this view to those other people whom he just wants to have a place at the table rather than advocate that particularism himself. But he doesn’t really mean all kinds of particularism, just this one particular particularism. What goes uninvestigated is whether that is in fact what whites who want strong restrictions on legal and illegal immigration are seeking. Because it’s actually fairly easy within the present United States to move into racially homogeneous communities if that’s all you’re after. Pack your bags and head to Idaho or Oregon or Vermont; they’re very white. Why is that not good enough? Because what we’re talking about aren’t genuinely particularist aspirations for cultural homogeneity. They’re not genuine separatism. They don’t want to build something that expresses some distinctive or special culture and requires discipline to do so. That’s what the Amish do. That choice is already available to any group of people in the United States who feel strongly enough about the maintenance of a distinctive way of life. There is already a “place at the table” for that kind of particularism. What the people that Linker and Douthat are pleading for want is heterogeneity, but one where they hold a structurally-guaranteed upper hand. They don’t want an end to Latinos cleaning the toilets and washing the dishes in the towns and places where they live. What Linker’s objects of sympathy want is the ad hoc power to exclude, expel, and control people that they arbitrarily decide are a threat to their own status. To have a few Mexicans or Laotians or blacks, but not too many. To have people who are racially or culturally different around as long as they keep it quiet and out of sight, or as long as that difference is something that the whites like: a restaurant, say. There isn’t a philosophically coherent or consistent argument about a desired way of life behind all of this that can be given a place at the table, beyond the desire to maintain a form of power over racially defined others, to seek a permanent guarantee of their second-class citizenship.

Which once again casts this all in a different light. Perhaps one thing a democracy shouldn’t make a place at the table for is a desire for something other than democracy. Perhaps one thing a free society shouldn’t make a place for at the table is a desire to impose unequal restrictions on the freedom of some subset of its citizens.

Posted in I'm Annoyed, Politics | 10 Comments

A New Year

This is not the first time I’ve gone quiet on this blog simply because I was busy. Fall 2017 was in many ways the busiest semester I’ve ever had at Swarthmore: I taught two courses, I chaired my department, I became the co-director of the Aydelotte Foundation, and I sold my house and moved.

But I have gone quiet for other reasons as well. I am struggling to understand what good writing in public can do, even at a moment when I’m prepared to encourage others to do it.

When I began blogging in a pre-WordPress era, I was already a long-time participant in online conversation, all the way back to pre-Usenet BBSs, including the pay service GEnie. So I think I held no illusions about what were already problems of long-standing in online culture: trolling, harassment, mobbing, deception, anonymity, and so on.

Nevertheless, I started a blog for two major reasons. First, to have an outlet for my own thinking, as a kind of public diary that would let me express my thinking about professional life, politics, popular culture and other issues as I saw fit, and perhaps in so doing keep myself from talking too much among friends and colleagues. I don’t think I’ve succeeded in that, because I still overwhelm conversations around me if I’m not thoughtful about restraining myself.

The second was to see if I could participate usefully in what I hoped would grow into a new and more democratic public sphere, one that escaped the exclusivity of postwar American public discussion. I think I did a good job at evolving an ethic for myself and then inhabiting it consistently. That had a cost to the quality of my prose, because being more respectful, cautious and responsible in my blogging usually meant being duller and longer in the style of my writing.

In the end, I feel as if both goals have ended up being somewhat pointless. It’s not clear to me any longer what good I can contribute as a public diarist. Much of what I think gets thought and expressed by someone else at a quicker pace, on a faster social media platform. More importantly, the value of my observations, whatever that might be, was secured through combining frankness and introspection, through raising rather than brutally disposing of open questions. This more than anything now seems quaintly out of place in social media. I feel as if it takes extreme curation to find pockets of social media commentary given over to skepticism and exploration, to collectively playful or passionate engagement with uncertainty and ambiguity.

More complicatedly, the more I am tied to my institutional histories and imagined as being a “responsible agent” within them, the harder it gets to talk frankly about what I see. It was comforting to think that almost no one read my blog and almost no one cared about it, in some sense. Now I’m only too aware that if I speak, even if I’m careful to abstract and synthesize what I’m observing, I can’t help but seem as if I am testifying about the much larger archive of real experiences and painful confidences I have been entrusted with. If I abstract too much, I find that friends and colleagues politely gaslight me: I can’t have seen what I think I’ve seen. But I can’t be more direct, and I don’t want to be. Trying to observe real stories and real problems with some degree of honesty can curdle into the settling of scores, and can tempt people–older white men especially–into a narrative of institutional life in which they are always the heroes of the story. Some stories and experiences explored honestly end up with everyone muddling through with good intent; others end up implicating everyone in certain kinds of bad faith or short-sightedness, including the people doing the exploring.

This brings me to the second goal: to be part of a new and more democratic public sphere. I have been for thirty years a person enthusiastic about the possibilities and often the realities of online culture. I am losing that enthusiasm rapidly. It’s not just that all the old problems are now vastly greater in scope and more ominous by far in the threat they can pose to participants in digital culture, but that there are new problems too. The threat to women, to people of color, to GLBTQ people, is bigger by far, but even as someone who has all sorts of protections, I find myself unnerved by online discussion, by its volatility and speed, by the ways that groups settle on intense and combative interpretations and then amplify both. I remember only dimly that for a long time I saw myself as trying to create bridges in conversations to online conservatives. With a blessed few exceptions, those conversations mostly felt like agreeing to trust Lucy to hold the football steady one more time, like being the mark in a long confidence game whose goal was to move the Overton window. What did I think I was doing talking to David Horowitz, for example? Or writing critiques of ACTA reports as if anyone writing them cared remotely about evidence or accuracy? And yet I’m not feeling that much more comfortable about online conversation with people with whom I ostensibly agree or among whom I have allegedly built up long reservoirs of trust. That sense of trust and social groundedness felt very real as recently as five years ago, but now it feels as if the infrastructures of online life could pull any foundation into wreckage in an instant without any individual human beings meaning or wanting to have that happen.

I almost thought to critically engage a recent wave of online attacks on a course being taught by my colleague here at Swarthmore. I even tried one engagement with a real person on Twitter and for a brief moment, I thought at least the points I was making were being read and understood. But the iron curtain of a new kind of cultural formation snapped down hard within three tweets, and it was difficult for me to even grasp who I had been talking to: a provocateur? an eccentric? a true believer? The rest of the social media traffic about the issue was rank with the stink of bots and 8chan-style troublemaking. Even when it was real people talking, even if I might be able to have a meaningful conversation with them in person if I happened to be in their physical presence, nothing good could come of online engagement, and many bad things could instead happen.

So I need to think anew: what is this space for? What’s left to say? Public debate, per se, is dead. Being a diarist might not be, but I will need to find ways to undam the river of my own voice.

Posted in Academia, Blogging, Politics, Swarthmore | 12 Comments