Easily Distracted: Culture, Politics, Academia and Other Shiny Objects
https://blogs.swarthmore.edu/burke

Values Before Risk Assessment
https://blogs.swarthmore.edu/burke/blog/2020/08/24/values-before-risk-assessment/
Mon, 24 Aug 2020

Why is it a problem to place consideration of risk at the forefront of collective or institutional decision-making processes?

Imagine that you had an array of specialized individual consultants that you could involve, one at a time, in your personal choices. What would go wrong if you always chose to have a specialist in risk management be the first or dominant consultant you used in making decisions?

Suppose you watch a documentary on climbing Mount Everest. It’s an upbeat travelogue, so it doesn’t dwell on the deaths, the frozen mountain of shit, the crowding. You say to your consultant, “I find that really interesting. I’m really drawn to it. What do you think?”

If your first consultant is a risk manager, they’re going to tell you about the high death rate from climbing Everest and about the immense expense of climbing Everest, which may threaten your financial solvency. You aren’t ever going to get a chance to think about what interested you.

Did your heart thrill at the thought of standing on the highest mountain in the world? Or did you just want to go to the top of a mountain, any mountain? Did you want a motivating goal to drive a fitness program in roughly the same way people buy lottery tickets just to authorize dreaming about being rich? Did you want to just see what it looks like down in basecamp at Everest? Or are you interested in Nepal and Sherpa culture?

You will not get the chance to ask what you valued in the thought of Everest in that initial inchoate moment of feeling. If you just want to get way high up in some scenic mountains and you don’t care how, the risk manager will have a useful answer for you after you have come to that conclusion. Go to Zermatt, get on the cable car to the Klein Matterhorn, and enjoy. If you want a safe hike, hike up to the Gornergrat and then go back on the railway. If you just need an imaginary goal for fitness, you don’t need the risk manager to step in and explain that trying to train too fast for climbing Everest will risk injuries–you need someone to design a fitness program that steps you up methodically.

If, on the other hand, what really grabbed you was the idea of standing at the top of the highest mountain in the world, you still need to think without risk first. Why do you value that? What’s valuable about it? What awoke in you at that thought? Only after you’ve thought that through should you ask, “Is it worth it?” Because then you’re asking: might I die? Be seriously injured? Spend my entire life savings? Get stuck in a queue for three years running? Be dismayed because the vision in my head is nothing like the current reality of climbing Everest? The risk manager’s job is in theory just to lay out relative risks to you and leave you to think on it, but in practice most risk management is about risk reduction, never really about risk amplification–and yet, in some value-driven decisions, the risk is either necessary or even desired. A man set on flying in a wingsuit through a narrow rock arch is looking for something risky and difficult to accomplish.

So if you listen to that litany of risks but you haven’t gotten clear in your own head what was driving the thought of “stand at the top of the highest mountain”, you may get talked down into something like “Go to the top of Mount Washington in New Hampshire” when actually what you really should get talked into is “train for a peak in the Andes that’s less crowded but equally dramatic and challenging”, because you don’t mind the risk of death and you don’t mind the expense, you just mind the idea of standing in line with a bunch of other rich people looking at a mountain of frozen shit and some corpses that nobody wants to move.

Evaluating risk and liability should happen when we begin to act on decisions, not when we first envision them. Risk evaluation should not propose, it should only dispose.

——————-

I understand why “neoliberal” seems like a word that is so expansively pejorative that it may seem to have no specific descriptive value. But we need a word for a common form of organizational design and a common culture of shared decision-making that has dominated the 21st Century so far. “Managerial” and “technocratic” grasp parts of that form but have their own shortcomings.

What are the specifics of the form that I think is best named by the word neoliberal? I’m going to work up a definition here without a lot of reference to many of the excellent academic works that try to do the same, partly to clarify for myself what I’m referring to with the term and to open up other possible words or labels.

First, a strange intertwining: utopian self-description layered over both internal deliberative processes and external communications, set against a private, confidential and protected work process that acknowledges problems, shortcomings, and the social realities within the organization and in the organization’s situatedness in the world–and that is also, curiously enough, the space where the organization’s actual goals and mission can be discussed, sometimes in fairly idealistic ways. This may sound as if it only applies to higher education, but I think this is a reasonable description of law firms, hospitals, corporations, think-tanks, non-governmental organizations and so on. The gap between how an organization presents itself to the world and how its internal cultures of work function is not particular to our neoliberal moment, but that gap is now especially intense and disorienting. It has become all but impossible to speak forthrightly in ways that are visible, public and transcribed about the gaps between what an organization performs as its values and the ways in which values are inhabited and invoked in its actual practices.

Neoliberal organizations sometimes appropriate metaphorical framings of their internal processes and nature that do not at all match how they actually work or why they exist. Especially favored is talk of “community”–the “Walmart community”, the “World Bank community”, the “United Way community”–and to talk of “mission”. We need not understand these appropriations entirely cynically–their adoption and use is not often a coherent, calculated, top-down strategy with clear instrumental intent. But generally they neither provide clarity nor opportunity for reflection. Mission and community are invoked instead as a deferment of and disguise for hierarchy. Sometimes cynically, sometimes mournfully: many organizations that have fallen in line with neoliberal sensibilities and practices do so with a sense of regretful surrender to a way of being that is everywhere and nowhere at once.

Second, organizations subject themselves and all who work on their behalf to the agonies of incremental progress towards goals that are chosen because they can be measured concretely and analyzed quantitatively. The reasoning here is that values are often unquantified, complicated, arguable, so they cannot be used as a way to judge institutional or individual performance. Increasingly, even, values are replaced under neoliberalism by missions and goals (and stating those in measurable terms is increasingly favored). Why is the measurement of performance important? Because institutions compete with one another and must prove their worth in commodified terms to clients and customers. The better the performance on goals, the more valuable (not value-driven) the institution is. And employees are seen as competing with one another and with the larger labor markets from which they were hired. To justify remaining on the payroll, they must every year deliver incrementally more value in the accomplishment of the mission. These missions are never rendered as startlingly new or fundamentally recommissioned, so progress must always be now and forevermore incremental, because to have a year in which progress happens with sudden speed amounts to a confession of persistent past failure–and sets up an impossible futureward expectation. The point here is that neoliberalism in this sense despises the idea of the individual or collective maintenance of values, because that is something that might simply happen year in and year out, in stewardship or duty. The lighthouse is maintained so the boats do not crash in the storm: it doesn’t have to prove that it has been 0.5% more effective in navigational efficiency compared to most lighthouses and hence should be preferred as the lighthouse of choice. Neoliberalism abhors the language of values except as a way to manipulate people who still believe in vocation or mission into providing 0.05% more quantifiable output in the coming year–or accepting 1% less support for doing so.

Third, neoliberalism assumes and even often mandates the dissolution of public goods and accordingly also forces individual organizations to regard forms of large-scale collaboration on behalf of public goods as both improvident and illicit. Governing authorities within neoliberal institutions, whether boards, owners, executives or, in rare cases, larger collectives, understand their due diligence as applying nearly exclusively to a single specific institution and often insist upon or reinforce its sovereign distinction from other institutions. Institutions can join associations, but they do so much as nations might join international organizations, as permanently separate, autonomous and voluntary participants in associational bodies. Much of this is explained in terms of compliance with antitrust statutes or other laws, and indeed, under neoliberalism, this is the one form of relation that institutions acknowledge to public goods or the wider society: a need to comply both with governmental regulation (in letter, at least) and often even with quasi-legal codes or regulatory obligations that are envisioned as necessarily and undebatably authoritative. In short, neoliberalism insists on the autonomy of organizations except in terms of domination by other organizations or in terms of contractual obligations (though even those are frequently subject to complicated evasion and abrogation). Competition, yes; compliance, yes. Collaboration? Reluctantly, if it is perceived to be allowed, and never at the risk of asserting genuine collective interest in a way that creates bonds of obligation, reciprocity and desire. Older institutional infrastructures that do so are treated as undigested and troublesome fragments.

Fourth, neoliberalism thinks about resources in two primary ways: as something to be ceaselessly accumulated and as something to be regarded, seemingly paradoxically, as forever scarce. No neoliberal institution, whether company or NGO or university or local non-profit ever sits comfortably on available resources, even asset-based wealth. The organization must always have more, and the organization must always imagine itself as never having enough. That is so pervasive a disposition that it spreads readily to everyone who works for any given organization, all the way down to entry-level employees. No one imagines being custodians of a secure resource, spending it wisely as, in the older meaning of the term, trustees. Everyone is looking for more, and everyone is eager to prove that they both need more and have done their part to get more. Thus do companies sitting on unspeakably large cash reserves and non-profits with endowments in the billions convince themselves that they suffer from scarcity and its numerous psychological and cultural afflictions. But at the same time, organizations are keenly aware that they have vast assets both tangible (property, capital equipment, investments) and potential (unused or underused intellectual property, underutilized space or services, etc.) and they work with great intensity to protect both what they own and what they might own someday. Neoliberalism both seeks rents and works to protect its existing rents; a neoliberal society is primarily an asset-based one. And asset-based societies favor the first in and punish the last in–they are in some grand sense Ponzi schemes. People chase IPOs with frankly idiotic companies like Juicero because they know that there is no other way to climb the ladder past the first few rungs: the existing base of accumulated wealth inside older neoliberal organizations is so vast that no new entrant can compete without the equivalent of an accumulative miracle. (Or, as in “disruption”, without essentially destroying some class of asset holders and grabbing like children at the pinata candies that spill out–but just as at most pinata parties, the greedy and the bullies get most of the candy.)

———–

I take this detour to explain why risk management, usually in the form of legal counsel, stands in front of conversation and deliberation within institutions about values, even in institutions that are not ostensibly devoted to profit or growth. That layer of institutional decision making exists to protect the assets that allow institutions to grow and compete in a world of rent-seekers–and the mission of protecting those assets dictates most of the rest of what I have described as neoliberal organizational culture. The institution makes utopian promises that it knows are impossible, but it cannot acknowledge the gap or apologize for it, because such statements invite a lawsuit from a stakeholder who experienced that gap. The institution seeks endless incremental improvement because dramatic reconsiderations of its purpose are far too risky–leave that for start-ups!–but simply maintaining a steady hand at an ongoing mission is also too risky in an environment that requires everyone to perform competition against others. Making decisions based upon deep underlying values–or deciding what those values really are–is too risky: what if the values lead you to some commitment that you can’t control or sublimate as necessary?

This all sounds terribly abstract, and I mean it to sound that way, because I honestly think this is a deep habitus that runs across many kinds of organizational cultures that we are all influenced by and often are unaware of, that we take to be common sense or pragmatically necessary. However, moments of crisis have a tendency to surface some of what is ordinarily buried inside everyday life. So I will turn to a less abstract example: how U.S. higher education has made decisions in the face of the covid-19 pandemic.

A necessary prelude to this analysis is that American higher education, like American businesses and civic organizations, has had to make decisions on its own in part because of the deliberately engineered failure of national leadership and the resulting divergent range of state and municipal leadership across the country. In societies with more coherent national leadership, institutional leaders have had to worry about a much more constrained range of decisions that are being left to them. In the US, higher education has been left to fill a howling void within a very constrained time frame. Under that kind of pressure, no group of leaders, no community of professionals, could be expected to get everything right, no matter how they went about making decisions.

However, if you look at higher education in that crucible, I think you can see everywhere the signs of risk management and legal counsel being involved in the proposition, not disposition, of decisions.

Institutions like many community colleges that already make extensive use of online education and that do not have to deal with students coming from across the country or from outside the United States to be in residence on campus have had a more natural pathway to the fall semester. They still have the same concerns about making classrooms safe for any in-person use if they decide to do that, the same problems with courses that aim to teach students physical skills for operating machines or using particular tools, and the same basic issues with operating in a disastrous economic environment.

On the other hand, institutions that put on-campus residential life at the heart of their operations almost universally made an abrupt decision in March of 2020 to send students home from dormitories and to close on-campus facilities to use by both students and employees. Since that time, most residential institutions, public and private, small and large, have been wrestling with the question of what to do with summer programs and with the fall 2020 term.

Resolving that question has depended from the very beginning on having a forecast or model of the pandemic’s likely course over the remainder of 2020. So the first and very sensible decision that almost every institution made was to wait until May or June to commit to any course of action and in the meantime consider the possible strategies they might employ in the fall.

The basic alternatives were also clear: operate normally with basic precautions, open fully but with unusual or extensive responses to the pandemic, open in a ‘hybrid’ or limited format with extensive pandemic management strategies, or close completely for a semester.

If the pandemic had been coming to some form of natural end by June, a normal opening would have been the obvious strategy, but even in March no reputable authority or forecaster saw that as likely or possible. So the real debate almost from the beginning has been between full residential opening with extensive pandemic management, hybrid or limited openings that would see some or most students study online from their homes or off-campus housing, and a complete shutdown for the entire fall term.

It would be unfair to suggest that risk assessment be excluded from the making of this decision. Anyone contemplating the stakes in this decision would be instantly aware that if the wrong option were chosen, the results could include not only illness and death but also the chaos and financial costs of once again sending students home in an unplanned response to a deepening crisis.

But even here, that assessment ought to come after a deeper values-driven exploration of the question, “What do we care about the most in thinking about a semester? What do we value about our work together when it is operating normally?”

Let me lay out what it looks like to decide about covid-19 policies with those questions as the first ones you answer, rather than with post-facto narratives that are attached, sometimes awkwardly or mysteriously, to decisions that were reached with risk, liability and image maintenance in mind.

A university or college could decide that first and foremost, they value students having the most deeply transformative and empowering educational experiences possible within the time that they are matriculated students, that the major reason the institution exists and should exist into the future is the provision of this experience. Answering in this fashion doesn’t have to be a consumerist answer: an institution could also maintain, in a values-driven way, that what it means by experience is not just a simple transactional service.

If this is the primary driving value, then a university might well decide that it is important to open despite covid-19. But it is at this point that considering other values and considering pragmatic challenges to a values-driven decision should enter the picture. For example, what other values might a university hold that rival this one, or at least stand alongside it? A university might hold that it also values the production of knowledge in the form of scholarly research, clinical trials, and so on. It might also take the rhetoric of “community” seriously and value its faculty, staff and students not as employees and customers, but in terms of their human relations–and obligations–to one another. The university might regard its service to either a local public or some wider regional, state or national public to be deeply important, whether that is providing popular spectator sports for smaller cities and towns that otherwise have no local professional teams or it is as a civic benefactor, protector of open land, or source of cultural events.

It might even put one or more of those values above the provision of the most transformative educational experience for students, though I think few institutions would if it were put in these terms. Putting these things in terms of values and subordinating talk of revenues, liabilities or risks sorts values into primary and dependent columns. If the primary value is the most transformative and empowering education for enrolled students, then a university might decide that a precondition of that education is faculty and staff who are driven by their own autonomous and individual motivations to educate and produce knowledge, and that this in turn requires an institution that genuinely means it when it pronounces itself a community. Which in turn may uncover yet more dependent values. For most of us, the word community implies non-hierarchical relationships between people living near one another. When we mean it in a positive sense, most of us think about life in community in terms of mutual obligation to one another, as collective and shared responses to life’s challenges.

It might not decide that, of course: there is a possible (maybe even existing) university where the people in control of it have decided that delivering the best education for students requires maximum hierarchical efficiency or it requires strong conformist alignment behind a single culture for both employees and students or it requires the maximum frictionless delivery of a commodified service to individual paying consumers. Each of those might dictate a different position on opening in the fall of 2020.

But for the university that says, first, that it values a vision of education as both individually and collectively empowering and transformative; second, that it values the production of knowledge in service to wider publics; and third, that it values organizing the labor of both of those commitments in terms of community–the decision about opening in the fall of 2020 rests on the interrelationship of these three values and on what might keep them from being fully lived into.

They may have some intrinsic tensions. Scholarship and teaching frequently inform one another, but not invariably so. Communities that have to allocate a finite set of resources rarely make everyone feel happy even if they have completely democratic, consensus-driven deliberative processes. In the case of covid-19, other tensions enter in. What if some members of a community are more vulnerable to the disease? What if multigenerational communities specifically are vulnerable because of that? What if closure of other services or of critical infrastructure outside the university makes it impossible to produce scholarship? The whole point about enunciating values clearly as a starting place is that you get to see where they conflict with one another and you get to decide how to resolve those conflicts. That might mean putting one value above another. It might mean deciding how to resolve conflicts between different values on a situational basis while continuing to insist that they are otherwise equal in the obligations they place upon people making decisions.

Risk, liability and revenues now finally enter the picture in their proper place. Does the university need revenues that only reopening can provide in order to exist in six months or a year so that it can continue to fulfill those values? Do people in community need to avoid the danger of infecting one another in order to live up to what is meant by community? Is providing students a transformative and empowering education incompatible with increasing the chance that either they or their teachers and supporters might be sickened or that their families might be sickened as a result of contact with their children (either on delivery to campus or return to home)?

Is it ever right to think of a fulfilling and transformative education as putting the life or safety of students at risk? I think it is perfectly possible to answer this question as “yes”: we accept that athletics involves the risk of serious injury, we accept that scholarly research in the world may involve the risk of injury, assault or death from accident or from the unpredictable actions of other people, we accept that the stresses of education may produce suffering or mental debility. If we came to the conclusion that in a population of 5,000 students, around 0.1% of those students (five people, in a population that size) would attempt or commit suicide due in part to the stresses of the educational environment, most of us would still judge that the value of the education is such that we should continue to provide it. But we would also likely say that we need to put resources into reducing that number to zero or as close to it as possible–and that if particular features of that education were causally responsible for that small fraction of cases, they should be modified or abandoned. You don’t start from the risk, but you eventually put it into relation alongside the values. Even those of us who say, “It is never acceptable to put a single life at risk to educate a thousand people” need to start first with what we value and get to risk assessment afterwards.

For the most part, institutions influenced by the culture of neoliberalism don’t build that way. Values or principles get declared as retroactive narratives designed to explain or justify commitments that were made to protect a particular configuration of assets from a perceived set of risks or liabilities.

I think you can see the signs of that all over how higher education as a whole has stumbled into the fall of 2020, all the way back to Brown University President Christina Paxson’s early op-ed that served more or less as a template for what would follow across the sector. Students must come back, there must be testing and social distancing and mask-wearing, there must be plexiglass installed. But Paxson’s essay really doesn’t explain very thoughtfully why the provision of education at institutions like Brown is in fact important, because most of higher education takes for granted that what it offers is important and necessary without really thinking up to that importance from foundational values. You can almost sense a kind of fear behind that early summer thinking that this crisis might actually reveal that absence–that, forced to explain why we must do what we do, the sector as a whole would find itself fumbling for the deep convictions that could provide a stirring and persuasive answer, and that in the giving of this answer, the question of “What is to be done?” would begin to spell out its more specific answers.

If higher education has answered the question backwards, that’s because it has been working from an analysis of revenues and from a reactive analysis of risks as they appear, both of which represent an attempt to cope with a profound rupture in our lives as if it were a whack-a-mole game at a carnival. No answers that have deep meaning to sustain a community or that explain the reasons for the education to which we are so devoted can come via that route. I also think it is no coincidence that both forms of institutional process are understood in modern institutional life to be the most necessarily unknown and unshared information within the institution’s forms of self-knowledge–undercutting the value placed on community and on the production (and consumption) of formal knowledge as a public good. The contract, the lawsuit (or fear of one), the balance sheet are all documents that encode a particular vision of human values and human possibility, and they are by their nature kept from community view and are exempt from its deliberations. One could propose that by their natures, they enable all the other values that we might uphold. That, at least, deserves an open discussion of the kind that neoliberal culture has generally foreclosed.

Mucking Out Mead
https://blogs.swarthmore.edu/burke/blog/2020/07/28/mucking-out-mead/
Tue, 28 Jul 2020

Via Mohamad Bazzi of New York University, I learned last week about several articles published in the last few years by Lawrence Mead, also of NYU. I had a vague awareness of Mead as a kind of post-Moynihan “pathology of poverty” scholar who had had some influence over public policy in the 1990s, but otherwise I hadn’t really encountered his work in detail before. Bazzi was responding to a July 2020 article in the journal Society entitled “Poverty and Culture”. After I read it, I looked at a 2018 article by Mead in the same journal, titled “Cultural Difference”. The two substantially echo each other and are tied to a 2019 book by Mead, which I really dread to look at.

1. In “Poverty and Culture”, Mead is talking about “structural poverty” (though he doesn’t use the term), and yet does nothing to reference the very large body of comparative social science on structural poverty that has been published between 1995 and 2020. His references to poverty scholarship are entirely to work from the mid-1990s or before.

2. Paragraph 3 in the article chains together a set of assertions: low-income neighborhoods lack “order”, marriage is in steep decline, poor children do poorly in school, and “work levels among poor adults remain well below what they need to avoid poverty”. These require separate treatment, but they are chained together here to form a composite image: structural poverty causes “disorder”, it is tied to low rates of marriage and school performance, and it’s because the poor don’t work enough. This is sloppy inferential writing, but it is only an appetizer before a buffet of same.

3. Poverty arises, says Mead, from not working or from having children out of wedlock who are not supported. Not just here but throughout his article (and similar recent work), Mead seems completely unaware of the fact that in the contemporary United States, some people in structural poverty or who are close to the federal government’s official poverty line are in fact employed. It also takes some astonishing arrogance and laziness to say that arguments that racial bias, lack of access to education, or lack of access to child care play a role in causing structural poverty have been flatly and undebatedly disproven—with only a footnote to your own book written in 1992 as proof of that claim.

4. On page 2 (and in other recent Mead writings) we arrive at his core argument, which is basically a warmed-over version of Huntington’s “clash of civilizations”, though even that goes unreferenced; he has a few cites of modernization theory and then one of Eric Jones’ European Miracle and McNeill’s Rise of the West, again without acknowledging or seeming to even be aware of the vast plenitude of richly sourced and conceptually wide-ranging critiques of modernization theory and Jones’ 1987 book. He doesn’t even seem aware that McNeill’s own later work cast doubt on the idea that the West’s internal culture was the singular cause of European domination after 1500.

5. So let’s spend time with the intensely stupid and unsupportable argument at the heart of this article that vaguely poses as scholarship but in fact is nothing of the sort. Mead argues that Europeans who came to the Americas were all “individualists” with an inner motivation to work hard in pursuit of personal aspiration and that they all “internalized right and wrong” as individually held moralities, whereas Native Americans, blacks and “Mexicans absorbed by the nation’s westward expansion” were from the “non-West” and were hence conformists who obeyed authoritarian power and who saw ethics as “more situational and dependent on context”, “in terms of what the people around them expect of them”.

6. So today’s poor are “mostly black and Hispanics, and the main reason is cultural difference. The great fact is that these groups did not come from Europe…their native stance toward life is much more passive than the American norm…they have to become more individualist before they can ‘make it’ in America. So they are at a disadvantage competing with European groups—even if they face no mistreatment on racial grounds”. This, says Mead, explains “their tepid response to opportunity and the frequent disorder in their personal lives”.

7. This entire argument would not be surprising if you were reading the latest newsletter from Stormfront or the local chapter of the Klan. But as scholarship it is indefensible, and that is not merely a rejection of the ideological character of the argument. Let me muck out the stables a bit here in factual terms just so it is clear to anyone reading just how little Mead’s argument has to do with anything real.

8. Let’s start with the African diaspora and the Atlantic slave trade. In West and Central Africa between 1400 and 1800, what kinds of societies, in contact with the Atlantic world and drawn into the slave trade, are we dealing with in terms of moral perspectives, attitudes towards individualism and aspiration, views of work, and so on?

9. First off, we’re not dealing with one generically “African” perspective across that vast geographical and chronological space, and we’re not dealing with collective or individual perspectives that remained unchanged during that time. I’m going to be somewhat crudely comparative here (but what I’m calling crude is essentially about ten magnitudes of sophistication above Mead’s crayon scrawling: in his 2018 essay “Cultural Difference”, Mead says “most blacks came from Africa, the most collective of all cultures”.) Consider then these differences, quickly sketched:
a. Igbo-speaking communities in the Niger Delta/Cross River area between 1600 and 1800 famously did not have chiefs, kings or centralized administrative structures but were woven together by intricate commercial and associational networks, and in these networks both men and women strove to ascend in status and reputation and in wealth (both for themselves and their kin). There was a strong inclination to something we might call individualism, a tremendous amount of emphasis on aspiration and success and something that resembled village-level democracy.
b. Mande-speaking societies associated with the formation of the empire of Mali in the upper Niger and the savannah just west of the Niger, and with subsequent “tributary” empires like Kaabu in upper Guinea, were structured around formal hierarchies and around the maintenance of centralized states with an emperor at the top of the hierarchy. But they also invited Islamic scholars to pursue learning and teaching within their boundaries (and built institutions of learning to support them) and reached out to make strong new ties to trans-Saharan merchants. Moreover, the social hierarchies of these societies also had a major role for groups of artisans often called nyamakalaw: blacksmiths, potters, weavers, and griots or ‘bards’, who not only were a vibrant part of market exchange but who also had an important if contested share of imperial authority that involved a great deal of individual initiative and aspiration.
c. The Asante Empire, one of a number of Akan-speaking states in what is now Ghana, rose to pre-eminence in the 18th and 19th centuries, and both its rulers and its merchant “middling classes” showed a tremendous amount of personal ambition and investment in individual aspiration, as did their antagonists in the Fante states to the south, who were heavily involved in Atlantic trade (including the slave trade) and who were very much part of Atlantic commercial and consumer culture. Cities like Anomabu and Cape Coast (and others to their east) were commercial entrepots that in many ways resembled other cosmopolitan Atlantic port cities in Western Europe and the Americas.
d. (I can keep going like this for a long while.) But let’s throw in one more, just because it’s illustrative, and that’s the Kingdom of Dahomey. It was an authoritarian state—though so was most of “the West” in the 17th and 18th centuries, more on that soon—but it was also deeply marked by religious dissent from those who profoundly disagreed with their ruler’s participation in the Atlantic slave trade, as a number of scholars have documented, as well as very different kinds of personal ambitions on the part of its rulers.
e. The upshot is that you cannot possibly represent the societies from which Africans were taken in slavery to the Americas as conformist, as uniformly authoritarian, as fatalistic or uninterested in personal aspiration, or as unfamiliar with competitive social pressures. I think you can’t represent any of them in those terms (I’m hard-pressed to think of any human society that matches the description) but none of the relevant West or Central African societies do. It’s not merely that they don’t match, but that they had substantially different ideas and structures regarding individual personhood, labor, aspiration, social norms, political authority, etc. from one another.

10. Let’s try something even sillier in Mead’s claims (if that’s possible), which is the notion that “Hispanics” or “Mexicans” are “non-Western” in the sense that he means. Keep in mind again that the argument depends very much on a kind of notion of cultural difference as original sin—he doesn’t even take the easy Daniel Patrick Moynihan route of arguing that the poor are stuck in a dysfunctional culture that is a consequence of structural poverty—an argument that has a lot of problems, but it is in its way non-racial (it’s the same claim that undergirded J.D. Vance’s Hillbilly Elegy, for example): culture is a product of class structure which then reinforces class structure in a destructive feedback loop. Mead is pointedly rejecting this view in favor of arguing that cultural difference is an intact transmission of the values and subjectivities of societies from 500 years ago into the present, and that the impoverishing consequences of this transmission can only be halted by the simultaneous “restoration of internal order” (e.g., even tougher policing) and the black and brown poor discovering their inner competitive individualist Westerner and letting him take over the job of pulling up the bootstraps.

11. Right, I know. Anyway, so Mead has a second group of people who are carrying around that original sin of coming from the “non-West”, full of conformism and reliance on authoritarian external commands and collectivism and avoidance of individual aspiration: “Hispanics”, which at another point in the article he identifies more specifically as “Mexicans”. I would need a hundred hands to perform the number of facepalms this calls for. Let’s stick to Mexico, but everything I’m going to say applies more or less to all of Latin America. What on earth can Mead mean here? Is he suggesting that contemporary Latinos in the United States who have migrated from Mexico, are the descendants of migrants from Mexico, or are the descendants of people who were living within the present-day boundaries of the United States back when some of that territory was controlled either by the nation-state of Mexico or, earlier, held as a colonial possession of Spain, are somehow the product of sociohistorical roots that have nothing to do with “the West”?

12. Mead does gesture once towards the proposition that by “Western” he really means “people from the British Isles and northern Europe”; at other times, he seems to be operating (vaguely) with the conception of “Western” that can include anybody from Europe. He could always make the move favored by Gilded Age British and American racists and claim that Spain, Portugal, Italy, and Greece are not really Western, that their peoples were lazy collectivists who liked authoritarian control, and so on—it’s consistent with the incoherence of the rest of the argument, but he may sense that the moment he fractures the crudity of “Western” and “non-Western” to make more specific claims about the sociopolitical dispensations of 1500 CE that produced contemporary “cultural difference”, he’s screwed. In his 2018 essay, it becomes clearer why he would be screwed by this, because then he couldn’t contrast European immigrants from Italy and Eastern Europe in the late 19th Century with the really-truly “culturally different” black and brown people—if he drops Spain out of “Western” (by which he really means “white”), he’s going to lose his basis for saying that Giovanni DiCaprio had a primordial Western identity but Juan Castillo is primordially non-Western.

13. He’s screwed anyway, because there is no way you can say that Mexican-Americans are “non-Western” because they derive their contemporary cultural disposition from some long-ago predicate that is fundamentally different from that of white Americans and that this has nothing to do with the ways that societies in the Americas have structured racial difference and inequality. What does he even think this ancient predicate is? That Mexican-Americans are reproducing the sociocultural outlook of Mesoamerican societies that predate Spanish conquest? That Spain was non-Western, or that the mestizo culture of early colonial Mexico was totally non-Western? I can’t even really figure out what he thinks he is thinking of here: the Occam’s Razor answer is “well, he’s a bigot who wants to explain African-American and Latino poverty as a result of a ‘cultural difference’ that is a proxy for ‘biological difference’”, because his understanding of the histories he’s flailingly trying to invoke is so staggeringly bad that you can’t imagine that he is actually influenced by anything even slightly real in coming to his conclusions.

14. To add to this, he clearly knows he’s got another problem on his hands: the question of why Asian-Americans aren’t in structural poverty in the same way, considering that most of his Baby’s First Racist History Stories conceptions of “cultural difference” would seemingly have to apply to many East, Southeast and South Asian societies circa 1500 as well. (And to Europe too, but hang on, I’m getting there.) In his 2018 essay, he’s got some surplus racism to dispense on them: some of them “become poor in Chinatowns” (citing for this a 2018 New York Times article focused on “Crazy Rich Asians”), and saying that despite the fact that they do well in school, Asians do not “assert themselves in the creative, innovative way necessary to excel after school in an individualist culture” and “fall well short of the assertiveness needed to stand out in America”. But he’s not going to get hung up on them because they pretty well mess up his argument, much like anything remotely connected to reality does.

15. Another reality that he really, really does not want to even mention, because he can’t have any conceivable response to it, is “well, what about persistent structural poverty in parts of the United States where the poor are white? And not just white, but whiteness that has pretty strong Scots-Irish-English roots, like in parts of Appalachia?” In terms of how he is conceptualizing cultural difference, as a cursed or blessed inheritance of originating cultures five or six hundred years old, he’s completely screwed by this contemporary structural fact. He can’t argue that it’s just a short-term consequence of deindustrialization or globalization—the structural poverty of Appalachia has considerable historicity. It used to give white supremacists fits back in the early 20th Century too.

16. Moreover, of course, everything I’ve said above about the complexity of the West and Central African origins of people taken across the Atlantic as slaves goes very much for Europeans arriving in the Americas. The idea that the Puritans, for example, represent a purely individualistic Western culture of personal aspiration, neither ruled by nor conforming to external authority, is a laughably imprecise description of the communities they made. The sociopolitical and intersubjective outlooks derived from the local origins of various Europeans arriving in the Americas between 1500 and 1800 were substantially different. The states that many came from were absolutist, hierarchical, authority-driven, and the cultures that many reproduced were patriarchal, controlling, and not particularly anything like Mead’s sketch of “Western” temperaments, which is just a kind of baby-talk version of the Protestant work-ethic, a concept which actual historians doing actual research have complicated and questioned in a great many ways. Moreover, as many scholars have pointed out, the conflicts between these divergent origins were substantial until many colonists found that the threat of Native American attacks and slave revolts pushed them towards identifying with a common “white” identity.

17. Speaking of slavery, it’s another place where the entire premise of Mead’s article is just so transcendently awful and transparently racist. Mead is arguing that somehow the cultural disposition of a generic “Africa” survived intact through enslavement, which even the most enthusiastic historian of black American life would not try to claim for more positive reasons, and that slavery had no culture-making dimension in its own right. The debate about African influences, “Africanisms” and so on in the African diaspora is rich and complicated and of long standing among scholars who actually do research, but that same research amply documents how the programmatic violence of slavery aimed to destroy or suppress the diverse African heritage of the enslaved. That research also documents the degree to which Africans in the Americas participated in the making of new creole or mixed cultures alongside people of European, Native American, and Asian descent. It’s easy to see why Mead has to make this flatly ridiculous claim and avoid seeing slavery as a culture-making (and culture-breaking) system, because it leads right away to the proposition that structural poverty among African-Americans has causal roots in enslavement, in post-Civil War impoverishment, in racial discrimination and segregation in the 20th Century. It also takes some spectacular, gross misperception, by the way, to see slave-owners collectively as canonical examples of “Western” hard-working, aspiration-fulfilling individualists. Right, right, having a hundred slaves plow your fields for you under threat of torture and death is the essence of inner-driven individualism and hard work.

18. I’m leaving completely aside in all of this an entirely different branch of absurdity in the article, which is that Mead says nothing about growing income inequality and lack of social mobility in the United States over the last thirty years, and nothing about what life is actually like for people who are working minimum wage jobs with all of what he calls “Western” motivations—with an individualist sensibility, with aspirations for improvement, and so on. He might say that getting into the historical details about Western and non-Western cultural differences is just beyond his remit in a short article connected to a long project. I don’t think he can say that legitimately, because extraordinary claims call for extraordinary evidence, even in a short article. But there is no way that he can excuse not citing or even being aware of the last thirty years of social science research on structural poverty in the United States. The footnotes in both his 2020 article and his 2018 article are like time-capsules of the 1990s, with the occasional early-2000s citation of scholars like Richard Nisbett.

19. I’ve bothered to lay all this out because I want people to understand that many critiques that are dismissed breezily as ideological or “cancel culture” derive from detailed, knowledgeable, scholarly understandings of a given subject or concept—and that in many cases, if a scholar or intellectual is arguing that another scholar should not have a platform to publish and speak, it is because the work they are producing shows extraordinary shoddiness, because the work they are producing is demonstrably—not arguably, not contentiously, but unambiguously—untrue. And because it is so dramatically bad, that work has to raise the question of what that scholar’s real motivation is for producing that work. Sometimes it’s just laziness, just a case of recycling old work. That isn’t anything that requires public dismissal or harsh critique.

But when the work is not only bad, but makes morally and politically repellant claims, it’s right to not merely offer public criticism but to raise questions about why a respectable scholarly journal would offer a place to such work: it mocks the basic ideals of peer review. It’s right to raise questions about why a prestigious university would regard the author of such work as a person who belongs on its faculty and tout him as an expert consultant in the making of public policy. That may be an accurate description of his role in setting policy on poverty in the past, and his past work may possibly not be as awful as this recent work (though the contours of some of this thinking are visible, and reveal anew just how deeply flawed the public policy of the Clinton Administration really was). This is not about punishing someone for past sins, nor for their political affiliations. It is about what they have chosen to put to the page recently, and about the profound intellectual shoddiness of its content, in service to ideas that can only be called racist.

Knowing Better
https://blogs.swarthmore.edu/burke/blog/2020/05/19/knowing-better/
Tue, 19 May 2020

I’m struggling to process my own discomfort at the thought of either cancelling a fall semester or holding it only online with the primary intention of protecting the health of faculty and staff.

Assuming that the still-fragmentary data about the pandemic holds somewhat true, students in the 18-22 age range would be right to think that the risk to their own health from gathering together on a residential campus this fall is relatively small. They’re not invulnerable, of course–there are people in that age range who are immuno-compromised, there are people in that age range who have gotten very sick or died from covid-19 without apparent vulnerabilities, and there is the possibility that even asymptomatic or lightly symptomatic cases of coronavirus may pose unknown long-term health threats given how little we really know about the disease. On the other hand, one thing we do know is that it’s very contagious. I do not think it’s likely that colleges and universities can have a testing regimen sufficient to ensure that everyone who comes to campus is not an infectious carrier. By fall, I expect that a much larger number of people will have been exposed to it, whether or not campuses reopen. If they reopen, it’s almost certain that covid-19 will be a constant threat during the semester.

The major threat would be to older faculty and to staff who have regular contact with students. We could continue to hold most of our meetings remotely and stay away from each other, but if students are here, the people who teach them, serve them food, clean the buildings, attend to their mental and physical health, counsel them on academic and community matters, discuss their financial aid, etc., will inevitably be at some risk of exposure to a large community pool of potential carriers, even with some form of PPE (a non-trivial thing to secure in sufficient quantities in and of itself).

I’m a fat guy with high blood pressure in his mid-50s, so this is a meaningful threat to my survival. I should be, rationally, all for anything that will allow me to continue in relative isolation while still getting paid and doing as much of my job as I can in ways that are as creative and professional as I can manage as long as possible. And rationally, I am.

My discomfort is in the contrast between that future for me and my wider society. Many people have proposed that this is a national and global challenge that compares in its intensity and exigency and unpredictability to wartime. A few of the people using that structure of metaphor should probably think again about it–our utterly failed national leadership is just amplifying its failure when it talks in these terms. But mostly it’s meant sincerely and mostly I take it to heart. It’s because I take it to heart that I’m uncomfortable.

I’m uncomfortable because closing major institutions and workplaces (academic and otherwise) through the fall and possibly even longer–finding ways for professionals and white-collar employees to continue working productively from a distance while likely furloughing or terminating the employment of people who can’t work remotely–doesn’t feel like wartime to me. It doesn’t feel like wartime that I should be solicitously protected from a risk to my health and a risk to my livelihood at once while some people are fired and other essential employees are compelled to take risks, often for little to no economic reward and with little national support beyond the same empty gladhanding we have given men and women sent to die in misbegotten wars since 2001–grocery clerks, delivery people, health care professionals, farm workers, meatpackers, police and fire, and so on. Wartime means shared sacrifice, shared danger, shared risk.

If we can’t all stay home and work on laptops–and plainly we can’t–there is part of me that thinks we all should be on the same frontlines, in the same foxholes, enduring the same bombardments. Not without precautions–masks, distancing, hand-washing, the whole thing. Not without the equivalent of 4F–the immuno-compromised, the highly vulnerable, in all industries and jobs given leave to stay home and be paid securely for the duration. But the rest of us–even me, obese and high blood pressure and all–out there like everyone else. Not for the sake of “the economy”, which needs a total transformation. Not for the 1%, not for anyone’s political prospects. But just as there has been solidarity in being apart to stretch out the curve, if by September some of us are in the soup of contagion with no choice (or in the abyss of unemployment in an especially cruel and unequal national economy), I feel as if there should be solidarity in the inescapability of threat. And I believe enough in the mission of my work to think that my students deserve to continue their studies, and to continue them in a format better than online–to think that there is a value in facing this risk. At least as much value as delivering packages, stocking shelves, collecting garbage, producing food and other services we have deemed so essential (if poorly compensated) that we feel they must continue regardless. I’m in no rush to say that a college education is inessential or can be delayed without cost, and not merely because that’s my meal ticket. I honestly believe it, more than ever with my own child in college.

I know there’s a lot wrong with these feelings, and that many of you feel very differently. Give me a moment and I will feel the same: that we should continue to shelter as long as possible, that no job is worth dying for, that we should not for a moment sanction the degree to which our systems have failed us all in the face of a deeply foreseeable, inevitable crisis by numbly accepting a hollow rhetoric about shared sacrifice and duty. Indeed, if you follow the wartime metaphor, this has always been the problem for dissenters and social critics in wartime–to seem to deny or dismiss the heroic willingness of soldiers to die and the homefront to endure shared hardship by refusing the call to unity. And yet the metaphor has a pull, and all the more because this crisis at least does not involve the contingent failure of the powerful to make peace with an enemy they did not have to fight. We could have been so much better prepared, but this crisis will come to humanity now and again no matter what we do, all the more so in the Anthropocene, as life (including pathogens and parasites) evolves to human bodies and systems as its primary ecosystem. This is one of the few existential crises that should put us in radical solidarity with one another.

So I grapple. I don’t want any of the short-term futures that September may bring. I can see the reasonableness of the ones I would guess to be most likely. I feel the pull of an unreasonable desire for something else.

An Actual Trolley Problem
https://blogs.swarthmore.edu/burke/blog/2020/03/20/an-actual-trolley-problem/
Fri, 20 Mar 2020

I’ve always seen a certain style of thought experiment in analytic philosophy and psychology as having limited value–say, for example, the famous “trolley problem” that asks participants to make an ethical choice about whose life to save in a situation where an observer can make a single intervention in an ongoing event that directs inevitable harm in one of two directions.

The problem with thought experiments (and associated attempts to make them into actual psychological experiments) is that to some extent all they do is clarify what our post-facto ethical narrative will be about an action that was not genuinely controlled by that ethical reasoning. Life almost never presents us with these kinds of simultaneous, near-equal choices, and we almost never have the opportunity to reason clearly in advance of a decision about such choices. Drama and fiction as well as philosophy sometimes hope to stage or present us with these scenarios, either to help us understand something we did (or that was done to us) in the confusion of events, or perhaps to re-engineer our intuitions for the next time. What this sometimes leads to is a post-facto phony ideological grandiloquence about decisions that were never considered in their actual practice and conception as difficult, competing ethical problems. Arthur Harris wasn’t weighing difficult principles about just war and civilian deaths in firebombing Dresden; he was wreaking vengeance, plain and simple. Neoliberal institutions today frequently act as if they’re trying to balance competing ethical imperatives in a purely performative way en route to decisions that they were always going to make, that were always going to deliver predictable harms to pre-ordained targets.

But at this moment in late March 2020, humanity and its various leaders and institutions are in fact looking at an honest-to-god trolley problem, and it is crucial that we have a global and democratic discussion about how to resolve it. This is too important to leave to the meritocratic leaders of civil institutions and businesses, too important to be left to the various elected officials and authoritarian bureaucracies, too important to be deferred to just one kind of expertise.

The terms of the problem are as follows:

Strong national quarantines, lockdowns, and closure of nonessential businesses and potential gathering places in order to inhibit the rapid spread of the novel coronavirus that causes COVID-19 will save lives in all countries, whether they have poorly developed health infrastructures, a hodgepodge of privately-insured health networks of varying quality and coherence, or high-quality national health systems. These measures will save lives not by containing the coronavirus entirely but simply by slowing its spread and distributing its impact on health care systems that would be overloaded even if they had large amounts of surplus capacity. The overloading of health care facilities is deadly not just to people with severe symptomatic coronavirus infections but to many others who require urgent intensive care: at this same moment, there are still people having heart attacks, life-threatening accidental injuries, poisonings, overdoses, burns from fires, flare-ups of serious chronic conditions, and so on. There are still patients with new diagnoses of cancer or undergoing therapy for cancer. There are still people with non-COVID-19 pneumonias and influenza, still people with malaria and yellow fever and a host of other dangerous illnesses. When a sudden new pandemic overwhelms the global medical infrastructure, some of the people who die or are badly disabled, who could otherwise have been saved, are not people with the new disease.

Make no mistake: by the time this is all said and done, perhaps seventy percent of the present population of the planet or more will likely have been exposed to and been carriers of the virus, and it’s clear that some percentage of that number will die regardless of whether there was advanced technology and expertise available to care for them. Let’s say it’s two percent if we can space out the rate of infection: that is still a lot of people. But let’s say it’s eight percent, including non-COVID-19 patients who were denied access to medical intervention, if we don’t have strong enforced quarantines at least through the first three months in which the rate of infection in any given locale starts to rise rapidly. That’s a lot more people. Let’s say that a relatively short period of quarantine at that level–three months–followed by moderate social distancing splits the difference. A lot of people, but fewer than in a totally laissez-faire approach.
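To make the scale of those illustrative percentages concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures given above; the world-population number is my assumption for illustration, and none of this is an epidemiological model.

    # Back-of-the-envelope arithmetic for the scenarios sketched above.
    # The population figure is an assumption; the rates are the post's
    # illustrative numbers, not epidemiological estimates.

    WORLD_POPULATION = 7_800_000_000  # rough 2020 world population (assumed)
    EXPOSED_SHARE = 0.70              # "perhaps seventy percent ... or more"

    def projected_deaths(fatality_rate: float) -> int:
        """Deaths among the eventually exposed at a given overall fatality rate."""
        return round(WORLD_POPULATION * EXPOSED_SHARE * fatality_rate)

    flattened = projected_deaths(0.02)    # spaced-out infections, ~2 percent
    unmitigated = projected_deaths(0.08)  # overwhelmed systems, ~8 percent

    print(f"Flattened curve: ~{flattened:,}")    # ~109,200,000
    print(f"Unmitigated:     ~{unmitigated:,}")  # ~436,800,000
    print(f"Difference:      ~{unmitigated - flattened:,}")

Even on these crude assumptions, the gap between the two scenarios runs to hundreds of millions of lives, which is the whole force of the trolley problem that follows.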

Against that, there is this: in the present global economy, with all its manifest injustices and contradictions, the longer the period of strongly enforced quarantine, the more another catastrophe will intensify, one that will destroy and deform even more lives. There are jobs that must continue to be done through any quarantine. Police, fire and emergency medical technicians must work. Most medical personnel in emergency care or hospitals must work. Critical infrastructure maintenance, all the way down to individual homes and dwellings, still has to be done–you can’t leave a leaking pipe in the basement alone for four months. Banks must still dispense money to account holders, collect interest on loans, and so on. And, as we’re all discovering, there are jobs which can be done remotely in a way that was impossible in 1965 or 1985. Not optimally from anyone’s perspective, but a good deal of work can go on in that way for some months. But there are many jobs which require physical presence and yet are not regarded as essential and quarantine-proof. No one is getting routine tooth cleaning. The barber shops are closed. Restaurants and bars are closed. Ordinary retail is closed. Amusement parks and concert halls are closed. All the people whose lives depend on those businesses will have no money coming in the door. Three months of that might be barely survivable. Ten months of that is not. Countries with strong social-democratic safety nets have some insulation against the damage that this sudden enforced unemployment of a quarter to a half of the population will do. Countries like the United States with almost no safety nets are especially exposed to that damage. But the world can’t go on that way for the full length of time it might take to save the most lives from the coronavirus pandemic. And make no mistake, this will cost lives as well. Quite literally from suicide, from sudden loss of access to shelter and health care, from sudden inability to afford the basic necessities of everyday life. But also from the loss of any future: the spiralling catastrophe of an economic downturn as grave as the Great Depression will deform and destroy a great deal, and throw the world into terrifying new disequilibrium.

It cannot be that saving the most lives imaginable from the impact of the pandemic is of such ethical importance that the destructiveness of the sudden collapse of the world economy is unimportant. It cannot be that business as usual–already deformed by inequality and injustice–must march forward over the deaths caused by the unconstrained, unmanaged spread of COVID-19. Like many people, I find this problem not at all abstract. I’m 55, I have high blood pressure, I have a history of asthma, I’m severely overweight, and when I contract the disease, I may well die. I have a mother that I love who is almost 80, aunts and uncles whom I love who are vulnerable, I have valued colleagues and friends who are vulnerable, and of course some who may die in this have no pre-existing vulnerabilities but just draw a bad card for whatever reason. But there has to be a point where protecting us to the maximum degree possible does more harm to others in a longer-lasting and more devastating way.

And this trolley problem cannot be left to the civic institutions and businesses that in the US were the first to act forcefully in the face of an ineffective and diffident national leadership. Because they will decide it on the wrong basis and they will decide it in a way that leaves all of us out of the decision. They will decide it with lawyers in closed rooms, with liability and insurance as their first concerns. They will decide it following neoliberal principles that let them use the decision as a pretext to accomplish other long-standing objectives–streamlining workforces, establishing efficiencies, strengthening centralized control.

It cannot be left to political authorities alone. Even in the best-case scenario, they will decide it in closed rooms, following the technocratic advice of experts who will themselves stick to their specialized epistemic networks in offering counsel: the epidemiologists will see an epidemic to be managed, the economists will see a depression to be prevented. In the worst-case scenario, as in the United States, corrupt leaders will favor their self-interest, and likely split differences not out of some transparent democratic reasoning but as a way to avoid responsibility.

This has to be something that people decide, and that people are part of deciding. For myself, I think that we will have to put a limit on lockdowns and quarantines, and that limit is likely to be something like June or July in many parts of the United States and Europe. We can’t do this through December, and that is not about any personal frustration with having to stay at home for that length of time. It’s about the consequences that duration will wreak on the entirety of our social and economic systems. But it is not anything that any one of us can decide for ourselves as a matter of personal conscience. We the people have to decide this now, clearly, and not leave it to CEOs and administrators and epidemiologists and Congressional representatives and well-meaning governors and untrustworthy Presidents. This must not be a stampede led by risk-averse technocrats and managers towards the path of least resistance, because there’s a cliff at the end of all such paths. This is, for once, an actual trolley problem: no matter what we do, some people are going to die as a result of what we decide.

Free College: Not So Extreme
https://blogs.swarthmore.edu/burke/blog/2020/02/17/free-college-not-so-extreme/
Mon, 17 Feb 2020

I’ve complained that for the most part, self-identified centrists and moderates prefer not to engage in direct arguments about their policy preferences in this election, but instead to argue about “electability”–essentially laundering their preferences through mute off-stage proxies, some other group of voters who won’t accept an “extremist” policy proposal simply because it’s extreme.

It’s not as if any given proposal is intrinsically extreme. (Well, up to a point: there are ideas that might in some absolute sense be deemed fringe–say, banning any policy that accepts that the Earth is round and that the solar system is heliocentric.) “Extreme”, for the most part, is a judgment about how far a given idea is from some perceived stable consensus or status quo. As such, it’s more a marketing term than an empirical description: you make something extreme by describing it as such, over and over again, much the same way that you remind people that Colgate has new whitening agents and an improved fluoride formula.

Let’s take by way of an example the proposition that making public higher education free to all citizens and residents is a really extreme proposal. So extreme that it has been the normal public policy of many other liberal democracies (and a few non-democracies). So extreme that up to 1980 or so, it was in effect the policy of most states in the United States, in that there was a sufficient level of public funding for universities and community colleges that most could, if they chose, attend college for very little.

How did that become “extreme”? Through a steady thirty-year effort to defund public higher education, which simultaneously raised the cost to prospective students while degrading the quality of the service it provided. Why exactly did we do that? Largely because we had thirty years of both Republican and Democratic administrations that turned away from public goods in general while cutting taxes, thirty years of austerity talk about inefficiencies and the need for private competition, and thirty years of educated elites trying to slow increasing access to higher education as union-protected high-wage manufacturing was transferred overseas and high-paying professional work that required educational credentials became the only alternative to low-paying service jobs. Thirty years of using higher education as the false whipping-boy explanation for a major structural realignment of the economy (there aren’t enough engineers! there are too many poets and anthropologists!) while starving higher education in the process.

That’s how “public higher education should be free or nearly so to citizens and residents” became an extreme idea. It isn’t that way naturally: it was made extreme, a parting gift from the boomers and their parents who benefited from the idea back when it wasn’t extreme. It’s as if a thief broke into your house, stole something valuable, and then claimed you shouldn’t have it back because you never could have afforded it in the first place.

Inasmuch as any moderates care to actually engage the proposal on its merits, they have complained that it is regressive, meaning that free access to public higher education should be means-tested and the wealthy should have to pay. That sounds reasonable enough, but in fact this is the tortuous logic that has brutalized liberal democracy for the last three decades.

Before we get to the practicalities of it, at a more philosophical level, what looks like a gesture that targets income inequality in fact sanctifies it as foundational. When you prorate access to public goods, you establish that there are and must always be tiers of citizens–that inequality is fundamental. Platinum-tier citizenship, Gold-tier citizenship, etc. It effectively amends the Declaration of Independence: that all men are created unequal. What is public should be public to all: it is the baseline of equality.

That the wealthy can buy more on top of that is true–but don’t write that into the baseline. The rich can buy more legal representation than the state can provide, they can buy concierge health care on top of a public provision, they can buy an expensive private education. Yes. Figuring out how to keep that from cancelling out equality of opportunity is a difficult, challenging problem. But you do not answer it by writing inequality in at the deepest level of provision.

The wealthy already legitimately pay their fair share in a progressive tax system–if it’s actually used effectively as a way to redistribute excessive wealth and check run-away inequality.

The complaint that “free college” is regressive, moreover, feels jury-rigged to this political moment. The people who raise this argument against Sanders and Warren curiously seem to think this is the one ad hoc case where this concern is important. They are not arguing in favor of universal means-testing in the provision of public goods. Should we start charging wealthy people more to enter national parks? To send their children to public secondary schools? Should their E-ZPasses charge them twice as much to drive on an interstate highway? Should they have to pay a fee to use the Library of Congress website? Be charged a fee in order to send a letter to their representative in Congress? Why not? They can afford it, after all. Isn’t it regressive to allow them to generally access public goods on the same basis as everyone else?

The idea that free public college should only be for those who really need it, moreover, is in practical terms the same kind of terrible idea that Democratic moderates have been peculiarly in love with since Johnson’s Great Society programs. When you set out to create elaborate tiers that segregate the deserving poor from the comfortable middle class and the truly wealthy, you create a system that requires a massive bureaucracy to administer and a process that forces people into petitionary humiliation in order to verify their eligibility. You create byzantine cutoff points that become business opportunities for predatory rentiers. “Ah! I see you earn just $1,000 too much to qualify for free public college, and so will have to pay $5,000 a semester. Why don’t you consider taking on debt to attend my for-profit online school and we’ll spread that out for you? How about you hide that $1,000 in income using my $500/year accounting service? Try using your employer-offered system for tax-deferred payments into a special fund rather than receiving raises for the next four years!” Simplicity isn’t just about a basic idea of citizenship: it is also about efficiency, the very thing that neoliberal policy-makers supposedly revere so greatly and yet will so very often go to great pains to avoid.
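The cliff at that byzantine cutoff point is easy to show in miniature. Here is a hypothetical sketch in Python; the income threshold and the all-or-nothing subsidy design are made-up assumptions chosen to match the $1,000/$5,000 example above, not any actual proposal.

    # Hypothetical means-test cliff, with made-up numbers from the example above.

    THRESHOLD = 60_000            # assumed income cutoff for free tuition
    TUITION_PER_SEMESTER = 5_000  # assumed charge above the cutoff
    SEMESTERS_PER_YEAR = 2

    def annual_tuition_owed(income: float) -> float:
        """All-or-nothing subsidy: free below the threshold, full price above it."""
        if income <= THRESHOLD:
            return 0.0
        return float(TUITION_PER_SEMESTER * SEMESTERS_PER_YEAR)

    just_under = annual_tuition_owed(THRESHOLD)         # $0
    just_over = annual_tuition_owed(THRESHOLD + 1_000)  # $10,000

    # Earning $1,000 more triggers $10,000 in new charges: an effective
    # marginal "tax" of 1000% on that last $1,000 of income.
    print(f"Cost of crossing the cutoff: ${just_over - just_under:,.0f}")

A family a dollar under the line pays nothing; a family $1,000 over it owes $10,000 a year, which is exactly the incentive the predatory rentiers in the quotation above are selling their way around.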

Perhaps it is no great surprise that eye-rolling dismissals of supposedly utopian improvidence and hand-waving at proxies who are afraid of extremism are the preferred ways to engage proposals like “free public higher education”. Who would want to undo thirty years of rigging the conversation, after all? Or, for that matter, twenty or so years of close ties between centrist policy-makers and some of the economic interests that have benefited from the defunding of public higher education.

Harvest Time on the Whirlwind Farm
https://blogs.swarthmore.edu/burke/blog/2020/02/04/harvest-time-on-the-whirlwind-farm/
Tue, 04 Feb 2020

To some extent, people turn to omnicompetent forms of conspiracy theory when they cannot believe that anybody could be THAT incompetent.

People who are always and invariably against conspiracy theories tend to be that way first and foremost because omnicompetent conspiracy seems impossibly improbable and because it is a futile theory (you can’t oppose omnicompetence by definition; in fact, if omnicompetence is real, then being allowed to voice the conspiracy theory is part of the conspiracy).

In some cases, the two objections, improbability and futility, cross. It is truly hard to believe that four years after a deeply contested election that featured credible accusations about malign interference (including hacking attempts) in election security, and four years after an election revealed bitter divides within the Democratic Party that threaten the stability of a coalition required to defeat a man who is endangering democracy itself, the Democratic Party would turn to a small tech company called Shadow, a company with no real track record, to hastily build an app of questionable usefulness even IF it actually worked as planned, to be used in the first primary elections of the 2020 campaign–and, given reports a month ago that the app wasn’t working well and hadn’t been stress-tested, would fail to build an alternative procedure should the app fail. The chain of miscalculations involved does seem almost impossible to believe.

And yet so too is believing that any candidate actually running for the nomination could be operating as a Manchurian candidate so omnicompetently as to make that chain of miscalculations occur as part of a plan, or that the DNC leadership has suddenly achieved this kind of omnicompetence after decades of evident managerial fecklessness and gang-that-couldn’t-shoot-straight mistakes. I mean, if you’re really omnicompetent conspirators, aim high–just steal the election seamlessly, plant evidence to discredit those you want discredited, etc.

Somewhere in that intersection there may be improvident forms of hidden coordination, self-interested incompetence that might be called cronyism, perfectly innocent misplaced trust in technology, and a kind of structured helplessness that the tech industry has sought to produce in all of us. That may deserve to be called something other than conspiracy or incompetence. It is, unfortunately, gravely consequential, in a way that demands both that heads roll and that reforms be made, some of them far-reaching and substantial.

——————

Ilana Gershon’s excellent book Down and Out in the New Economy offers an analysis of the “gig economy” that is both subtle in its grasp of our historical moment and hugely consequential in its implications. Among other things, it gives me an unexpected window into understanding what may have happened in Iowa as the caucus officials turned to the app developed by Shadow.

When faculty talk about the damage done to our institutions and our profession by the rise of contingent labor in academia, we sometimes overlook that we are here, as in many things, only one piece in a larger puzzle. Two things have happened to most of the major professional workplaces that were a centerpiece of mid-20th Century life around the world (the university, the secondary school, the hospital, the law firm, the advertising agency, the major corporation, the governmental bureau or civil office). On one hand, governance and authority over the mission and operations of the institution and its employed professionals have been increasingly transferred to a series of dislocated, dispersed administrative organizations devoted to particular kinds of compliance. Some of those are statutory, some are a consequence of institutional membership in associational networks, and some are effectively from off-site or absentee owners of some of the operations of the institution. The authority of these external organizations over the institution is often projected into the institution via specific professionals on the institutional payroll whose work is largely the maintenance of compliance. The organizational chart shows those individuals working within the institution’s hierarchy and procedures, but in many ways they are equally subject to and solicitous of the separate external organization. The hospital manager who is the primary point of contact with insurance networks, the corporate executive who represents the private equity firm that recently bought the business, the academic dean charged with meeting the demands of an assessment organization, the investment manager who works as much for a hedge fund partner as he or she does for the institutional portfolio, the government official who is charged with managing procurement or who is the liaison to a PAC or other source of extra-governmental influence on policy-making.

At the same time, most of those professional institutions are off-loading much of the labor they once did and could still plausibly do out of their own staff and payroll onto outside consultants, facilitators, software developers, contract workers, and so on. In its early crude forms, this was “outsourcing”, the segmentation of the organization into geographically dispersed subsidiaries that could produce some labor very cheaply outside of the US or EU. I think this wave has now moved on to something more dispersed, less transparent, and more punctuated and uneven. This is the classic “gig economy” that Gershon has set out to investigate. From inside the institution, part of the logic of the gig is financial efficiency (the shedding of staff off the payroll), but I think it is more than that. I think it is also the management of risk, often by lawyers or legal professionals: necessary operations that entail risk if done incompetently or imprecisely are protected from claims of liability to some extent if they are devolved onto individuals and firms whose inner workings are private and to whom legal responsibility can possibly be redirected, along with less financially tangible forms of blame.

Gershon’s analysis is that as people transition into the gig economy, their relation with employing institutions changes. They are no longer offering their distinctive mix of intrinsic skills and human insight to the employer via a long-term contract. Gig workers are, Gershon observes, increasingly narrating their economic relations as if the gig worker were a business engaged in a business deal with another business. The worker is no longer identifying with the purposes and mission of the institution while employed by it, but is instead always thinking about the interests of their own “gig” brand, which align for as long (or short) as they may with the other business that pays them for services rendered.

There are ways in which this is neither bad nor good but simply different. But it has implications for the outcomes that institutions seek (or claim to seek), whether that is educating the next generation, healing the sick or injured, or delivering profit to shareholders. As an institution increasingly employs people who are essentially the intrusion of some other institution into its framework (the compliance professionals) and expels functions and tasks to be served by networks of consultants and subcontractors, it loses most or all control over the outcomes of its operations. It is subject to extra-institutional dictate in a way it is almost helpless to resist–“the call is coming from inside the house!”–while it has protected itself from both the expense and the risk of directly supervising (or being shaped by) people who carry out many functions that its mission or purpose require.

In fact, many institutions end up pairing another class of internal worker with the intruding compliance managers: the contact point for networks of consultants, facilitators and subcontractors. Much like the compliance worker, this person is not responsible to the institution. They’re responsible to the network that they manage. This has huge implications. It is not in the interest of the “internal gig manager” to put the institution’s needs or functions first, not least because the internal gig worker knows that tomorrow they could be back out in the network again, and it is the network that matters, the network that secures the next gigs. But more potently, if the internal gig worker wants the gig to continue, they actually have to actively degrade the capacity of their employing institution to carry out some functions. Because that’s what makes consultants and subcontractors necessary: the institution has failed, is failing, will fail to do this work on its own–it lacks some form of expensive expertise or some form of knowledge about the nature of the labor function that it formerly handled on its own.

—————

And here we return to the catastrophe of the Iowa caucuses. Whatever the specifics of the ways in which Shadow was employed to build an app that was designed to report the results of the caucuses–specifics that hopefully we will learn more about in the days and weeks to come–the ways in which both the national and state Democratic Party and an associated electoral administration have lost control of a vital function that once resided entirely within their organizational purview is familiar and haunting. And here I can no longer claim equanimity about the implications: this is an actively commissioned outcome deriving from a web of systemic shifts in political, economic and social life over the last forty years. Call it neoliberalism, or find a better name. Argue it’s three things, not one thing. Argue it’s intentional or incidental, interested or unexpected. That’s all fine. One thing it is not is good.

All over this country (and the world) for the last twenty years, tech companies have worked with increasing intensity and sometimes desperation to actively produce in other institutions a state of learned, professed helplessness, a proposition that everything they do must be transformed (or “disrupted”) by tech in the name of some underspecified (or wholly unspecified) better end. Along the way, tech companies and the managerial clouds that swirl around them like courtiers have appropriated languages of fairness, of equity, of objectivity, of efficiency, of empowerment and attached them to cycles of tech adoption and to endless, vague ideas about process and ‘best practice’. If you understand tech as being more than just an app or a digital tool or a computer, you can even see that some of these processes and adoptions are of rules, procedures, codes that are themselves a kind of organizational technology.

And it is the change in institutions overall that makes this ubiquity possible while amplifying the disastrous forms and modes of helplessness and surrender that come with that ubiquity. The tech to worry about here is not really first or only the big companies we all love to hate (Google, Facebook, Apple, Microsoft). It is the tech of the gig economy: the small firms (who are often using, in ways acknowledged and obscured, the product of the big companies). This is the tech that we subcontract for and assemble. What it does and how it is put together is a black box–a Shadow indeed–and that is often part of its value, as Cathy O’Neil observes in her book Weapons of Math Destruction. Bias, unfairness–or a miscounting of electoral outcomes–that happens algorithmically in a product developed by a small firm using the proprietary technologies of three big firms is protected by multiple layers of secrecy and obscuration, even from the subcontractors who delivered the product. All of us in institutions hire the consultants and facilitators and subcontractors because they’re former students, former associates, former (or present) parts of our gig networks. As we all become gig workers, we all think about the gig, not the mission or the purpose.

If that means a food company loses the ability to know why the romaine lettuce it buys is frequently contaminated with E. coli, that is bad for its customers and likely bad for the company. If that means all food companies sell a product composed of ten different layers of subcontracting, that is bad for everyone who eats commodified food. The compliance officers inside the company aren’t truly protecting the public interest–they’re hidden inside the institution and yet not answerable to it. The gig contact points inside the company aren’t really responsible to the company, and neither are the contractors they’ve hired. Nobody’s really responsible. Maybe some individual will be unlucky enough to be identified in a viral video and hashtagged into temporary oblivion, but the structures live on.

Iowa is all of this made truly and horrifyingly manifest. At the beginning of a national election that many citizens plainly feel is the most important election in their lifetime–and possibly one of the three or four most important in the history of this nation-state–a party organization lost control of one of its most important functions. It will be tempting to say that this must be a cunning, purposeful, self-interested conspiracy by a few, or a punishable kind of professional incompetence that was contingent, i.e., that could have been avoided. I strongly suspect instead that the Shadow we will uncover has fallen on us all, that all of us are involved in forms of labor towards valuable, important ends that our institutions have lost control over, and that none of us know quite how to walk back into the light of sovereignty and authority over the missions we value, the purposes we are called to, the responsibilities we revere.

Dialogue and Demand
https://blogs.swarthmore.edu/burke/blog/2019/08/01/dialogue-and-demand/
Thu, 01 Aug 2019

Why is a call for conversation or dialogue met so often with indifference or hostility?

It might seem that I am thinking about this question because of events peculiar to Swarthmore, but I could just as readily be addressing Johns Hopkins (the scene of protests against the creation of a private police force on campus this past spring), Wesleyan when I was an undergraduate in the 1980s, really higher education all the way back to the mid-1960s. It may seem that I’m talking about a challenge that is peculiar to academia, but in fact I think this is an issue for most contemporary civic and corporate institutions.

So what am I thinking about? Roughly speaking, the kind of impasse in the life of an institution where some group of people within the institution or reliant upon it are demanding concrete, specific changes in how the institution operates and the people with authority over the institution respond to that demand by calling for dialogue and conversation. This usually in turn infuriates or provokes the constituencies demanding changes and leads them to escalate or amplify their demands, which then in turn antagonizes, alienates or worries other groups who might have supported the initial demands but not the intensified or more militant requests, which leads to more people calling for some form of dialogue or deliberation, which then intensifies the us-or-them divide within the institution about the way forward.

I think this general dynamic has been described very well by Moises Naim in his book The End of Power. Naim starts by asking why people who are at the top of the hierarchy in many organizations and institutions–CEOs, college and university presidents, heads of executive agencies in government, leaders of non-profit community groups, and so on–frequently report that they feel powerless to act within their organizations beyond vague, broad or gestural kinds of leadership. The former president of the University of Virginia, Teresa Sullivan, described this view well in the midst of a controversial attempt by her board to displace her when she said that she and her peers invariably had to lead towards change slowly, through “incremental buy-in”. Even that is more active than many leaders of institutions, academic and otherwise, might put it–more typical perhaps is a description of leadership as custodial, as stewardship, on behalf of collectively-determined values or a mission that derives from the inchoate massing of all ‘stakeholders’ in the institution.

Naim observes that in private, leaders and their closest advisors are often not so sanguine. Instead, they express intense frustration about what they feel they can’t do. They can’t admonish or discipline people who are technically subordinate to them but too far away in the hierarchy for that admonishment to feel proportionate or fair. They can’t instruct a division or office within their organization to straightforwardly execute a policy that the leadership wants but the division opposes. They cannot quickly dispense with rules, regulations or even “traditions” that the leader and their close associates deem to be impediments to their vision of progress. They cannot undertake new initiatives unilaterally, no matter how sound they believe their own judgment to be. They can’t reveal the truth as they understand it from facts that are private or confidential.

Naim argues that the contemporary world is being compressed between two simultaneous developments. The first is that power has gotten “big”: that it is increasingly attached to large-scale, centralized and increasingly hierarchical institutions. The second is that power is “decaying”: that it is harder and harder to wield at scale, through a centralized apparatus, and from the top of hierarchies downward as a command exercise. It is harder in part because organizations now have internal structures as well as external constraints that cause this decay. What Naim observes is that people within institutions or dependent upon their actions are simultaneously being consulted or included or brought into dialogue and deliberation at the same time that they feel it is increasingly impossible for their suggestions, advice or observations to actually inform what their institutions do with power.

People know that these institutions are “big”: that the institutions do in fact routinely wield power. A college like Swarthmore year in and year out determines the academic outcomes of 1600 students; it hires, disciplines, tenures (or not) employees; it undertakes expensive construction projects with substantial economic implications; it participates in numerous collective or shared decisions across academia; it buys services and commodities; it invests and accumulates. But if you ask, it’s very hard to find anyone within the institution who ascribes the power to do any of those things directly and unilaterally to themselves or to their offices. The “big” capacity of an institution’s power comes from everywhere and nowhere. As a result, Naim suggests, there is only one form of actual influence over institutional action that most stakeholders, community members or citizens have left, what he calls “the veto”–that people can block or impede or frustrate institutional action. Not necessarily because they actually object that intensely to what is being proposed, but because it is the only action they can actually take in which their own agency is visible, important and has actual impact. In every other deliberative or active moment that people are supposedly included in and consulted about, there is no accountable tracing of whether or how their advocacy and their evidence has weighed on institutional power, and there are repeated encounters with decision-making processes that are either occluded or exclusive, and with accounts of decisions that are in no one’s hands, that are made but made from nowhere in particular. Even when you’ve been in “the room where it happens”, present at the scene where a decision was concretely made by people who have the power to decide, you often leave uncertain of what exactly happened and whether it’s going to be done as it was decided. You will also often not be allowed to speak at all about what was said, what was decided, or by whom. When people rise to block or impede decisions–to exercise the veto out of frustration–that further decays power while doing nothing to change its concentrated ‘bigness’.

———

I think the descriptive usefulness of Naim’s analysis is all around us now. The 2019 American discourse about the “deep state” and desires for various forms of authoritarian or direct-rule escape from its supposed clutches seems entirely consistent with the picture that Naim laid out in 2013. The prevalence of what is now being called “cancel culture” across social media is another manifestation of Naim’s veto, arising from people who feel that in some fashion they are being told they are included in processes that select or identify cultural and political prominence and authority, if only through access to algorithms that rank and rate, but who feel as if the only real power they have is to reject a selection that has been made without real, transparent and accountable structures of representation and consultation.

I suspect that every working professional across several generations both feels this sense of exclusion and is aware of how they have excluded other people within their own institutional worlds. After twenty-five years of working at my present institution, I can cite innumerable examples of processes in which I have been formally included, cases where my opinion has been solicited, and cases where I’ve taken advantage of what are supposed to be always-open channels for communication to offer feedback in which the difference between my participation and my absence is impossible to discern. Sometimes I’ve seen a point I raised emerge almost entirely verbatim from one of the people involved in the earlier consultation two, five or ten years later with no perceptible connection to that earlier process. Mostly, my participation–sometimes about issues or decisions that I think are highly consequential or urgent–disappears without a trace (often simultaneously with confirmation that what I believed to be urgent was in fact urgent). Committees spend a year (or more) working on a policy that afterwards disappears into trackless invisibility–where it’s not even clear whether administrative leadership thought the policy impossible or risible, whether they earnestly meant to implement it but then the person who would have had responsibility left, or whether it was simply forgotten.

This isn’t distinctive to me. We all feel this way. Women feel this way even more. People of color feel this way even more. We all have had the experience of sounding an alarm that no one hears. Of providing advice that rests on decades of experience that seems to be ignored. Of trying to push towards an outcome that would satisfy many only to watch dismayed as an outcome that satisfies almost no one is chosen instead.

If we have power or responsibility within an institution, many of us have been on the other end. We’ve been the void that doesn’t answer, the soothing managerial assurance that all opinions are helpful, the person who absorbs and later appropriates a solution or idea that someone else advocated. And thus most of us know well why participation in a process doesn’t scale smoothly into an impact on a process. Think of job searches where you have been on the inside of the final decision but where many people provided feedback on a candidate. Some of that feedback you ignore because the person providing it didn’t see all the candidates or is missing some critical piece of information (that probably wasn’t available). Some of that you consider very carefully and respectfully but end up simply disagreeing with. Some of that you dismiss out of hand because the person consulted is someone who had to be consulted but who is widely regarded as wrong or irresponsible. Some of it you ignore because it’s expressed in a cryptic or confusing way. Some of it you ignore because you’re just really busy and the decision is already robustly confirmed by other information, so why keep discussing it?

None of which you can tell someone about. The people who made the decision can’t say:

a. You didn’t work hard enough for us to value your input equally.
b. We really did consider what you said, but here’s why we disagreed with you, specifically.
c. We asked your feedback because you’d be insulted if we didn’t but we don’t respect your views at all.
d. We had no idea what you meant and we didn’t have time to sort it out.
e. Our cup overfloweth: thank you for the advice, but we turned out to have as much as we needed before we even got to you.

You can’t even say the one thing that would be comforting (we considered your advice, and disagreed) because then you have to provide an external, visible transcript of a conversation that it is unethical (or illegal, even) to transcribe and circulate.

——————-

The number of decisions that power considers impossible to transcribe or even describe has grown along with power itself. Here I think we arrive at the heart of the problem with “conversation” as an alternative to “demands”.

Take my previous example of a job search in academia. Most of the people solicited for opinions understand why there is no account of whether or how their opinion mattered, except perhaps students. Why there will be no “conversation” about the decision after it is made, and why the parties to the conversation will be limited and sequestered. But even in this fairly clear case, academic departments could probably do a better job with students. In one hiring process in the last six years, we chose a candidate who was not consistently the #1 preference of the students that we asked to participate. So I met as department chair with them afterwards to talk about how a decision like this gets made, and to give them a carefully limited version of our reasoning. I knew there was a risk involved that one or more students would indiscreetly repeat what I’d said so that it would get back to the candidate, so I didn’t share anything too private. The important thing for me was to talk frankly about how and why hiring decisions unfold as they do, including pointing out that these are decisions where typically ten to twenty candidates are very nearly evaluatively equal–if nothing else because the students who may be considering academia need to understand that about the labor market at the other end.

I also explained the legal constraints on anything connected to personnel decisions and then why most of us also find it unprofessional to talk about a colleague directly with students, most of the time. And we talked a bit more beyond that about why student impressions of faculty are sometimes perceptive and useful and sometimes simply wrong. I pointed out that I once proudly asserted decades ago that a graduate professor I knew was reticent because of the lingering effects of McCarthyism on older academics, which turned out to be the kind of thing that was ever so vaguely right as a generic guess and ever so completely wrong about the actual person, as I learned on longer acquaintance.

This is what I think a “conversation” as an alternative to a “demand” might look like. It may be that many people have conversations of the kind I just described, as ad hoc, one-off, personal and effectively private exchanges that do not become a public fact about power and authority within the institution. But the public or shared or visible spaces within an institution are not routinely alive with this sort of conversation. It isn’t shared.

You could suggest that my approach in this case was managerial: that I chose to talk with the students in order to manage the possibility of their unhappiness in response to a perceived exclusion from decision-making. I think you’d be right that this is how offers of dialogue or conversation are often perceived by stakeholders who want to change the policies or culture of their institutions.

What is missing from these offers, what makes them not-really-conversations that only fuel the movement towards what Naim calls the veto, are three major attributes:

a) Too much of the subject of the conversation is veiled or off-limits.
b) The powerful do not fully disclose or describe both the constraints on their actions AND their own strong philosophical or ethical commitments.
c) When disclosed, the constraints are not up for debate; there is nothing contingent in the conversation.

In effect, what is missing is what defines a democratic public sphere. Which is an absence that nullifies the offer of a conversation or a dialogue as a part of decision-making or life in community. You can’t have a conversation that’s meaningful, trustworthy and part of a process of deliberation and decision-making in the weird kind of fractured “public” that academic institutions, civic institutions and businesses maintain, where information flows in trickles or pools in hidden grottos, in which most of the participants can’t discuss even a small proportion of what they know or disclose the tangible reality behind most decisions that have been made or are being contemplated.

———-

Title IX/sexual assault conversations in higher education are a major example of this issue, not just at Swarthmore but almost everywhere. In the case of Title IX, I am for the most part neither a petitioner nor the powerful, so I can see to some extent both why so many institutions trend towards Naim’s veto and why it is hard to have the conversations that might approach power differently.

Let’s start with what is off-limits. The specifics of the last decade of actual cases can’t be discussed in any kind of public or even private conversation within institutions. That would usually be illegal (several kinds of illegal), it would usually be an invitation to a lawsuit (several kinds of lawsuit), and it would broadly be considered to be unethical by almost everyone with an interest in the issue. And yet the generalities of those specifics are precisely what is at stake. What can the forms of centralized, hierarchical, ‘big’ power within academic institutions plausibly do about what’s in those specifics? How can anybody talk about that question without granular, particular attention to how it would work in specific cases, at the moment of the incident and its aftermaths?

That’s not all that is off-limits. Mostly the people with power over the disposition of cases or the setting of policy cannot fully disclose or discuss what they’re being told within one set of meetings: what the lawyers say about what can or cannot be said. Within another set of meetings: what trustees say about what they think should or should not be done. Within another set of meetings: what the specific managers of specific cases believe or think about those cases at various stages of investigation or judgment or therapy. Again, mostly because they can’t. In most of these cases, the legal constraints are real and specific. But all of those off-limits deliberations and conversations erupt into the public space, sometimes even as quotations that can’t be attributed or even acknowledged as quotations. So legal advice, even if it might be questionable or flawed, can’t be examined or questioned directly–it often can’t even be labeled as such. Practitioner beliefs about best practices in counselling or therapy can only be described in the vaguest ways, shorn of all the specifics that would make them valid or invalid, helpful or questionable.

The fracturing of this not-public runs all the way down to the bottom of this hoped-for conversation. No one–including student advocates–gets to a point of disclosure about the deeper fundamentals of their views on any of the issues at stake–about sexuality, about justice, about gender, about equity, about safety and freedom, about the rights and responsibilities of institutions and of those who work for and study within institutions. There is no incentive or reward to disclose if there is no real possibility of tracing how a dialogue will or will not inform decisions and policies. Nobody wants to start a conversation in which they will lay their deepest convictions out on the table if they have no sense at all of what will be done with or to those exposed beliefs and narratives after everyone leaves the table. Conversation is an intimate word, but the familiarity that even small colleges allow between students, faculty and administration is not intimate familiarity between equals who have consented to mutual exposure. What administrator would ever want to say clearly what they think and know to students who might turn around and demand the termination of that employee? What student would ever want to have a genuinely informing, richly descriptive and philosophically open conversation about sexuality, violence and justice with an administrator if the student is the only person obliged to participate in the conversation in that spirit?

The only hope for those kinds of dialogues is the classroom, precisely because the instrumental character of any given discussion is not directly fed back into institutional governance and because classrooms are semi-private and leave little visible trace to anyone who was not a direct participant. When we otherwise offer dialogue as an alternative to demands, we dramatically underimagine what it would take for dialogue to be a meaningful substitute, which is nothing short of redesigning the visibility of decisions and the flow of information in a way that no one is really ready for and perhaps that no one really wants.

College of Theseus
https://blogs.swarthmore.edu/burke/blog/2019/01/24/college-of-theseus/
Thu, 24 Jan 2019

Most of us know to be skeptical about the public statements of a person paid to defend a particular organization or corporation. For the same reason, we tend to look askance at a pundit or expert who will derive some particular financial benefit if people heed his or her advice–a biochemist testing a drug who owns shares in the company that will produce it, for example. There are often legal and ethical restrictions that apply in such cases.

You can’t so easily constrain a conventionalized narrative that mainstream reportage and experts collaboratively disseminate that just so happens to advance a strongly vested financial interest that is diffused across a particular business sector or range of organizations. Even if that story leaves out vitally important details, or is simply wrong in some crucial respect.

For example, almost every mainstream story I’ve read or heard about the financial struggles of Sears, Toys R Us, and other brick-and-mortar retailers leaves out the role of private equity, debt and cult-like management strategies employed by neophyte CEOs (often installed by private equity firms). The shorthand instead is always: couldn’t compete with Amazon. Which is a story that benefits Amazon and its shareholders: it is how Amazon survived years and years of continuous losses, because reporters and experts kept describing it as the inevitable future, kept using it as the singular causal explanation for every other event in retail.

Another example: autonomous cars. A ton of big players have a huge bet down on the table on autonomous cars, and virtually everyone writing about the issue is compliantly doing their best to make that bet pay off by describing autonomous cars as inevitable no matter what technical, political and economic challenges might remain in their implementation. Just saying something is inevitable doesn’t overcome fundamental material limitations: flying cars, jetpacks and moonbases were also once represented as inevitable in a near-term future, but all three turned out to be basically impossible within present circumstances. But in a sense the actual money knew that: no one but fringe visionaries put serious investment into those projects. With autonomous cars, there’s real money involved, and so every time an expert or a reporter casually and thoughtlessly treats them as a certainty, they are creating the certainty that they only claim to predict. If it turns out that you can’t simply unleash tens of thousands of perfectly working autonomous vehicles onto the current road network, it will be made to happen by changing the infrastructure. The autonomous car makers will buy out HOV lanes and put guides on them and get manually driven cars banned from them, in the name of safety or experimentation or innovation. Then they’ll argue that any accidents on non-guided roadways are actually human error, not autonomous car error, and push for eliminating manual drivers from all high-speed highways. Inch by inch it will happen–and “prediction” will have played its role.

The example that’s really got my goat this week, however, is the way that much of the press and a particular group of experts report on the closure or threatened closure of colleges and universities. Let’s take three examples that have been reported recently: Newbury College, Green Mountain College, and Hampshire College.

The reporting and prognostication tend to lump these closures together as a single phenomenon, stemming from a singular cause, interpreted within a conventionalized story. That’s usually something like, “College is too expensive, families are no longer certain of the value of traditional higher education, and this is just going to accelerate as we hit the edge of a demographic drop-off”. All of this is true enough in terms of pressures on the entire sector: college is expensive, its consumers are feeling doubtful about its value, and there’s a demographic drop-off coming. But it’s also a story that has a client behind it: various “disruptors” who have a huge bet down on the table that various kinds of for-profit online education will and must replace expensive, inefficient, “traditional” brick-and-mortar education. Those folks are getting impatient–or are starting to worry they’re going to lose their money. They’ve been moving fast but so far not that much has been broken. They’ve been angling to do the usual smash-and-grab theft of public goods but so far all they’ve been able to do is sneak a few bits of bric-a-brac into their pockets. So the story that all colleges are near to failing, about a kind of institutional singularity, is especially important for them to tell–and to urge others to tell for them.

The problem with that story is twofold. First, even if we’re talking about “all of American higher education”, this is not the first time that the entire sector has faced severe economic and sociopolitical pressures, and not the first time that these pressures have produced new institutional forms and marketing hooks–and waves of consolidation and failure. It’s not even the first time that people enamored of a new mass medium have specifically sought to use it to replace colleges and universities–it happened with television, it happened with radio, it happened with the postal service. And yet for the most part, the variety and richness of physical institutions of higher learning have remained intact in the United States through all those failures and consolidations and transformations. The current storyline forgets all of that. In that storyline, there is an unbroken clumpy mass of “traditional higher education” and then there is the disrupted, innovated future. Only occasionally does an expert or prognosticator go a bit deeper into the history before breaking out the shill for the brave new innovated future–Kevin Carey, for example, does a genuinely fair and responsible job of recounting how contemporary research universities in the US took on the shape they now have, and understands that this shape doesn’t extend all that far back.

Second, it’s at the individual level of institutional closures that the conventionalized narrative is just plain misleading or even false. Many of the places that have announced closures or crises recently have never been stable or successful institutions in the first place, or have always been outliers in certain respects.

Let’s take Newbury to start.

The United States is known, correctly, for a unique variety and quantity of institutions of higher education. That variety was primarily generated in the 19th century, between 1830 and 1890. Every institution created subsequently in the 20th century was to some extent building on this unique earlier history, trying to fit into the infrastructure created in that era. Still, there were at least two significant waves of later institutional creation: one in the 1920s that capitalized on the new centrality of higher education to the training of professionals and specialists, and one in the 1960s that responded both to a massive new investment in public education and to the demographic bulge known as the “Baby Boomer generation”.

A lot of those 1960s institutions have lived on the edge of failure for their entire existence. They were responding to a temporary surge in demand. They did not have the benefit of a century or more of alumni who would contribute donations, or an endowment built up over decades. They did not have names to conjure with. They were often founded (like many non-profits) by single strong personalities with a narrow vision or obsession that held only while that personality was still at the steering wheel. Newbury is a great example of this. It wasn’t founded until 1962, as a college of business, by a local Boston entrepreneur. It relocated multiple times, once into a vacated property formerly identified with a different university. It changed its name and focus multiple times. It acquired other educational institutions and merged them with its main operations, again creating some brand confusion. It started branch campuses. It has only been something like a standard liberal-arts institution since 1994. In 2015 it chased yet another trend via expensive construction projects, trying to promise students a new commitment to their economic success.

This is not a college going under suddenly and unexpectedly after a century of stately and “traditional” operations. This is not Coca-Cola suddenly going under because now everyone wants kombucha made by a Juicero. This is Cactus Cooler or Mr. Pibb being discontinued.

Let’s take Hampshire College. It’s a cool place. I’ve always admired it; I considered attending it when I was graduating from high school. But it’s also not a venerable traditional liberal arts college. It’s an experiment that grew out of an exceptionally 60s-era deliberative process shared among Amherst, Smith, Mount Holyoke and UMass Amherst. It’s always had to work hard to find students who responded to its very distinctive curricular design and identity, especially once the era that led to its founding began to lose some of its moral and political influence. You can, and should, think about Hampshire’s struggle to survive in relationship to that very particular history, rather than making it just a single data point on a generalized grid.

Let’s take Green Mountain College. “The latest to close”, as Inside Higher Ed says–again fitting into a trend as a single data point. At least this time it is actually old, right? Founded in 1834, part of that huge first wave of educational genesis. But hang on. It wasn’t Green Mountain College at the start. It was Troy Conference Academy. It was originally coed; then it changed its name to Ripley Female Academy and went single-sex. Then it went back to being Troy Conference Academy. Then during the Great Depression it was Green Mountain Junior College, a 2-year preparatory school. Only in 1974 did it become Green Mountain College, with a 4-year liberal arts degree, and only in the 1990s did it decide to emphasize environmental studies.

Is that the same institution, with a single continuous history? Or is it a kind of constellation of semi-related institutions, all of which basically ‘closed’ and were replaced by something completely different?

If you set out to create a list, by name, of all the colleges and universities that have ever existed in the United States–all the alternate names and curricular structures and admissions approaches of institutions that have sometimes persisted on the same site but often moved–you couldn’t help but see that closures are an utterly normal part of the story of American higher education. Moreover, they are often just a phase: a place closes, and another institution moves in, buys the name, or uses the facilities. Sure, sometimes a college or university or prep school or boarding school gets abandoned for good, becomes a ruin, is forgotten. That happens too. But we are not in the middle of a singular rupture, a thing which has never happened before, an unbroken tradition at last subject to disruption and innovation.

This doesn’t mean that we should be happy when a college or university closes. A closure is the livelihood of the people who work there; it’s the life of the students who are still there; it’s a broken tie for its alumni (however short or long its life has been); it’s the loss of all the interesting things that were done there in its time. But when you look at the story of any particular closure, it has its own important particulars. The story that flatters the disruptors and innovators would have us think that there are venerable, traditional, basically successful institutions going about their business and then suddenly, ZANG, the future lands on them and they can’t survive. At least some of the institutions now closing have been hustling or struggling or rebranding for their entire existence.

Save the Children
https://blogs.swarthmore.edu/burke/blog/2018/05/01/save-the-children/
Tue, 01 May 2018 14:46:11 +0000

Jonathan Haidt is consistently unimpressive.

Responding, in this Chronicle piece, to Jeffrey Adam Sachs’ great essay for the Niskanen Center, Haidt concedes that the speech-related episodes he and his pals get so agitated about are confined to a relative handful of highly selective institutions. The evidence for a significant shift in attitudes among college-attending students as a whole is thin and contested.

But Haidt says that since students at elite institutions are going to be the leaders of tomorrow, we should be disproportionately worried about how they think.

This is a classic kind of fallacious reasoning in populist social science that seeks to stoke up some form of middlebrow moral panic. I first became familiar with it while researching claims by social scientists during the 1970s about the effects of “violent” cartoons on children.

The argument runs like this: a lack of institutional vigilance is moving children or young people away from adults on some important social norm, and it’s up to the adults to control what children and young people see, say or do so that the norm will be protected. There’s an odd kind of philosophical incoherence somewhere in there–a softly illiberal vision of parenting and education invoked, in many cases, to defend adult liberalism as the social norm worth preserving–but leave that for the moment.

What’s more important in terms of social science is that this is a *prediction*: that if the external stimulus or bad practice is permitted, tomorrow’s adults will have a propensity to behave very differently in relationship to the norm being invoked. The anti-children’s television crusaders said: tomorrow’s kids will be more violent. Haidt is saying: tomorrow’s kids will have less respect for free speech.

There’s always a sleight of hand going on here, because usually this is being said against the backdrop of a *contemporary* crisis about the issue at hand. The television crusaders were responding to the violence of 1968-75: the Vietnam War, protests on campus, rising rates of violent crime. But the people involved in those forms of violence *didn’t watch cartoons on Saturday morning*. They were the previous generation. The people who are most threatening to free speech in the United States today are not 20-year-old Middlebury students: they’re the President of the United States and his administration, the Congress, the people in charge. People who grew up under the norms that Haidt and Brooks etc. are trying to defend.

So it turns out that past dispensations that were allegedly friendly to the norms being defended actually produced the most serious threat to them.

And of course, it usually turns out that the prediction is wrong as well. Violence has been more and more represented in mass media for children and adults since 1965; rates of violent crime have gone steadily down since the mid-1970s. You can always claim in a particular case that there’s a particular link–a mass shooter who turns out to have played Call of Duty or whatever–but that’s not how a general social scientistic prediction about a variable and a population works. If watching cartoons where bad guys got punched in the face made you more likely to be violent, that would predict more interpersonal violence overall in the future. It didn’t happen. The same thing here: if free speech norms are enduring and important, I guarantee you that a bunch of kids at Middlebury standing up and turning their backs on Charles Murray does not represent a future trend that will affect a generation. Frankly, anything Middlebury or Swarthmore students do will have negligible collective impact–they are not a good marker of generational typicality.

It might even be that actually testing out the propositions embedded in a belief in free speech, rather than dully worshipping them as received orthodoxy, produces a more meaningful lifelong relationship to them. What is certain is that Haidt and others are producing a nostalgic myth about where a commitment to free speech comes from.

The Kid With the Hammer
https://blogs.swarthmore.edu/burke/blog/2018/02/27/the-kid-with-the-hammer/
Tue, 27 Feb 2018 22:13:50 +0000

A certain kind of application of social science and social science methods continues to be a really basic limit on our shared ability, in modern societies, to grapple with and potentially resolve serious problems. For more than a century, a certain conception of policy, government and the public sphere has been determined to banish the need for interpretation, for difficult arguments about values, for attention to questions of meaning, in understanding and addressing anything imagined as a “social problem”. This banishment is performed in order to move a social scientistic mode of thinking into place: to use methods and tools that allow singular causes or variables to be weighed in relation to a named social problem and then solved in order of their causal magnitude.

Certainly sometimes that analysis is multivariable. It may even occasionally draw upon systems thinking and resist isolating individual variables as something to resolve individually. But what always gets left outside the circle are questions of meaning: questions that require interpretation, that require philosophical or value-driven understanding, that can’t be weighed or measured with precision. Which is why, in some sense, technocratic governance, whether in liberal societies or more authoritarian ones, feels so emotionally hollow, so unpersuasive to many people, so clumsy. It knocks down the variables as they are identified, often causing new problems that were not anticipated. But it doesn’t understand in any deeper way what it is trying to grapple with.

I’ve suggested in the past that this is an unappreciated aspect of military suicides since 2001: that the actual content of American wars, the specific experiences of American soldiers, might be different from those of other wars, other experiences, and that this difference in meaning, feeling and values might be a sufficient (and certainly necessary) explanation of those suicides. But that conversation never floats up to the level of official engagement with the problem, and not merely because to engage it requires an official acknowledgement of moral problems, problems in meaning and values, with the unending wars that began in 2001. It’s because even if military and political leaders were willing to consider it, they don’t have the tools. It’s not in the PowerPoints, in the graphs, in the charts. It’s in the hearts, the feelings, the things spoken and unspoken in the barracks and the bedrooms. It’s in the gap between the sermons and the town meetings on one hand and the memories of things done and said on the battlefield on the other. No one has to say anything for that gap to yawn wide for a veteran or a veteran’s family–it is there nevertheless.

Here’s another example: a report on “teen mental health deteriorating”. It’s a classic bit of social scientistic reasoning. First, show the evidence that there is something happening. That’s fine! It’s useful and true; you cannot use interpretation or philosophy to determine that truth. But then: sort the explanations, weigh the variables, identify the most significant culprit. It’s the smartphones! It’s social media!

Even this is plausible enough and not without its uses. But the smartphone here is treated as causal in and of itself, with some hand-waving at social psychology and cognitive science. Something about screen time and sociality, about what we’re evolved to do and about what we do when our evolution drives us towards too much of something. What’s left out is the hermeneutics of social media, the meaning of what we say on it and in it. Because that’s too hard to understand, to package and graph, to proscribe and make policy about.

And yet, I think that’s a big part of what’s going on. The problem is not that we can say things to each other, to so many others, so easily and so constantly. It is the content and meaning of what we say. Consider the structures of feeling that follow from reading a stranger with no standing in your own life pronouncing authoritatively, in the genre of a social-justice-oriented “explainer”, that you are commanded to do something or feel something, compared with a person of great standing in your own life providing delicately threaded advice about a recent experience you’ve had. Those are hugely divergent emotional and social experiences; they produce different loops and architectures of sentiment. Reading people who hate you, threaten you, express a false intimacy with you, who decide to amplify or redirect something you’ve said? Those experiences have an impact on a reader (and on the capacity to speak) that rests on how their content (and authors) have meaning to the reader, often in minutely divergent and rapidly shifting ways.

We blunder not in our diagnosis of a problem (teen mental health is more fragile) or even in roughly understanding an important cause. We blunder in our proposed solution: take away the smartphones! (Or restrict their use.) Because that shows how little we understand of what exactly is making people feel that their online sociality is a source of vulnerability and fragility and yet precious and important all the same. It’s not the device; it’s the content. Or, in a more familiar formulation, not the medium but the message. Engaging with that requires semantic understanding, literary interpretation, history and ethnography. And perhaps changing it does too–but that also takes a different set of instruments for coordinating shared or collective action than the conventional apparatus of government and policy.
