Defining “Liberal Arts” – Easily Distracted
Culture, Politics, Academia and Other Shiny Objects
https://blogs.swarthmore.edu/burke

Enrollment Management: The Stoic’s Version
https://blogs.swarthmore.edu/burke/blog/2016/09/15/enrollment-management-the-stoics-version/
Thu, 15 Sep 2016 17:24:18 +0000

I have had a few interesting conversations with colleagues online about recent news of falling enrollments in college history courses nationwide, conversations which broadly echo similar discussions among faculty in other disciplines about the same phenomenon in their classes.

Speaking generally, two things tend to strike me about these recurrent discussions. The first is that many faculty make extremely confident assertions about the underlying causes of shifting enrollments that are (at best) based on intuitions, and moreover, these causal theories tend to be bleakly monocausal: many faculty fixate on a single factor that they believe is principally responsible for the decline and dig in hard.

The second is that the vast majority of these causal assertions are focused on something well beyond the power of individual history professors or even departments of history (or associations of historians!) to remedy.

Just to review some of the theories I’ve encountered over the last two years of discussion, including recently:

a) It’s a result of parental and social pressure for utility and direct application to viable careers.
b) It’s a result of admitting too many students who are interested in STEM disciplines. (Which sometimes just relocates the agency of point a.)
c) It’s a result of badly designed general education requirements that give students too much latitude and don’t compel them to take more history or humanities.
d) It’s a result of too many AP classes in high school, which give students the idea that they’ve done all the history they might need.
e) It’s a result of bad or malicious advising by colleagues in other departments or in administration who are telling students to take other subjects.

At best, if these are offered as explanations which are meant to catalyze direct opposition to this hypothesized cause, they lead professors far away from their own courses, their own pedagogy, their own department, their own scholarship, all of which are vastly easier to directly affect and change. At worst, these are forms of resignation and helplessness, of not going gentle into that good night.

It might not be completely useless to engage in public argument about why history actually is useful in professional life or in the everyday lives of citizens. Or to argue against the notion that we should measure subjects in higher education according to their immediate vocational payoffs. All faculty at liberal-arts institutions should be contributing to making that kind of case to the widest possible publics. However, arguments in the general public sphere along these lines are less immediately productive in engaging enrollments than similar arguments made to actual students already matriculating at the home institutions of historians. Those students are knowable and are available for immediate consultation and dialogue. What they think about history or other humanities may not be what a far more abstract public thinks. They may be seeking very particular kinds of imagined utility which a historian could offer, or simply need some ideas about how to narrate the application of historical inquiry to other spheres and activities.

Complaining about requirements, about advising, or about AP classes is similarly distracting. Changing general-education requirements is a particularly dangerous answer to an enrollment problem for a variety of reasons. Compelling students to take a course they not only do not want to take but actively oppose taking is very likely to contribute to even greater alienation from the subject matter and the discipline overall, unless the subject matter and the pedagogy are of such overwhelming value that they singlehandedly reverse the initial negative perception. Moreover, there’s a game-theoretic problem with using requirements as an instrumental answer to enrollment shifts: in a faculty organized around departments, this leads to every department with declining enrollments demanding new requirements specifically tailored to enrollment capture, which in turn forces departments that are the beneficiaries of stronger enrollment trends to weaponize their own participation in curricular governance and defend against a structure of requirements that takes students away from them. Like it or not–and I think we ought to like it–student agency is an important part of most of higher education, and indispensable in liberal-arts curricula especially. The only coherent alternatives to a curriculum predicated on student choice are an intellectually coherent and philosophically particular approach like that of St. John’s College or a core curriculum that is not departmentally based but is instead designed and taught outside of a departmental framework. Asking for new requirements is a way to avoid self-examination.

That’s generally the problem I have with these kinds of explanations. They take us away from what we can meaningfully implement through our own labor, and they also allow us to defer introspection and self-examination. If current students find the traditional sequencing of many college history majors uncompelling, whether because they have taken AP courses or because the typical geographic and temporal structures don’t strike them as useful, there is nothing about that sequence which is sacred or necessary. History is not chemistry: one does not have to learn to use Avogadro’s number and basic laboratory techniques in order to progress further in the subject. Maybe thematic courses taught across broad ranges of time and space are more appealing. Maybe courses that connect understanding history to contemporary life or issues in explicit ways are more appealing. Maybe courses that emphasize research methods and digital technologies are more appealing. Maybe none of the above. But those should be the only things that historians in higher education are concerned with when they worry about enrollments: what are we doing that’s not working for our actually-existing students? Could we or should we do other things? If we refuse to do other things because we believe that what we have been doing is necessary, what is it that we have been doing that’s necessary, and why is it important to defend regardless?

Historians should be (but generally aren’t) especially good at thinking in this way because of our own methodological know-how and epistemological leanings. If it turns out that what we are inclined to treat as natural and necessary in our current curricular structures and offerings is in fact mutable and contingent simply by comparison with past historical curricula, then when is it exactly that we became convinced of the necessity of those practices? And what was the cause of our certainty? If it turns out that what we defend as principle is in fact just a defense of the immediate self-interest of presently-laboring historians, then our discipline should itself help us gain some necessary distance and perspective about our interests.

Especially if it turns out that our perception of our interests is in fact harming our actual self-interest in remaining a viable part of a liberal-arts education. Perhaps the first, best way historians could demonstrate the usefulness of our modes of inquiry is by using them to understand our present circumstances better and imagine our possible futures more clearly. Even if we want to insist that lower enrollments should not by themselves resolve questions about the allocation of resources within academia (a position I agree with), we might find that there are new ways to articulate and explain that view which are more persuasive in the present rather than simply invoked as an invented tradition.

Inchworm
https://blogs.swarthmore.edu/burke/blog/2015/10/02/inchworm/
Fri, 02 Oct 2015 22:02:32 +0000

Over the last decade, I’ve found my institutional work as a faculty member squeezed into a kind of pressure gradient. On one side, our administration has been requesting or requiring more and more data, reporting and procedures that are either needed to document some form of adherence to the standards of external institutions or that are wanted in order to further professionalize and standardize our operations. On the other side, I have colleagues who either ignore such requests (both specific ones and the entire issue of administrative process) to the maximum extent possible or who reject them entirely on grounds that I find either ill-informed or breathtakingly sweeping.

That pressurized space forms from wanting to be helpful but also wanting to actually take governance seriously. I think stewardship doesn’t conform well to a hierarchical structure, but it also should come with some sense of responsibility to the reality of institutions and their relationship to the wider world. The strongest critics of administrative power that I see among faculty, both here at Swarthmore and in the wider world of public discourse by academics, don’t seem very discriminating in how they pick apart and engage various dictates or initiatives, and more importantly, rarely seem to have a self-critical perspective on faculty life and faculty practices. At the same time, there’s a lot going on in academia that comes to faculty through administrative structures and projects, and quite a lot of that activity is ill-advised or troubling in its potential consequences.

A good example of this confined space perennially forms for me around assessment, which I’ve written about before. Sympathy for my colleagues charged with administrative responsibilities around assessment means I should take what they ask me to produce seriously, both because there are consequences for the institution if faculty fail to do it in the specified manner and because I value them and even value the concepts embedded in assessment.

On the most basic human level, I agree that the unexamined life is not worth living. I agree that professional practices which are not subject to constant examination and re-evaluation have a tendency to drift towards sloppiness and smug self-regard. I acknowledge that given the high costs of a college education, potential students and their families are entitled to the best information we can provide about what our standards are and how we achieve them. I think our various publics are entitled to similar information. It’s not good enough to say, “Trust us, we’re great”. That’s not even healthy if we’re just talking to ourselves.

So yes, we need something that might as well be called “assessment”. There is some reason to think that faculty (or any other group of professionals) cannot necessarily be trusted to engage in that kind of self-examination without some form of institutional support and attention to doing so. And what we need is not just introspective but also expressive: we have to be able to share it, show it, talk about it.

On the other hand, throughout my career, I’ve noticed that a lot of faculty do that kind of reflection and adjustment without being monitored, measured, poked or prodded. Professionalization is a powerful psychological and intellectual force through the life cycle of anyone who has passed through it, for good and ill. The most powerfully useful forms of professional assessment or evaluation that I can think of are naturally embedded in the workflow of professional life. Atul Gawande’s checklists were a great idea because they could be inserted into existing processes of preparation and procedure, and because they are compatible with the existing values of professionals. A surgeon might grouse at the implication that they needed to be reminded about which leg to cut off in an amputation, but that same surgeon would agree that it’s absolutely essential to get that right.

So assessment that exists outside of what faculty already do anyway to evaluate student learning during a course (and between courses) often feels superfluous, like busywork. It’s worse than that, however. Not only do many assessment regimes add procedures like baroque adornments and barnacles, they attach to the wrong objects and measure the wrong things. The amazing thing about Gawande’s checklists is that they spread because of evidence of their very large effect size. But the proponents of strong assessment regimes, whether agencies like Middle States or Arne Duncan’s troubled bureaucratic regime at the U.S. Department of Education, habitually ignore evidence about assessment that suggests it is mostly measuring the wrong things at the wrong time in the wrong ways.

The evidence suggests, especially for liberal arts curricula, that you don’t measure learning course by course and you don’t measure it ten minutes after the end of each semester’s work. Instead you ought to be measuring it over the range of a student’s time at a college or university, and measuring it well afterwards. You ought to be measuring it by the totality of the guidance and teaching a faculty member provides to individual students, and by moments as granular as a single class assignment. And you shouldn’t be chunking learning down into a series of discrete outcomes that are chosen largely because they’re the most measurable; you should instead be capturing it through the assemblage of a series of complex narratives and reflections, through conversations and commentaries.

In a given semester, what assessment am I doing whether I am asked to do it or not? In any given semester, I’m always trying some new ways to teach a familiar subject, and I’m always trying to teach some new subjects in some familiar ways. I am asking myself in the moment of teaching, in the hours after it, at the end of a semester and at the beginning of the next: did that work? What did I hope would work about it? What are the signs of its working: in the faces of students, in the things they say then and there in the class, in the writing and assignments they do afterwards, in the things they say during office hours, in the evaluations they provide me. What are the signs of success or failure? I adjust sometimes in the moment: I see something bombing. I see it succeeding! I hold tight in the moment: I don’t know yet. I hold tight in the months that follow: I don’t know yet. I look for new signs. I try it again in another class. I try something else. I talk with other faculty. I write about it on my blog. I read what other academics say in online discussion. I read scholarship on pedagogy.

I assess, I assess, I assess, in all those moments. I improve, I think. But also I evolve, which is sometimes neither improvement nor decline, simply change. I change as my students change, as my world changes, as my colleagues change. I improvise as the music changes. I assess.

Why is that not enough for the agencies, for the federal bureaucrats, for the skeptical world? There are two reasons. The first is that we have learned not to trust the humanity of professionals when they assure us, “Don’t worry, I’m on it.” For good reasons sometimes. Because professionals say that right up to the moment that their manifest unprofessionalism is laid screamingly bare in some awful rupture or failure. But also because we are in a great war between knowing that most of the time people have what my colleagues Barry Schwartz and Ken Sharpe call “practical wisdom” and knowing that some of the time they also have an innocent kind of cognitive blindness about their work and life. Without any intent to deceive, I can nevertheless think confidently that all is well, that I am teaching just as I should, that I am always above average and getting better all the time, and be quite wrong. I might not know that I’m not seeing or serving some group of students as they deserve. I might not know that a technique that I think delivers great education only appears to because I design tests or assignments that evaluate only whether students do what I want them to do, not whether they’ve learned or become more generally capable. I might not know that my subject doesn’t make any sense any longer to most students. Any number of things.

So that’s the part that I’ll concede to the assessors: it’s not enough for me to be thoughtful, to be practically wise, to work hard to sharpen my professionalism. We need something outside ourselves: an observer, a coach, a reader, an archive, a checklist.

I will not concede, however, that their total lack of interest in this vital but unmeasurable, unnumbered information is acceptable. This should be the first thing they want: our stories, our experiences, our aspirations, our conversation. A transcript of the lived experience of teaching. Which brings me to the second reason the assessors think that what we think about our teaching is not wanted or needed. They don’t want that because they believe that all rhetoric is a lie, all stories are told only to conceal, all narrative is a disguise. They think that the work of interpretation is the work of making smoke from fog, of making lies from untruths. The reason they think that is that stories belong at least somewhat to the teller, because narratives inscribe the authority of the author. They don’t want to know how I assess the act of teaching as I perform it because they want a product, not a process. They want data that belongs to them, not information that creates a relationship between the interpreter and the interpreted. They want to scrub evidence clean, to make an antiseptic knowledge. They want bricks and mortar, and to be left alone to build as they will with them.

——————

I get tired of the overly casual use of “neoliberal” as a descriptive epithet. Here, however, I will use it. This is what neoliberalism does to rework institutions and societies into its preferred environment. This is neoliberalism’s enclosure, its fencing off of commons, its redrawing of the lines. The first thing that gets done with data that has had its narrative and experiential contaminants scrubbed clean is that the data is fed back into the experience of the laborers who first produced it. This was done even before we lived in an algorithmically-mediated world, and has only intensified since.

The data is fed back in to tell us what our procedures actually are, what our standards have always been. (Among those procedures will always be the production of the next generation of antiseptic data for future feedback loops.) It becomes the whip hand: next year you must be .05% better at the following objectives. If you have objectives not in the data, they must be abandoned. If you have indeterminacies in what you think “better” is, that’s inadmissible: rarely is this looping even subject to something like a Bayesian fuzziness. This is not some exaggerated dystopic nightmare at the end of an alarmist slippery slope: what I’m describing already happened to higher education in the United Kingdom, largely accomplishing nothing besides sustaining a class of transfer-seeking technocratic parasites who have settled into the veins of British universities.

It’s not just faculty who end up caught in the loop, and like frogs boiling slowly to death, we often don’t see it happening as it happens. We just did our annual fire drill here in my building, and this year the count of evacuees seemed more precise and drawn-out than last year, and this year we had a mini-lecture about the different scenarios and locations for emergency assembly, and it occurred to me: this is so we can report that we did .05% better than last year.

We always have to improve just a little, just as everything has to be “growth-based”, a little bigger next year than last year. It’s never good enough to maintain ground, to defend a center, to sustain a tradition, to keep a body healthy, happy and well. Nor is it ever good enough to be different next year. Not a bit bigger, not a bit better, but different. New. Strange. We are neither to be new nor are we to maintain. We are to incrementally approach a preset vision of a slightly better but never perfect world. We are never to change or become different, only to be disrupted. Never to commune or collaborate, always to be architected and built.

———————

So here I am in the gradient again, bowed down by the push on all sides. I find it so hard when I talk to faculty and they believe that their teaching is already wholly and infinitely sufficient. Or that it’s nobody’s business but their own how they teach, what they teach, and what comes of their teaching. Or that the results of their teaching are so sublime, ineffable and phenomenologically intricate that they can say nothing of outcomes or consequences. All these things get said, at Swarthmore and in the wider world of academia. An unexamined life.

Surely we can examine and share, express and create. Surely we can provide evidence and intent. Assess and be assessed in those ways. Surely we don’t have to bury that underneath fathoms of tacit knowledge and inexpressible wisdom. We can have our checklists, our artifacts.

But surely too we can expect from administrations that want to be partners that we will not cooperate in building the Great Machine out of the bones of our humane work. That we’re not interested in being .05% better next year, but instead in wild improvisations and foundational maintenance, in becoming strange to ourselves and familiar once again, in a month, a moment or a lifetime. Surely that’s what it means to educate and become educated in an uncertain world: not .05% more measured comprehension of the impact of the Atlantic slave trade on Sao Tome, but thinking about how a semester of historical study of the Atlantic slave trade might help a poet forty years hence to write poems, might sharpen an analytic mind, might complicate what was simple or simplify what was complex. Might inform a diplomat ten years from now, might shape a conservative’s certainty that liberals have no answers when he votes in next year’s Presidential race. Might inspire a semester abroad, might be an analogy for an experience already had. I can talk about what I do to build ramps to all those possibilities and even to the unknown unknowns in a classroom. I can talk about how I think it’s working and why I think it’s working. But don’t do anything that will lead to me or my successors having to forgo all of that thought in favor of .05% improvements onward into the dreary night of an incremental future.

The Ground Beneath Our Feet
https://blogs.swarthmore.edu/burke/blog/2015/05/13/the-ground-beneath-our-feet/
Wed, 13 May 2015 15:38:33 +0000

I was a part of an interesting conversation about assessment this week. I left the discussion thinking that we had in fact become more systematically self-examining in the last decade in a good way. If accrediting agencies want to take some credit for that shift, then let them. Complacency is indeed a danger, and all the more so when you have a lot of other reasons to feel confident or successful.

I did keep mulling over one theme in the discussion. A colleague argued that we “have been, are and ought to be” committed to teaching a kind of standardized mode of analytic writing and that therefore we have a reason to rigorously measure across the board whether our students are meeting that goal. Other forms of expression or modes of writing, he argued, might be gaining currency in the world, but they shouldn’t perturb our own commitment to a more traditional approach.

I suppose I’m just as committed to teaching that kind of writing as my colleague, for the same reasons: it has a lot of continuing utility in a wide variety of contexts and situations, and it reinforces other less tangible habits of thought and reflection.

And yet, I found myself unsettled on further reflection about one key point: that it was safe to assume that we “are and ought to be” committed. It seems to me that there is a danger to treating learning goals as settled when they’re not settled, just as there is a danger to treating any given mix of disciplines, departments and specializations at a college or university as something whose general stability is and ought to be assured. Even if it is probable that such commitments will not change, we should always act as if they might change at any moment, as if we have to renew the case for them every morning. Not just for others, but for ourselves.

Here’s why:

1) Even if a goal like “teaching standard analytic writing” is absolutely a bedrock consensus value among faculty and administration, the existence of that consensus might not be known to the next generation of incoming students, and the definition of a familiar practice for faculty might be unfamiliar to those students. When we treat some feature of an academic environment as settled or established, there almost doesn’t seem to be any reason to make it explicit, or to define its specifics, and so if students don’t know it, they’ll be continuously baffled by being held accountable to it. This is one of the ways that cultural capital acts to reproduce social status (or to exclude some from its reproduction): when a value that ought to be disembedded from its environment and described and justified is instead treated as an axiom.

2) Even if something like “teaching analytic writing” is absolutely a bedrock consensus value among faculty, if some in a new generation of students consciously dissent from that priority and believe there is some other learning goal or mode of expression which is preferable to it, then faculty will never learn to persuade those students, and will have to rely on a brute-force model to compel students to comply. Sometimes that works in the same way that pulling a child away from a hot stove works: it kicks the can down the road to that moment when those students will recognize for themselves the wisdom of the requirement. But sometimes that strategy puts the goal itself at risk by exposing the degree to which faculty themselves no longer have a deeply felt or well-developed understanding of the value of the requirement they are forcing on their students.

3) Which leads to another point: what if the previously assumed consensus value is not a bedrock consensus value even among faculty? If you assume it is, rather than treat the requirement as something that needs constantly renewed investigation, you’ll never really know if an assumed consensus is eroding. Junior and contingent faculty may say they believe in it, but really don’t, which contributes to a moral crisis in the profession, where the power of seniority is used to demand what ought to be earned. Maybe some faculty will say they believe in a particular requirement but actually don’t do it well themselves. That’s corrosive too. Maybe some faculty say they believe in it but what they think “it” is is not what other people think it is. You’ll never know if the requirement or value isn’t always being revisited.

4) Maybe there is genuine value-based disagreement or discord within the faculty that needs to be heard, and the assumption of stability is just riding roughshod over that disagreement. That’s a recipe for a serious schism at some point, perhaps at precisely the wrong moment for everyone on all sides of that kind of debate.

5) Maybe the requirement or value is a bedrock consensus value among faculty but it absolutely shouldn’t be–i.e., the argument about that requirement is between the world as a whole and the local consensus within the academy. Maybe everything we think about the value we uphold is false, based on self-referring or self-validating criteria. At the very least, one should defy the world knowingly, if one wants to defy the world effectively.

I know it seems scary to encourage this kind of sense of contingency in everything we do in a time when there are many interests in the world that wish us ill. But this is the part of assessment that makes the most sense to me: not measuring whether what we do is working as intended (though that matters, too) but asking every day in a fresh way whether we’re sure of what we intend.

Practice What We Preach?
https://blogs.swarthmore.edu/burke/blog/2015/03/12/practice-what-we-preach/
Thu, 12 Mar 2015 20:04:06 +0000

I’ve been reworking an essay on the concept of “liberal arts” this week. One of the major issues I’m trying to think about is the relatively weak match between what many liberal arts faculty frequently say about the lifelong advantages of the liberal arts and our ability to model those advantages ourselves. In quite a few ways, it seems to me that many academics do not demonstrate in their own practices and behavior the virtues and abilities that we claim follow on a well-constructed liberal arts education. That is not necessarily a sign that those virtues and abilities do not exist. One of the oldest known oddities surrounding teaching is that a teacher can guide a student to achievements that the teacher cannot himself or herself achieve. Good musicians can train great musicians, decent artists can train masterful ones, and so on. Nevertheless, it feels uncomfortable that we commonly defend liberal arts learning as producing competencies and capacities that we do not ourselves exhibit or even in some cases seem to value. The decent musician who is training a virtuoso performer nevertheless would like to play as well as their pupil if they only could, and tries to do so when possible.

Let me give four examples of capacities or skills that I have seen many faculty at many institutions extol as good outcomes of a liberal arts education.

First, perhaps most commonly, we often claim that a liberal arts graduate will be intellectually adaptable, will be ready to face new challenges and new situations by learning new subjects, approaches and methods on an as-needed or wanted basis.

Second, many of us would argue that a well-trained writer, speaker and thinker should be able to proficiently and persuasively argue multiple sides of the same issue.

Third, faculty often claim that a liberal arts graduate will be able to put their own expertise and interests in wider perspective, to see context, to step outside of the immediate situation.

Fourth, many liberal-arts curricula require that students be systematically engaged in pursuing breadth of knowledge as well as depth, via distribution requirements or other general-education structures.

So, do most faculty in most colleges and universities model those four capacities in their own work and lives? My impressionistic answer would be, “Not nearly enough”.

Are we adaptable, do we regularly tackle new subjects or approaches, respond well to changing circumstances? Within narrowly circumscribed disciplinary environments, yes. Most active scientific researchers have to deal with a constantly changing field, most scholars will tackle a very new kind of problem or a new setting at some point in their intellectual lives. However, many of us insist that learning new subjects, approaches and methods is an unforgiving, major endeavor that requires extensive time and financial support to work outside of the ordinary processes of our professional lives. That’s not the kind of adaptability we promise our graduates. We’re telling them that they’ll be better prepared to cope with wrenching changes in the world, with old lines of work disappearing and new ones appearing, with seeing fundamentally new opportunities and accepting new ways of being in community with others. And I really believe that this is a fair promise, but perhaps only because the major alternative so far has been narrowly vocational, narrowly pre-professional, training, which very clearly doesn’t prepare students for change at all. We win out by default. If students and parents increasingly doubt our promise, it might be in some measure because we ourselves exemplify it so poorly. Tenured faculty at research universities keep training graduate students the same way for professorial work even as the market for academic labor is gutted, for example, and largely leave those students to find out for themselves what the situation is really like.

Most of us show little or no aptitude for or zest for arguing multiple sides of an issue in our own advocacy within our communities, and only a bit more so in our work as scholars. Ad arguendo is a dirty phrase in most of the social media streams I read: I find that it is rarer and rarer to see academics experimenting with multiple branches of the same foundational line of thought, or exploring multiple foundations, for either the sheer pleasure of it or for the strengthening of their own most heartfelt case. Indeed, I see especially among some humanists a kind of anti-intellectual exasperation with such activity, as something one does reluctantly to manage social networks and maintain affective ties rather than as a demonstration of a deeply important capacity. The same goes for putting ourselves in some kind of larger perspective, of understanding our concerns as neither transcendently important nor as woefully trivial. We promise to show our students how to make connections, see their place in the world, to choose meaningfully, and then do little to strengthen our own capacities for the same.

Do we have our own “distribution requirements”? At the vast majority of academic institutions, not at all. Is there any reward at all for learning about other fields, for learning to understand the virtues and uses of disciplines other than one’s own, for generalism? Any imperative to do so? No, and in fact, many faculty will tell you that this isn’t possible given the intensive demands on their time and attention within their own fields of study and their own teaching labor. But if it’s not possible for us, how is it possible for our students? Most liberal-arts faculty teach in institutions that maintain as one of their central structural principles that it is readily possible for a student to move from advanced mathematics to advanced history to studio art to the sociology of elementary education in a single week and to do well in all of those subjects. If we think that is only possible for one brief pupating moment until a final irreversible choice is made, we ought to say so, and thus indemnify ourselves against the demands we make of our students. That would sit uncomfortably alongside all the grand claims we make about learning how to think, about the idea that a major isn’t a final choice, that you can do lots of things with a liberal arts education, however.

———

Liberal arts faculty have got to much more effusively and systematically demonstrate in our own lives and practices what we say are the virtues of a liberal arts education. Or we have to offer a trickier narrative about those virtues, one that explains how it is that we can teach what we cannot ourselves do. Which might also raise another question: are we actually the best people to be doing that teaching?

Wary About Wisdom
https://blogs.swarthmore.edu/burke/blog/2015/02/24/wary-about-wisdom/
Tue, 24 Feb 2015 19:07:55 +0000

Cathy Davidson has been steadily working away at the problem of inequality within higher education and at how higher education contributes to inequality.

I admire the intensity of her focus and her willingness to consider radical rethinking of institutions of higher learning. However, I think she’s up against a much harder problem than even she credits in her latest arguments for the liberal arts as “a start-up curriculum for resilient, responsible, ethical, committed global citizens.”

Davidson has argued for a long time, in concert with many other reformers in education, for abandoning the industrial infrastructure of modern educational institutions–the idea of taking standard inputs (matriculating students) and producing standard outputs (graduates) through a series of industrially-organized allocations of time and labor. Put students in a room at a set time, do a standardized type of work or dump a standard unit of information, send them away at a set time, test and measure, do quality assessment (aka grading), throw away the substandard. Repeat.

Instead, she often counters, we should be contributing to human flourishing. Education should happen for every student seeking it, at that student’s own time and pace. For one person, competency and mastery might bloom in an hour, for another in a week, another in a month: let the institution match its pace to that. Don’t chop up knowledge into manageable reductions, skills into atomized pieces. Don’t suppress what students are really thinking through because there isn’t time to listen, because the assembly line must continue to move along. Don’t turn degrees into Skinner boxes. And so on.

It’s a familiar critique, and I endorse much of it. In part because I can imagine the classrooms and institutions that would follow from these critiques. To me, much of what Davidson asks for can be done, and if done will show a greater and more effective fidelity to what many educators (and the wider society) already regard as the purposes of education, whether that’s the cultivation of humanity or teaching how to add. I have no trouble, in other words, arguing for the wholly conventional value of a substantially reimagined academy in these terms.

However, in any educational project that emphasizes the cultivation of humanity, at least, there is a difficult moment lying in wait. It’s fairly easy to demonstrate that specialized knowledge or skills are not present in people who have not received relevant training or education. When we talk about wisdom or ethics, however, I think it’s equally easy to demonstrate that people who have had no educational experiences at all, or education that did not emphasize wisdom and ethics, nevertheless possess great wisdom or ethical insight.

Arguably, our current educational systems at the very least are neutral in their production of wisdom, ethical insight, emotional intelligence and common sense. (Unless you mean that last in the Gramscian sense.) Davidson might well say at this point, “Exactly! Which is why we need a change.”

I can see what a learner-driven classroom looks like, or how we might rethink failure and assessment. I don’t know that I can see what an education that produces ethics and wisdom looks like such that I would be confident that it would produce people who were consistently more wise and more ethical than anyone without that education would be.

What I unfortunately can see is that setting out to make someone ethical or wise through directed learning might actually be counterproductive. Because to do so requires a prior notion of what an ethical, wise outcome looks like and thus creates the almost unavoidable temptation to demand a performative loyalty to that outcome rather than an inner, intersubjective incorporation of it.

If we thought instead about ethics and wisdom as rising out of experience and time, then that might attractively lead back towards the general reform of education towards projects, towards making and doing. However, if that’s yet another argument for some form of constructivist learning, then beware fixed goals. A classroom built around processes and experiences is a classroom that has to accept dramatically contingent outcomes. If we embrace Davidson’s new definition of the liberal arts, paradoxically, we have to embrace that one of its outcomes might be citizens whose ethics and wisdom are nothing like what we imagined those words contained before we began our teaching. We might also find it’s one thing to live up to an expectation of knowledgeability and another altogether to live up to an expectation of wisdom.

The Listicle as Course Design
https://blogs.swarthmore.edu/burke/blog/2014/08/11/the-listicle-as-course-design/
Mon, 11 Aug 2014 18:51:58 +0000

I’ve been convinced for a while that one of the best defenses of small classes and face-to-face pedagogy within a liberal arts education would be to make the process of that kind of teaching and coursework more visible to anyone who would like to witness it.

Lots of faculty have experimented with publishing or circulating the work produced by class members, and many have also shared syllabi, notes and other material prepared by the professor. Offering the same kind of detailed look at the day-to-day teaching of a course isn’t very common and that’s because it’s very hard to do. You can’t just videotape each class session: being filmed would have a negative impact on most students in a small 8-15 person course, and video doesn’t offer a good feel for being there anyway. It’s not a compressed experience and so it doesn’t translate well to a compressed medium.

I have been trying to think about ways to leverage participation by crowds to enliven or enrich the classroom experience of a small group of students meeting face-to-face and thus also give observers a stake in the week-by-week work of the course that goes beyond the passive consumption of final products or syllabi.

In that spirit, here’s an idea I’m messing around with for a future course. Basically, it’s the unholy combination of a Buzzfeed listicle and the hard, sustained work of a semester-long course. The goal here would be to smoothly intertwine an outside “audience” and an inside group of students and have each inform the other. Outsiders still wouldn’t be watching the actual discussions voyeuristically, but I imagine that they might well take a week-to-week interest in what the class members decided and in the rationale laid out in their notes.

——————–

History 90: The Best Works of History

Students in this course will be working together over the course of the semester to critically appraise and select the best written and filmed works that analyze, represent or recount the past. This will take place within a bracket tournament structure of the kind best known for its use in the NCAA’s “March Madness”.

The initial seeding and selection of the works to be read by class members will be open to public observers as well as enrolled members of the class. The professor will use polls and other means to allow outside participants to help shape the brackets. One side of the bracket will be works by scholars employed by academic institutions; the other side will be works by independent scholars, writers, and film-makers who do not work in academia.
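
For readers who want to picture the mechanics of the bracket, here is a minimal sketch, in Python, of how a two-sided, seeded field of this kind might be paired for a first round. Nothing in the course description prescribes any of this: the work titles, poll scores, and pairing rule below are hypothetical placeholders, and the actual seeding would come out of the public polls and class discussion.

    def seed_side(works):
        # Order one side of the bracket by poll score, strongest seed first.
        return [title for title, score in sorted(works.items(), key=lambda kv: -kv[1])]

    def first_round_pairs(seeds):
        # Conventional seeding: best remaining seed meets worst remaining seed.
        n = len(seeds)
        return [(seeds[i], seeds[n - 1 - i]) for i in range(n // 2)]

    # Hypothetical poll scores; the real ones would come from the public polls.
    academic = {"Academic work A": 91, "Academic work B": 78,
                "Academic work C": 85, "Academic work D": 64}
    independent = {"Independent film W": 88, "Independent book X": 72,
                   "Independent book Y": 95, "Independent film Z": 59}

    for side_name, side in (("Academic side", academic), ("Independent side", independent)):
        print(side_name + ", first round:")
        for favorite, underdog in first_round_pairs(seed_side(side)):
            print("  " + favorite + " vs. " + underdog)

Advancing through later rounds would simply replace each pair with whichever work the relevant group votes forward that week.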

The first four weeks of the class will be spent reading and discussing the nature of excellence in historical research and representation: not just what “the historian’s craft” entails, but even whether it is possible or wise to build hierarchies that rely on concepts of quality or distinctiveness. Class members will decide through discussion what they think are some of the attributes of excellent analytic or representational work focused on the past. Are histories best when they mobilize struggles in the present, when they reveal the construction of structures that still shape injustice or inequality? When they document forms of progress or achievement? When they teach lessons about common or universal challenges to human life? When they amuse, enlighten or surprise? When they are creatively and rhetorically distinctive? When they are thoroughly and exhaustively researched?

At the end of this introductory period, students will craft a statement that explains the class’ shared criteria, and this statement will be published to a course weblog, where observers can comment on it. Students will then be divided into two groups for each side of the bracket. Each group will read or view several works each week on their side of the overall bracket. During class time, the two groups will meet to discuss their views about which work in each small bracket should go forward in the competition and why, taking notes which will eventually be published in some form to the course weblog. Students will also have to write a number of position papers that critically appraise one of the books or films in the coming week and that examine some of the historiography or critical literature surrounding that work.

The final class meeting will bring the two groups together as they attempt to decide which work should win the overall title. In preparation, all students will write an essay discussing the relationship between scholarly history written within the academy and the production of historical knowledge and representation outside of it.

Playing the Odds
https://blogs.swarthmore.edu/burke/blog/2014/07/25/playing-the-odds/
Fri, 25 Jul 2014 18:56:51 +0000

The idea that higher education makes you a better person in some respect has long been its soft underbelly.

The proposition makes most current faculty and administrators uncomfortable, especially at the smaller teaching-centered colleges that are prone to invoke tropes about community and ethics. The discomfort comes both from how “improvement” necessarily invokes an older conception of college as a finishing school for a small, genteel elite and from how genuinely indispensable it seems for most definitions of “liberal arts”.

Almost every attempt to create breathing room between the narrow teaching of career-ready skills and a defense of liberal arts education that rejects that approach is going to involve some claim that a liberal arts education enlightens and enhances the people who undergo it in ways that aren’t reducible to work or specific skills, that an education should, in Martha Nussbaum’s words, “cultivate humanity”.

This is part of the ground being worked by William Deresiewicz’s New Republic critique of the elitism of American higher education. One of the best rejoinders to Deresiewicz is Chad Wellmon’s essay “Twilight of an Idol”, which conjoins Deresiewicz with a host of similar critics like Andrew Delbanco and Mark Edmundson.

I see much the same issue that Wellmon does, that most of these critiques are focused on what the non-vocational, non-instrumental character of a college education was, is and should be. Wellmon and another critic, Osita Nwanevu, point out that there doesn’t need to be anything particularly special about the four years that students spend pursuing an undergraduate degree. As Wellmon comments, “There is, thankfully, no going back to the nineteenth-century Protestant college of Christian gentlemen. And that leaves contemporary colleges, as we might conclude from Deresiewicz’s jeremiad, still rummaging about for sources of meaning and ethical self-transformation. Some invoke democratic citizenship, critical thinking, literature, and, most recently, habits of mind. But only half-heartedly—and mostly in fundraising emails.”

Half-heartedly is right, precisely because most faculty know full well that all the substitutes for the older religious or gentlemanly ideals of “cultivation” still rest upon and invoke those predicates. But we can’t dispense with this language entirely because we have nothing else that spans academia that meaningfully casts shade on the instrumental, vocational, career-driven vision of education.

The sciences can in a pinch fall back on other ideas about utility and truth: their ontological assumptions (and the assumptions that at least some of the public make about the sciences) are here a saving grace. This problem lands much harder on the humanities, and not just as a challenge to their reproduction within the contemporary academy.

I wrote last year about why I liked something Teju Cole had said about writing and politics. Cole expressed his disappointment that Barack Obama’s apparent literacy, his love of good books, had not in Cole’s view made Obama a more consistently humane person in his use of military power.

I think Cole’s observation points to a much more pressing problem for humanistic scholars in general. Intellectuals outside the academy have been and still are under no systematic pressure to justify what they do in terms of outcomes. As a novelist or essayist or critic you can be a brutal misanthropist, you can drift off into hallucinogenic dream-states, you can be loving or despairing or detached. You can claim your work has no particular instrumental politics or intent, or that your work is defined by it. You don’t have to be right about whether what you say you’re doing is in fact what you actually do, but you still have a fairly wide-open space for self-definition.

Humanists inside the academy might think they have the same freedom to operate, but that clashes very hard with disciplinarity. Most of us claim that we have the authority that we do because we’ve been trained in the methods and traditions of a particular disciplinary approach. We express that authority within our scholarly work (both in crafting our own and in peer reviewing and assessing the work of others) and in our curricular designs and governance. And most of us express, to varying degrees, a whiggish or progressive view of disciplinarity, that we are in our disciplines understanding and knowing more over time, understanding better, that we are building upon precedent, that we are standing on the shoulders of someone–if not giants, at least people the same size as us. If current disciplinary work is just replacing past disciplinary work, and the two states are essentially arbitrary, then most of our citational practices and most of our curricular practices are fundamentally wasted effort.

So if you’re a moral philosopher, for example, you really need to think in your own scholarly work and in your teaching of undergraduates that the disciplined study of moral philosophy provides systematic insights into morality and ethics. If it does, it shouldn’t seem like a big leap to suggest that such insight should allow those who have it to practice morality better than those who have not. This doesn’t mean necessarily that a moral philosopher has to be more moral in the conventional terms of a dominant moral code. Maybe the disciplinary study of morality and ethics leads scholars more often to the conclusion that most dominant moral codes are contradictory or useless. Or that morality is largely an arbitrary expression of power and domination. Doesn’t really matter what the conclusions are, just that it’s reasonable to think that the rigorous disciplinary study of morality through philosophy should “cultivate the humanity” of a moral philosopher accordingly.

But if you’ve known moral philosophers, you’ve known that there is not much of a notable difference between them and other academics, between them and other people with their basic degree of educational attainment, between them and other people with the same social backgrounds or identities, between them and other people from the same society, and so on, in terms of morality and ethics. It seems to me that what they know has strikingly little effect on who they are, how they act, what they feel.

Many humanist scholars would say that reading fiction gives us insights into what it means to be human, but it’s pressingly difficult to talk about what those insights have done to us, for us, to describe what transformations, if any, we’ve undergone. Many historians would argue that the disciplined study of history teaches us lessons about the human condition, about how human societies navigate both common social and political challenges and about what makes the present day distinctively different from the past.

I’m often prepared to go farther than that. Many of my colleagues disliked a recent assessment exercise here at the college where we were asked about a very broad list of possible “institutional learning goals”. I disliked it too, mostly because of how assessment typically becomes quantitative and incremental. I didn’t necessarily dislike the breadth, though. Among the things we were asked to consider is whether our disciplines teach values and skills like “empathy”. And I would say that yes, I think the study of history can teach empathy: a student might, through studying history, become more able to feel empathy in a wider and more generative range.

The key for me is that word, “might”. If moral philosophers are not significantly more moral, if economists are not significantly more likely to make superior judgments about managing businesses or finances, if historians are not significantly better at applying what they know about past circumstances to their own situations, if literary critics don’t seem altogether that better at understanding the interiority of other people or the meaning of what we say to one another, then that really does call into question that vague “other” that we commonly say separates a liberal arts approach to education from a vocational strategy.

No academic (I hope) would say that education is required to achieve wisdom. In fact, it is sometimes the opposite: knowing more about the world can be, in the short-term, an impediment to understanding it. I think all of us have known people who are terrifically wise, who understand other people or the universe or the social world beautifully without ever having studied anything in a formal setting. Some of the wise get that way through experiencing the world, others through deliberate self-guided inquiry.

What I would be prepared to claim is something close to something Wellmon says: that perhaps college “might alert students to an awareness of what is missing, not only in their own colleges but in themselves and the larger society as well”.

But my “might” is a bit different. My might is literally a question of probabilities. A well-designed liberal arts education doesn’t guarantee wisdom (though I think it can guarantee greater concrete knowledge about subject matter and greater skills for expression and inquiry). But it could perhaps be designed so that it consistently improves the odds of a well-considered and well-lived life. Not in the years that the education is on-going, not in the year after graduation, but over the years that follow. Four years of a liberal arts undergraduate experience could be far more likely to produce not just a better quality of life in the economic sense but a better quality of being alive than four years spent doing anything else.

I think I can argue that the disciplinary study of history can potentially contribute to the development of a capacity for empathy, or emotional intelligence, an understanding of why things happen the way that they do and how they might happen differently, and many other crafts and arts that I would associate as much with wisdom as I do with knowledge, with what I think informs a well-lived life. But potential is all I’m going to give out. I can’t guarantee that I’ll make someone more empathetic, not least because I’m not sure how to quantify such a thing, but also because that’s not something everybody can be or should be counted upon to get from the study of history. It’s just, well, more likely that you might get that than if you didn’t study history.

This sense of “might” even justifies rather nicely the programmatic hostility to instrumentally-driven approaches to education among many humanists. Yes, we’re cultivating humanity, it’s just that we’re not very sure what will grow from any given combination of nutrients and seeds. In our students or ourselves.

This style of feeling through the labyrinth gives me absolutely no title to complacency, however. First, it’s still a problem that increased disciplinary knowledge and skills do not give us proportionately increased probability of incorporating that knowledge into our own lives and institutions. At some point, more rigorous philosophical analyses about when to pull the lever on a trolley or more focused historical research into the genesis of social movements doesn’t consistently improve the odds of making better moral decisions or participating usefully in the formation of social movements.

Second, I don’t think most curricular designs in contemporary academic institutions actually recognize the non-instrumental portion of a liberal-arts education as probabilistic. If we did see it that way, I think we’d organize curricula that had much less regularity, predictability and structure–in effect, much less disciplinarity.

This is really the problem we’re up against: to contest the idea that education is just about return-on-investment, just about getting jobs, we need to offer an education whose structural character and feeling is substantially other than what it is. Right now, many faculty want to have their cake and eat it too, to have rigorous programs of disciplinary study that are essentially instrumental in that they primarily encourage students to do the discipline as if it were a career, justified in a tautological loop where the value of the discipline is discovered by testing students on how they demonstrate that the discipline is, in its own preferred terms, valuable.

If we want people to take seriously that non-instrumental “dark side of the moon” that many faculty claim defines what college has been, is and should remain, we have to take it far more seriously ourselves, both in how we try to live what it is that we study and in how we design institutions that increase the probabilities that our students will not just know specific things and have specific skills but achieve wisdoms that they otherwise could not have found.

Read the Comments https://blogs.swarthmore.edu/burke/blog/2014/03/28/read-the-comments/ Fri, 28 Mar 2014 20:52:41 +0000

I keep coming back, obsessively and neurotically, to the question of what a liberal arts education is good for.

I do think it helps with the skills that pay the bills. I do think it can make you a better citizen. I do think it can help you lay the foundation for the examined life. It doesn’t always do that, and there are many other ways to get skills, to learn to be a better participant in your social and political worlds, or to become a critical thinker.

A modest example of the possibilities occurred to me today. The concept of social epistemology is becoming more important in philosophy as it is applied both analytically and technically to various kinds of digitally-mediated crowdsourcing. One strain of thought about social epistemology might suggest that philosophy could be as much an ethnographic discipline as an interpretative one, that it could look for how social groups generate epistemological or philosophical frameworks out of experience. There are plenty of other ways to take an interest in how people think in their social practices and everyday lives about ethics, knowledge, and so on, in any event. The question in part is, “What could a liberal arts education–or formal scholarship–add to such everyday, lived thinking that it doesn’t already have?”

I’m going to do something a bit unusual. Rather than the usual “don’t read the comments!” I’m going to suggest that at least sometimes comments on Internet sites offer some insights into how people in general think.

Take a look at this Gawker thread about a tailgater and the “karmic justice” meted out to him for following the driver ahead of him too closely and aggressively. (He eventually passes to the right at high speed, gives the driver the finger multiple times, merges back left on a lightly wet road and loses control of his truck, crashing into the median.)

The main story accepts the “karmic justice” narrative. But in the comments, three different interpretations eventually emerge.

The first validates the main story: the tailgater was unambiguously in the wrong and it is right to feel some vindication at his misfortune.

The second holds that the tailgater was acting poorly but that the driver making the videotape was acting poorly too, for two reasons: first, that the driver being tailgated was videotaping (and was therefore indulging in dangerous behavior as well), and second, that the driver being tailgated (the tailgatee?) should simply have pulled to the right and let the faster driver go ahead.

The third is unabashedly on the side of the tailgater. These commenters hold that tailgating is a practical, even necessary, response to drivers who insist on blocking the left lane of any roadway at a speed slower than the speed that the tailgater wishes to go. They support both the tailgating and the obscene gesture and regret that the tailgater had an accident.

There’s a minor fourth faction that is primarily irritated at yet another person videotaping with a smartphone held in portrait mode. Protip: they at least are completely right.

What’s interesting in the comments is that each group has strategies for replying to the other two. The anti-tailgaters point out that the roadway in question is not a major highway, that the driver being tailgated was going the maximum speed limit, that the driver says she did not look at the camera while holding it, that she says she was going to be turning left very soon and that traffic to the right was fairly heavy. The blame-on-all-sides camp finds that the videotaping driver has a history of being aggrieved about a lot of things, that there seemed to be plenty of space to the right, and that it’s unwise (especially in Florida) to tangle with a person demonstrating road rage. The pro-tailgaters…well, they don’t seem to have much other than a view that tailgating is necessary and justified.

It’s easy to just say, “A pox on all their houses” or to simply join in the debate on one side or another. I guess what I’m struck by is that when you pull back a little, each of these approaches is informed, whether the people are consciously aware of it or not, by some potentially consistent or coherent views of what’s right and wrong, wise and unwise, fair and unfair.

What I wonder sometimes is whether we could construct a coherent underlying credo or statement about our views, if we were all asked to step back from the views we can express so hotly in comments threads in social media or other contexts. So much of our discourse, online and offline, is reactive or dialectical. That’s actually good in the sense that real cases or experiences are a better place to start, perhaps, than arid thought-experiment scenarios about pulling trolley levers to save or not save lives. But maybe this is where some sort of liberal-arts experience could help. It could help us to go from a reactive reading to a more contemplative description of why each of us thinks what we think.

Suppose I’m against the tailgater: why? Because I object morally to tailgating period–its aggression, its danger? Is it ok to be aggressive in return? (The driver in the video apparently has specified that she did not brake-check the tailgater.) How confident am I that tailgating is the result of road rage? How much do I actually know about another driver, and why should I be confident about my strong moral readings of someone whom I only know in a single dimension of their behavior? If I was going really slowly, would tailgating me be justified?

Suppose I’m against both of them: why? Can I trust that someone can in fact be a good driver while holding up a smartphone and not looking at it? Why do I trust or not trust in that proposition? Why not, as this approach suggests, just yield to someone determined to be antisocial and get out of their way? Is being righteous in opposing a tailgater just a kind of self-indulgent or egotistical response? Or an aggression of another kind? What does that imply about other cases?

Suppose I’m certain that if I want to go a particular speed, it’s right to allow me to do so until or unless I am charged with the crime of speeding or unless I have an accident as a result. What else does that imply? Do I mean it in all cases or is driving a special case? Am I right that I’m a better driver than most others? What does that entitle me to if so?

I suspect that in a lot of cases, driving (and other everyday practices) are held to be “special cases”–that to try to work back to some bigger or more comprehensive view of the world isn’t going to work for many people in the Gawker thread. But that too is interesting: if much of how we read the “manners” of everyday life is ad hoc, that’s not necessarily bad, just significant.

Now I’m In For It https://blogs.swarthmore.edu/burke/blog/2014/01/20/now-im-in-for-it/ Mon, 20 Jan 2014 15:49:15 +0000

So I’ve overhauled my survey course on the history of the Atlantic slave trade in West Africa this semester as an experiment in “flipping the classroom”. I’m not quite flipping the way that some do, with lectures as homework and problem sets in the classroom, but that’s a bit of the spirit of what I’m doing.

The way the course is going to work is that the syllabus will be something of a work in progress, especially after the first five weeks or so.

I’ve identified two major questions that will drive the course: why did the Atlantic slave trade happen to West and Central African societies, and what were the consequences of incorporation into the Atlantic system for West and Central African societies? We will spend time in class sessions breaking down those questions into more manageable subquestions that have purchase in the existing historiography. During class, and sometimes outside of class, as an assignment, we will be locating relevant scholarship or other materials to help us work with these questions, and we will then read some of that work together in class, taking collaborative notes on a shared document.

I’ll have another shared document called “Lecture Requests” open during class where students can semi-anonymously request that I spend some time talking about a subject that is either confusing in the scholarly literature or that seems both important and too diffuse for us to fully grasp from the readings alone. Sometimes I’ll try to lecture as soon as I see a request, other times I’ll wait and do it in the next class, especially when I feel the need to prep a bit on that particular subject.

We’ll also keep a spreadsheet “reading log” that I will eventually export into Viewshare so we can create visualizations from our reading (say, a map of places in West and Central Africa that we read about during the semester). We’ll have a few other docs open during most class sessions (one for harvesting good specific search terms for further use in locating appropriate materials, for example).

I’m doing this because I’d like to see whether there’s a better way to produce more consistent command over a body of knowledge than my usual pedagogy does and at the same time to do something more powerful or lasting in terms of showing students how to learn, how to build knowledge out of reading and note-taking. I’m fairly convinced by Randy Bass, Cathy Davidson, Douglas Thomas and others that if we want to make the case that maintaining the high quality of intensive face-to-face teaching requires and thus justifies hiring expensive, highly trained professionals, we need to find ways to make sure that the time we spend in classrooms is the best use of that time that we can think of within the information-rich, profoundly-networked world that we actually inhabit.

A lot of the class will be visible in public (and I’m linking it to Hastac’s #FutureEd initiative), so I invite curious onlookers and helpful kibitzers to take a look now and again and see what they think about how it’s going.

Teleology and the Fermi Paradox https://blogs.swarthmore.edu/burke/blog/2013/07/25/teleology-and-the-fermi-paradox/ Thu, 25 Jul 2013 18:21:22 +0000

I sometimes joke to my students that “teleology” is one of those things like “functionalism” that humanist intellectuals now instinctively recoil from or hiss at without even bothering to explain any longer to a witness who is less in-the-know what the problem is.

But if you want a sense of how there is a problem with teleology that is a meaningful impediment to thoughtful exploration and explanation of a wide range of existing intellectual problems, take a look at io9’s entry today that reports on a recent study showing that self-replicating probes from extraterrestrial intelligences could theoretically reach every solar system in the galaxy within 10 million years of an initial launch from a point of origin.

I’ve suggested before that exobiology is one of the quintessential fields of research that could benefit from keeping an eclectic range of disciplinary specialists in the room for exploratory conversations, and not just from within the sciences. To make sure that you’re not making assumptions about what life is, where or how it might be found or recognized, and so on, you really need some intellectuals who have no vested interest in existing biological science and whose own practices could open up unexpected avenues and insights into the problem, whether that’s raising philosophical and definitional questions, challenging assumptions about whether we actually could even recognize life that’s not as we know it (or whether we should want to), or offering unexpected technical or artistic strategies for seeing patterns and phenomena.

As an extension of this point, look at the Fermi Paradox. Since it was first laid out in greater detail in 1975 by Michael Hart, there’s been a lot of good speculative thinking about the problem, and some of it has headed in the direction I’m about to explore. But you also can see how for much of the time, responses to the concept remain limited by certain assumptions that are especially prevalent among scientists and technologists.

At least one of those limits is an assumption about the teleology of intelligence, an assumption that intelligent life will commonly or inevitably trend towards social and technological complexity in a pattern that strongly resembles some dominant modern and Western readings of human history. While evolutionary biology has long since moved away from the assumption that life trends towards intelligence, or that human beings are the culmination of the evolution of life on Earth, some parallel speculative thinking about the larger ends or directionality of intelligent life still comes pretty easily for many, and is also common to certain kinds of sociobiological thought.

This teleology assumes that agriculture and settlement follow intelligence and tool usage, that settlement leads to larger scales of complex political and social organization, that larger scales of complex political and social organization lead to technological advancement, and that this all culminates in something like modernity as we now live it. In the context of speculative responses to the Fermi Paradox (or other attempts to imagine extraterrestrial intelligence) this produces the common view that if life is very common and intelligent life somewhat common that some intelligent life must lead to “technologically advanced civilizations” which more or less conform to our contemporary imagination of what “technological advancement” forward from our present circumstances would look like. When you add to this the observation that in some cases, this pattern must have occurred many millions of years ago in solar systems whose existence predates our own, you have Fermi’s question: where is everybody?

But this is where you really have to unpack something like the second-to-last term in the Drake Equation, which was an attempt to structure contemplation of Fermi’s question. The second-to-last term is “the fraction of civilizations that develop a technology that releases detectable signs of their existence into space”. For the purposes of the Drake Equation, the fraction of civilizations that do not develop that technology is not an interesting line of thought in its own right, except inasmuch as speculation about that fraction leads you to set the value of that term low or high. All we want to know in this sense is, “how many signals are there out there to hear?”
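To make the arithmetic concrete, here is a minimal sketch of the Drake Equation as a calculation. Every parameter value below is a purely illustrative assumption, not an estimate drawn from the post or from any study; the only point is to show how much the final number swings when you change the second-to-last term while everything else is held fixed.

    # A purely illustrative sketch of the Drake Equation. All parameter values
    # are assumptions chosen for demonstration, not estimates from any source.

    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
        """Return N, a rough count of detectable civilizations in the galaxy.

        r_star   -- average rate of star formation (stars per year)
        f_p      -- fraction of stars with planetary systems
        n_e      -- habitable planets per such system
        f_l      -- fraction of habitable planets where life appears
        f_i      -- fraction of life-bearing planets that develop intelligence
        f_c      -- fraction of intelligent species that release detectable signals
        lifetime -- average number of years such signals remain detectable
        """
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

    # Hold the astronomical and biological terms fixed and vary only f_c,
    # the second-to-last term, to see how strongly the answer depends on it.
    for f_c in (1.0, 0.1, 0.001):
        n = drake(r_star=7, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=f_c, lifetime=10000)
        print("f_c = {}: roughly {:.0f} detectable civilizations".format(f_c, n))

Seen this way, almost all of the uncertainty, and therefore almost all of the interesting speculation, lives in the terms we understand least, which is exactly where teleological assumptions do their quiet work.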

But if you back up and think about these questions without being driven by teleological assumptions, if you don’t just want to shortcut to the probability that there is something for SETI to hear–or to the question of why there aren’t self-replicating probes in our solar system already–you might begin to see just how much messier (but more interesting) the possibilities really are. Granted that if the number that the Drake Equation produces is very very large right up until the last two terms (up to “the fraction of planets with life that develop intelligence”) then somewhere out there almost any possibility will exist, including a species that thinks very substantially the way we do and has had a history similar to ours, but teleology (and its inherent narcissism) can inflate that probability very wildly in our imaginations and blind us to that inflation.

For example:

In the two centuries since the Industrial Revolution really took hold, we’ve been notoriously poor at predicting the forward development of technological change. The common assumption at the end of the 19th Century was to extrapolate the rapid development of transportation infrastructure and assume that “advancement” would always mean that travel would steadily grow faster, cheaper, more ubiquitous. In the mid-20th Century it was common to assume that travel and residence in space would soon be common and would massively transform human societies. Virtually no one saw the personal computer or the Internet coming. And so on. The reality of 2013 should be enough to derail any assumptions about our own technological future, let alone an assumption that there will be common pathways for the technological development of other sentient life. To date, futurists have been spectacularly wrong again and again about technology in fundamental ways, often because of the reigning teleologies of the moment.

It isn’t just that we tend to foolishly extrapolate from our technological present to imagine the future. We also have very impoverished ways of imagining the causal relationship between other possible biologies of intelligent life and technosocial formations, even in speculative fiction. What technologies would an underwater intelligence develop? An intelligence that communicated complex social thoughts through touch or scent? An intelligence that commonly communicated to other members of its species with biological signals that carried over many miles rather than at close distances? And so on. How much of our technological history (or rather histories, plural, because humanity has more than one) is premised on our particular biological history, the particular contingencies of our physical and cultural environments, and so on? Lots, I think. Even within human history, there is plenty of evidence that fundamental ideas like the wheel may not be at all inevitable. Why should we assume that there is any momentum towards the technological capabilities involved in sending self-replicating probes to other star systems or any momentum towards signalling (accidentally or purposefully)?

Equally: why should we assume that any other species would want to or would ever even think of the idea? Some scientists engaging the Fermi Paradox have suggested that signalling or sending probes might prove to be dangerous and that this is why no one seems to be out there. That is, they’ve assumed that a common sort of species-independent rationality would or could guide civilizational decision-making, and so either everyone else has the common sense to be quiet or everyone who wasn’t quiet is dead because of it. But more fundamentally, it seems hard for a lot of the people who engage in this sort of speculation to see something like sending self-replicating probes for what it really might be characterized as: a gigantic art project. It’s no more inevitable than Christo draping canyons in fabric or the pharaohs building pyramids. It’s as much about aesthetics and meaning as it is technology or progress. There is no reason at all to assume that self-replicating probes are a natural or inevitable idea. We might want to at least consider the alternative: that it is a fucking strange idea that another post-industrial, post-scarcity culture of intelligences with a lot of biological similarity to us might never consider or might reject as stupid or pointless even if it occurred to them.

Anthropocentrism has died slowly by a thousand cuts rather than a single decisive strike, for all that our hagiographies of Copernicus and Galileo sometimes suggest otherwise. Modern Western people commonly accept heliocentrism, and can dutifully recite just how small we are in the universe. Until we began getting data about other solar systems, it was still fairly common to assume that the evolution of our own, with its distribution of small rocky planets and gas giants, was the “normal” solar system, which increasingly appears not to be the case. That too is not so hard to take on board. But contemporary history and anthropology provide us plenty of information to suspect that our anthropocentric (specifically modern and Eurocentric) understandings of how intelligence and technology are likely to interrelate are almost certainly equally inadequate to the reality out there.

The more speculative the conversation, the more it will benefit from a much more intellectually and methodologically diverse set of participants. Demonstrating that it’s possible to blanket the galaxy with self-replicating probes within ten million years is interesting, but if you want to know why that (apparently) didn’t happen yet, you’re going to need some philosophers, artists, historians, writers, information scientists and a bunch of other folks plugged into the discussion, and you’re going to need to work hard to avoid (or at least make transparent) any assumptions you have about the answers.
