Inchworm

Over the last decade, I’ve found my institutional work as a faculty member squeezed into a kind of pressure gradient. On one side, our administration has been requesting or requiring more and more data, reporting, and procedures, either to document adherence to the standards of external institutions or to further professionalize and standardize our operations. On the other side, I have colleagues who either ignore such requests (both specific ones and the entire issue of administrative process) to the maximum extent possible or reject them entirely, on grounds that I find either ill-informed or breathtakingly sweeping.

That pressurized space forms from wanting to be helpful while also wanting to take governance seriously. I think stewardship doesn’t conform well to a hierarchical structure, but it should also come with some sense of responsibility to the reality of institutions and their relationship to the wider world. The strongest critics of administrative power that I see among faculty, both here at Swarthmore and in the wider world of public discourse by academics, don’t seem very discriminating in how they pick apart and engage various dictates or initiatives, and, more importantly, rarely seem to have a self-critical perspective on faculty life and faculty practices. At the same time, there’s a lot going on in academia that comes to faculty through administrative structures and projects, and quite a lot of that activity is ill-advised or troubling in its potential consequences.

A good example of this confined space perennially forms for me around assessment, which I’ve written about before. Sympathy for my colleagues charged with administrative responsibilities around assessment means I should take what they ask me to produce seriously, both because there are consequences for the institution if faculty fail to comply in the specified manner and because I value those colleagues and even value the concepts embedded in assessment.

On the most basic human level, I agree that the unexamined life is not worth living. I agree that professional practices which are not subject to constant examination and re-evaluation have a tendency to drift towards sloppiness and smug self-regard. I acknowledge that given the high costs of a college education, potential students and their families are entitled to the best information we can provide about what our standards are and how we achieve them. I think our various publics are entitled to similar information. It’s not good enough to say, “Trust us, we’re great.” That’s not even healthy if we’re just talking to ourselves.

So yes, we need something that might as well be called “assessment”. There is some reason to think that faculty (or any other group of professionals) cannot necessarily be trusted to engage in that kind of self-examination without some form of institutional support and attention to doing so. And what we need is not just introspective but also expressive: we have to be able to share it, show it, talk about it.

On the other hand, throughout my career, I’ve noticed that a lot of faculty do that kind of reflection and adjustment without being monitored, measured, poked or prodded. Professionalization is a powerful psychological and intellectual force through the life cycle of anyone who has passed through it, for good and ill. The most powerfully useful forms of professional assessment or evaluation that I can think of are naturally embedded in the workflow of professional life. Atul Gawande’s checklists were a great idea because they could be inserted into existing processes of preparation and procedure, and because they were compatible with the existing values of professionals. A surgeon might grouse at the implication that they needed to be reminded about which leg to cut off in an amputation, but that same surgeon would agree that it’s absolutely essential to get that right.

So assessment that exists outside of what faculty already do anyway to evaluate student learning during a course (and between courses) often feels superfluous, like busywork. It’s worse than that, however. Not only do many assessment regimes add procedures like baroque adornments and barnacles, they attach to the wrong objects and measure the wrong things. The amazing thing about Gawande’s checklists is that they spread because of evidence of their very large effect size. But the proponents of strong assessment regimes, whether agencies like Middle States or Arne Duncan’s troubled bureaucratic regime at the U.S. Department of Education, habitually ignore evidence suggesting that assessment mostly measures the wrong things at the wrong time in the wrong ways.

The evidence suggests, especially for liberal arts curricula, that you don’t measure learning course by course and you don’t measure it ten minutes after the end of each semester’s work. Instead you ought to be measuring it over the range of a student’s time at a college or university, and measuring it well afterwards. You ought to be measuring it by the totality of the guidance and teaching a faculty member provides to individual students, and by moments as granular as a single class assignment. And you shouldn’t be chunking learning down into a series of discrete outcomes that are chosen largely because they’re the most measurable, but through the assemblage of a series of complex narratives and reflections, through conversations and commentaries.

In a given semester, what assessment am I doing whether I am asked to do it or not? In any given semester, I’m always trying some new ways to teach a familiar subject, and I’m always trying to teach some new subjects in some familiar ways. I am asking myself in the moment of teaching, in the hours after it, at the end of a semester and at the beginning of the next: did that work? What did I hope would work about it? What are the signs of its working: in the faces of students, in the things they say then and there in the class, in the writing and assignments they do afterwards, in the things they say during office hours, in the evaluations they provide me? What are the signs of success or failure? I adjust sometimes in the moment: I see something bombing. I see it succeeding! I hold tight in the moment: I don’t know yet. I hold tight in the months that follow: I don’t know yet. I look for new signs. I try it again in another class. I try something else. I talk with other faculty. I write about it on my blog. I read what other academics say in online discussion. I read scholarship on pedagogy.

I assess, I assess, I assess, in all those moments. I improve, I think. But also I evolve, which is sometimes neither improvement nor decline, simply change. I change as my students change, as my world changes, as my colleagues change. I improvise as the music changes. I assess.

Why is that not enough for the agencies, for the federal bureaucrats, for the skeptical world? There are two reasons. The first is that we have learned not to trust the humanity of professionals when they assure us, “Don’t worry, I’m on it.” For good reasons, sometimes. Because professionals say that right up to the moment that their manifest unprofessionalism is laid screamingly bare in some awful rupture or failure. But also because we are in a great war between knowing that most of the time people have what my colleagues Barry Schwartz and Ken Sharpe call “practical wisdom” and knowing that some of the time they also have an innocent kind of cognitive blindness about their work and life. Without any intent to deceive, I can nevertheless think confidently that all is well, that I am teaching just as I should, that I am always above average and getting better all the time, and be quite wrong. I might not know that I’m not seeing or serving some group of students as they deserve. I might not know that a technique I think delivers great education only appears to, because I design tests or assignments that evaluate only whether students do what I want them to do, not whether they’ve learned or become more generally capable. I might not know that my subject doesn’t make any sense any longer to most students. Any number of things.

So that’s the part that I’ll concede to the assessors: it’s not enough for me to be thoughtful, to be practically wise, to work hard to sharpen my professionalism. We need something outside ourselves: an observer, a coach, a reader, an archive, a checklist.

I will not concede, however, that their total lack of interest in this vital but unmeasurable, unnumbered information is acceptable. This should be the first thing they want: our stories, our experiences, our aspirations, our conversation. A transcript of the lived experience of teaching. Here is the second reason the assessors decide that what we think about our teaching is not wanted or needed: they believe that all rhetoric is a lie, all stories are told only to conceal, all narrative is a disguise. They think that the work of interpretation is the work of making smoke from fog, of making lies from untruths. They think that because stories belong at least somewhat to the teller, because narratives inscribe the authority of the author. They don’t want to know how I assess the act of teaching as I perform it because they want a product, not a process. They want data that belongs to them, not information that creates a relationship between the interpreter and the interpreted. They want to scrub evidence clean, to make an antiseptic knowledge. They want bricks and mortar and to be left alone to build as they will with it.

——————

I get tired of the overly casual use of “neoliberal” as a descriptive epithet. Here, however, I will use it. This is what neoliberalism does to rework institutions and societies into its preferred environment. This is neoliberalism’s enclosure, its fencing off of commons, its redrawing of the lines. The first thing that gets done with data that has had its narrative and experiential contaminants scrubbed clean is that it is fed back into the experience of the laborers who first produced it. This was done even before we lived in an algorithmically mediated world, and has only intensified since.

The data is fed back in to tell us what our procedures actually are and what our standards have always been. (Among those procedures will always be the production of the next generation of antiseptic data for future feedback loops.) It becomes the whip hand: next year you must be .05% better at the following objectives. If you have objectives not in the data, they must be abandoned. If you have indeterminacies in what you think “better” is, that’s inadmissible: rarely is this looping even subject to something like a Bayesian fuzziness. This is not some exaggerated dystopian nightmare at the end of an alarmist slippery slope: what I’m describing has already happened to higher education in the United Kingdom, largely accomplishing nothing besides sustaining a class of transfer-seeking technocratic parasites who have settled into the veins of British universities.

It’s not just faculty who end up caught in the loop, and, like frogs boiling slowly to death, we often don’t see it happening as it happens. We just did our annual fire drill here in my building. This year the count we did of the evacuees seemed more precise and drawn-out than last year, and we had a mini-lecture about the different scenarios and locations for emergency assembly, and it occurred to me: this is so we can report that we did .05% better than last year.

We always have to improve just a little, just as everything has to be “growth-based,” a little bigger next year than last year. It’s never good enough to maintain ground, to defend a center, to sustain a tradition, to keep a body healthy, happy, and well. Nor is it ever good enough to be different next year: not a bit bigger, not a bit better, but different. New. Strange. We are neither to be new nor are we to maintain. We are to incrementally approach a preset vision of a slightly better but never perfect world. We are never to change or become different, only to be disrupted. Never to commune or collaborate, always to be architected and built.

———————

So here I am in the gradient again, bowed down by the push on all sides. I find it so hard when I talk to faculty and they believe that their teaching is already wholly and infinitely sufficient. Or that it’s nobody’s business but their own how they teach, what they teach, and what comes of their teaching. Or that the results of their teaching are so sublime, ineffable and phenomenologically intricate that they can say nothing of outcomes or consequences. All these things get said, at Swarthmore and in the wider world of academia. An unexamined life.

Surely we can examine and share, express and create. Surely we can provide evidence and intent. Assess and be assessed in those ways. Surely we don’t have to bury that underneath fathoms of tacit knowledge and inexpressible wisdom. We can have our checklists, our artifacts.

But surely too we can expect from administrations that want to be partners that we will not cooperate in building the Great Machine out of the bones of our humane work. That we’re not interested in being .05% better next year, but instead in wild improvisations and foundational maintenance, in becoming strange to ourselves and familiar once again, in a month, a moment or a lifetime. Surely that’s what it means to educate and become educated in an uncertain world: not .05% more measured comprehension of the impact of the Atlantic slave trade on Sao Tome, but thinking about how a semester of historical study of the Atlantic slave trade might help a poet forty years hence to write poems, might sharpen an analytic mind, might complicate what was simple or simplify what was complex. Might inform a diplomat ten years from now, might shape a conservative’s certainty that liberals have no answers when he votes in next year’s Presidential race. Might inspire a semester abroad, might be an analogy for an experience already had. I can talk about what I do to build ramps to all those possibilities and even to the unknown unknowns in a classroom. I can talk about how I think it’s working and why I think it’s working. But don’t do anything that will lead to me or my successors having to forgo all of that thought in favor of .05% improvements onward into the dreary night of an incremental future.


5 Responses to Inchworm

  1. Sam Zhang says:

    One bizarre possible outcome is if a change of guard happens halfway through implementing a more qualitative assessment system. Suddenly a panic erupts around the lack of quantitative data, and the administration calls in Natural Language Processing experts to attempt to predict how well a teacher did based on this stack of essays. Maybe they scramble to apply some labels to how well teachers did, then train a model based on text features like length, vocabulary, and so forth (both the teachers’ writing and the student feedback).
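
    A minimal sketch of what that retrofit might look like, assuming Python and scikit-learn; every narrative, label, and training example below is invented for illustration, not a real assessment pipeline:

    ```python
    # Hedged, hypothetical sketch: an administration retrofits quantitative
    # scores onto narrative assessments. All data here is made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented narratives plus hastily assigned after-the-fact labels.
    narratives = [
        "Students struggled with the midterm, so I redesigned the unit...",
        "Seminar discussions were lively and the final essays improved...",
    ]
    labels = ["ineffective", "effective"]  # retrofitted, not grounded

    # Surface features (term frequencies, which also pick up length and
    # vocabulary effects) feed a linear classifier -- exactly the kind of
    # text-feature model imagined above.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(narratives, labels)

    # The model now emits a confident-looking score for any new narrative,
    # while everything that made the prose meaningful has been discarded.
    print(model.predict(["I experimented with an archive-based assignment."]))
    ```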

    Of course that isn’t a reason to avoid this course of action. It’s just interesting to think how qualitative data doesn’t stay “qualitative” without a constant interpretative effort.

    There are a variety of reasons why I think feeding assessments to machines could be disastrous (but perhaps appear magically efficient for the first year or two). But that might be a fight for another day — once I’m arguing against someone other than the devil in my brain.

  2. Timothy Burke says:

    Folks have got to understand that qualitative data is about putting the tricky, shifting world of human meaning back into the picture and insisting that it can’t be understood better by stripping away all of its semantic character. Across a very broad front, this is where the data-fetishists keep getting mugged by human reality, and their usual response is to try to make people act more like information and code. It is for me a familiar and dreary aspect of virtual worlds: designers want players to act a certain way; players don’t; designers do their best to stamp out all the ways that the players might break the design. What’s left is a kind of ludic version of that scene in “Metropolis” where the man gets trapped in the machine, both comic and horrifying (all the more because people are doing it for the sake of fun).

  3. Contingent Cassandra says:

    This rings true to me. I’ve long thought that one of the things lost in the increased use of faculty for whom what we usually call “service” is not an official part of the job is a kind of localized, ongoing, often just-in-time research that takes place when faculty within a department or program talk to each other about what their students need, and how that is changing, and craft and re-craft curricula accordingly. At this point, even in schools with still relatively robust tenure-track systems and faculty governance, it’s often the case, especially with the core/intro curriculum, that the faculty who teach under average conditions have very little voice in shaping the curriculum, and the faculty who shape the core curriculum as part of their service work have little experience in teaching those courses under average (i.e. adjunct/contingent load) conditions.

    This is not a good thing, I’ve found, to point out to data-oriented tenure-track colleagues. It’s even worse to point out that some people are increasingly making their livings telling other people how to teach, based on “research” that is mostly divorced from the local classroom context, while doing very little teaching themselves. It *is* a good (or at least conciliatory) thing, in such conversations, to claim that one’s approach is “evidence-based,” even if one came up with the approach first and looked for the support later (which tends to be the way most “evidence-based” pedagogy arguments are born, I rather suspect).

    It’s also not a good idea to point out that most quantitative data has its origins in questions posed using language, with all the possibilities for multiple or mis-interpretations, unconscious steering, etc., etc. that involves.

    But you’ve described all of the above, and more, far better than I can.

  4. Shallot says:

    I’m a Swarthmore grad who got a PhD in history and then moved into administration, and I’m now working in assessment. I see my training as a historian as good preparation for thinking about assessment, because I try to frame it as being about “evidence” rather than “data,” a term that I try to avoid using in most situations.

    I understand many of the frustrations faculty members have with assessment, and I share some of them myself. I’ve seen institutions implement assessment requirements in ways that are inflexible and onerous, ways that make assessment a bureaucratic requirement without meaning. Some assessment leaders are coming to this realization as well, arguing against the “compliance culture” that has developed and seeking to make learning assessment meaningful. (See the recent book from the National Institute for Learning Outcomes Assessment, Using Evidence of Student Learning to Improve Higher Education.) Middle States has recognized this as well, and the new standards offer more flexibility in terms of assessing student learning outcomes.

    One of my main frustrations is that many faculty have little understanding of the history of assessment and accreditation and the relationship between them. Much of the recent emphasis on assessing student learning comes as a reaction to the report of the Spellings Commission, which seemed to suggest that mass standardized testing, using tests like the CLA, was likely to be adopted as a way to evaluate student learning gains in higher education. Regional accreditors, along with much of the higher education community, opposed this, especially given the wide variety of missions and types of institutions in the U.S. The assessment requirements that all regional accreditors have developed were a way to establish the faculty at each institution as the only ones who could determine what students should learn and assess that learning. I think it’s important to recognize this. Accrediting requirements could be so much worse in terms of assessing student learning, and I think failing to acknowledge this is an example of the lack of discrimination faculty sometimes show in attacking initiatives. The fact that faculty are supposed to be the ones leading assessment is why assessment can be a burden to faculty – we administrators cannot do the work for you. At the institutions I’ve worked at, the increase in service work from assessment has been a greater source of faculty resistance than philosophical opposition.

    Hopefully I am an administrator who is a partner to faculty, working with them to figure out how to incorporate assessment into work they already do and to make it meaningful and useful for them. I’m not looking for small, incremental progress, and I value stories and narratives as part of the assessment process, as long as a couple of anecdotes are not the sole evidence of student learning. At the heart of assessment I see the asking of questions about student learning and the attempt to answer those questions through some sort of systematic inquiry. This is similar to what faculty do in their own work. Unfortunately, in practice, many institutions do have reporting requirements that make it difficult to approach assessment in this way, but, at least in the Middle States region, I don’t think it has to be set up this way.

  5. Alice says:

    So, I teach in a high school, which is clearly different from college. But a lot of the concerns you raise apply in that setting, and there are also many similarities between a well-endowed boarding school and a well-endowed residential college.

    We do a lot of qualitative assessment. My department has an external review starting next week. Three teachers (at the high school and college level) are going to come by, watch some of our classes, talk to us, talk to our students, read documentation that we prepared, etc., and give us feedback based on that. I’m a new teacher, and I have a mentor who watches me teach and gives me written and verbal feedback every day.

    I guess I wanted to say that I think there is absolutely a feasible middle ground between hiding our practice and submitting to Science. And that I find it very helpful in my growth as a teacher, and in maintaining common purpose across a department.
