I don’t think it’s a secret that I am very frustrated with prevailing trends in higher education assessment. I feel bad that this frustration often makes me a major annoyance to great local colleagues in the faculty and administration who are responsible for ensuring that Swarthmore keeps its commitments and conforms more closely to those prevailing trends.
I recognize that faculty at many institutions are sometimes overly defensive about assessment of any kind. All of us should be constantly re-evaluating what’s working and not working about our teaching. Good re-evaluations shouldn’t just be private and introspective, because it’s a bit too easy to convince yourself that everything’s fine and you’ve done enough. It’s also important that we create some kind of transcript or data or visible record that the entire world can critically examine. Our students and their families, as well as our publics in general, are owed that.
We shouldn’t be too sensitive about assessment. And we shouldn’t be against it simply because it’s more work, though it’s not unreasonable to remove some other part of the labor of teaching a course to compensate for the work of producing assessment data. If it’s important, then it’s worth doing as something other than a freebie add-on to existing work.
I’m not against assessment in general. I’m against assessment as a diversionary tactic for government agencies trying to keep people from looking too closely at the failures of government. I’m against assessment as an unaccountable practice imposed upon professionals, a practice that actively contradicts what those professionals know about their own working conditions and practices and that cuts corners by using cookie-cutter bureaucratic procedures that treat all teaching institutions as if they’re doing the same thing under the same conditions. I’m against assessment when it trespasses against what my colleagues Barry Schwartz and Kenneth Sharpe describe as forms of professional and experiential “practical wisdom”.
I’m against assessment when it’s measuring the wrong things in the wrong ways. I’m against it when it’s about providing one organization the product it needs in order to give another organization what it needs, so that a third organization can please a fourth organization, all up and down the food chain. If that’s how meritocracies ensure their version of a full employment program, I’d just as soon have giant, clumsy, inflexible socialist bureaucracies instead, because at least more people get paid off a little bit that way.
In a recent discussion, one of my colleagues wearily suggested that we just render unto Caesar what is Caesar’s: do whatever our accreditors want so that they go away and let us get back to doing good work. In response, another colleague said, “As long as you’re doing what Caesar wants, why not make it useful for you too?” My typically convoluted attempt to push the metaphor further was this: “Convincing yourself that what Caesar wants is good for you too is pretty bad if you’re a barbarian beyond the Roman frontier.” What Caesar wants in this sense is a “civilizing process”. If you have another way of doing things that you feel is better for you, for your culture, for your world, then making Caesar’s way your own is the beginning of the end.
Especially when what Caesar wants isn’t even good for Caesar.
And that’s where How College Works kicks in. Daniel Chambliss and Christopher Takacs have some explicit things to say about assessment, and what they say is characteristically polite, measured, and backed up by detailed research. The most direct commentary comes in Chapter Eight, “Lessons Learned”. They first argue that some of the worst wastes of energy and resources at colleges and universities involve futile attempts to “microengineer human behavior” in strategic plans and other kinds of initiatives (here echoing Schwartz and Sharpe), as well as too much pursuit of “pedagogical innovation” (ok, that one leaves a bit of a mark on me personally). But they then note that after eleven years of close study of all the major styles of educational assessment, they “came away skeptical of the entire assessment enterprise”.
1) Because assessment regularly works with the wrong units of analysis. Courses, teachers, programs, and departments are the wrong units. Individual students are the right unit.
2) Because what you need to assess or understand is how students “experience your institution”. They add “Don’t assume that you know what matters.”
3) Because we should “be open to all outcomes”. Specifying a set of learning outcomes on a syllabus and then measuring only those outcomes completely misses the point when it comes to understanding what is and is not working in education.
4) Because assessment practices create far more data, and far more work in chasing that data, than they need. Because assessment practices end up interfering, to no good end, with the work faculty are already doing. (I’ll add something to that: and because people trying to enforce assessment practices often don’t believe faculty when they say so.)
But I think there’s more in the book that applies to assessment. Chambliss and Takacs argue throughout that a course or a semester or even several years of a matriculant’s experiences are not the right time frame for understanding what works and doesn’t work about a college education. At the end of a semester, for example, students often don’t really know yet what they’ve gotten from a particular course.
The authors observe that the vagueness of many liberal arts programs about how students derive the benefits of that education is empirically warranted. Meaning that trying to break each element of that education down into measurable, atomistic units, via rubrics and standards and lists, and then tinker one by one with those atomized elements, is missing the forest for the trees. It turns out, if you accept their research, that students get better at writing and speaking and thinking and understanding through the simultaneous, synergistic interaction of all of those activities, both in courses and outside of them. That they learn by watching others, by observing models (especially professors), by experimenting with their scholarly and personal personas in a safe environment. That efficacy in educating involves trying to nurture and support the richness and complexity of a purposeful, focused life.
Basically, I come away from How College Works thinking that the upshot of their argument, resting on empirically driven, carefully designed research, is what Geoffrey Rush’s character says repeatedly in Shakespeare in Love: that theater is naturally beset by “insurmountable obstacles on the road to disaster”, but that in the end all turns out well. “Why?” he is asked (at first by a hostile investor who reminds me very much of an accreditor from Middle States). “I don’t know,” he says. “It’s a mystery.”
My dream is that someday accreditors and federal bureaucrats and parents and publics will learn to take that insight seriously. It’s not obfuscation or defensiveness. It’s the truth. Not a mystery beyond understanding, but a mystery in the sense that the coming together of an education is about a great many things working simultaneously, none of which are properly understood or measured or changed when they’re treated in isolation from one another. It’s about process and flow, not product.