How College Works: Assessment

I don’t think it’s a secret that I am very frustrated with prevailing trends in higher education assessment. I feel bad that this frustration often forces me to be a major annoyance to great local colleagues in the faculty and administration who have responsibilities for ensuring that Swarthmore keeps its commitments and conforms more closely to those prevailing trends.

I recognize that faculty at many institutions are sometimes overly defensive about assessment of any kind. All of us should be constantly re-evaluating what’s working and not working about our teaching. Good re-evaluations shouldn’t just be private and introspective, because it’s a bit too easy to convince yourself that everything’s fine and you’ve done enough. It’s also important that we create some kind of transcript or data or visible record that the entire world can critically examine. Our students and their families, as well as our publics in general, are owed that.

We shouldn’t be too sensitive about assessment. And we shouldn’t be against it simply because it’s more work, though it’s not unreasonable to actually subtract or remove some other part of the labor of teaching a course to compensate for producing assessment data. If it’s important, then it’s worth doing as something other than a freebie add-on to existing work.

I’m not against assessment in general. I’m against assessment as a diversionary tactic for government agencies trying to keep people from looking too closely at the failures of government. I’m against assessment as an unaccountable practice imposed upon professionals, a practice that actively contradicts what those professionals know about their own working conditions and practices and that cuts corners by using cookie-cutter bureaucratic procedures that treat all teaching institutions as if they’re doing the same thing under the same conditions. I’m against assessment when it trespasses against what my colleagues Barry Schwartz and Kenneth Sharpe describe as forms of professional and experiential “practical wisdom”.

I’m against assessment when it’s measuring the wrong things in the wrong ways. I’m against it when it’s about providing one organization the product they need in order to give another organization what it needs so that the third organization can please a fourth organization, all up and down the food chain. If that’s how meritocracies ensure their version of a full employment program, I’d just as soon have giant, clumsy, inflexible socialist bureaucracies instead, because at least more people get paid off a little bit that way.

In a recent discussion, one of my colleagues wearily suggested that we just render unto Caesar what is Caesar’s, do whatever our accreditors want so that they go away and let us get back to doing good work. In response one of my other colleagues said, “As long as you’re doing what Caesar wants, why not make it useful for you too?” My typically confusing attempt to play the metaphor further in response was this: “Convincing yourself that what Caesar wants is good for you too is pretty bad if you’re a barbarian beyond the Roman frontier.” What Caesar wants in this sense is a “civilizing process”. If you have another way of doing things that you feel is better for you, for your culture, for your world, then making Caesar’s way your own is the beginning of the end.

Especially when what Caesar wants isn’t even good for Caesar.

And that’s where How College Works kicks in. Daniel Chambliss and Christopher Takacs have some explicit things to say about assessment. What they say explicitly is characteristically polite, measured, and backed up by detailed research. The most direct commentary comes in Chapter Eight, “Lessons Learned”. They first argue that some of the worst wastes of energy and resources at colleges and universities involve futile attempts to “microengineer human behavior” in strategic plans and other kinds of initiatives (here echoing Schwartz and Sharpe) and too much pursuit of “pedagogical innovation” (ok, that one leaves a bit of a mark on me personally). But they then proceed to note that after eleven years of close study of all of the major styles of educational assessment, they “came away skeptical of the entire assessment enterprise”.

Why?

1) Because assessment regularly works with the wrong units of analysis. Courses, teachers, programs and departments are the wrong units. Individual students are the right unit.

2) Because what you need to assess or understand is how students “experience your institution”. They add “Don’t assume that you know what matters.”

3) “Be open to all outcomes”. Meaning that specifying a set of learning outcomes on a syllabus and then measuring those outcomes completely misses the point when it comes to understanding what is and is not working with education.

4) Because assessment practices create far more data–and far more work in chasing the data–than they need. Because assessment practices end up interfering with the work faculty are already doing to no good end. (I’ll add something to that: and because people trying to enforce assessment practices often don’t believe faculty when they say so.)

But I think there’s more said in the book that applies to assessment. Chambliss and Takacs argue throughout that a course or a semester or even several years of a matriculant’s experiences are not the right time frame for understanding what works and doesn’t work about an education. At the end of a semester, for example, students often don’t really know yet what they’ve gotten from a particular course.

The authors observe that the vagueness of many liberal arts programs about how students derive the benefits of that education is empirically warranted. Meaning that trying to break down each element of that education into measurable, atomistic units, via rubrics and standards and lists, and then tinker one-by-one with those atomized elements, is missing the forest for the trees. It turns out, if you accept their research, that students get better at writing and speaking and thinking and understanding via the simultaneous, synergistic interaction among all of those activities, both in courses and outside of them. That they learn by watching others, by observing models (especially professors), by experimenting with their scholarly and personal personas in a safe environment. That efficacy in educating involves trying to nurture and support the richness and complexity of a purposeful, focused life.

I come away from How College Works thinking that the upshot of their argument, resting on empirically driven, carefully designed research, is basically what Geoffrey Rush’s character says repeatedly in Shakespeare in Love: that theater is naturally beset by “insurmountable obstacles on the road to disaster” but that in the end all turns out well. Why? he is asked (at first by a hostile investor who reminds me very much of an accreditor from Middle States). “I don’t know”, he says, “It’s a mystery.”

My dream is that some day accreditors and federal bureaucrats and parents and publics will learn to take that insight seriously. It’s not obfuscation or defensiveness. It’s the truth. Not a mystery beyond understanding, but a mystery in that the coming together of an education is about a great many things working together simultaneously, none of which are properly understood or measured or changed when they’re treated in isolation from one another. It’s about process and flow, not product.


7 Responses to How College Works: Assessment

  1. David Barnes says:

    Tim, this is both cathartic (for those of us who are continually frustrated by assessment) and profound. Your third-to-last paragraph captures in a pithy way something I’ve been trying to communicate for a long time about the irreducibility of effective teaching. Thank you!
    David

  2. Paul Harvey says:

    David, you beat me to it. So — what David said.

  3. Tony Grafton says:

    What Tim and David said.

  4. Western Dave says:

    Since I made the switch from college to high school teaching, I’ve become more self-aware of my own assessment practices and the feedback I’m giving my students. To the extent that I can have explicit skills goals and articulate them to my students, they and I both benefit. But I am also keenly aware of the intangibles (and as a faculty at a high school we talk about these aspects a lot too). Which of our students are socially at risk? How open to parent communication are we? And so on. These factors are extremely hard to measure or replicate across teachers. Parent contacts, for example, are not well measured by the number of emails or phone calls, nor do parent satisfaction surveys tell the whole story. Further, attempts to chase ever higher marks in parent satisfaction by pleasing individual parents sometimes led to decisions that created an overall worse climate.

    Just anecdotally, any teacher or professor who, when asked about what students learn in their course, responds primarily with intangibles isn’t doing a good job at any aspect of their teaching. At the same time, teachers who can point to lots of different outcomes tend to rock the intangibles as well.

  5. Timothy Burke says:

    This is a good point, David. I did point out to one colleague the other day, who is an even harder case than I am about the “intangibles” point (to the point that he insists he could and must never, under any circumstances, describe what a student might concretely ‘do’ with the learning they’ve done in his courses, because this would require breaking up the ineffability of what he’s doing), that if he hands back a paper with a “C” on it and the student asks what he needs to do to improve, it would be awful to just reply “Well, it’s all immersive and synergistic, just do everything a bit better”. We can all break down our grading and our advice into something like a rubric when it’s useful to do so, or at least we ought to.

    I think the problem is that assessment agencies’ rubrics are trying to create forms of standardization for the sake of quantitative measurement, not because that actually creates a better, richer understanding of what works and doesn’t work in teaching. That’s why Chambliss and Takacs say that individual students and the total range of outcomes associated with them are the right unit: so that you can appreciate that the student who maybe didn’t master “US history 1865-2014” in its content but did somehow become a more effective writer and a better critical thinker through that course has had a good outcome. When we standardize our outcomes and see every outcome that doesn’t match the standard as a failure, we’re likely missing that there are many other good (and maybe some not-good) outcomes of our teaching practices, some of which we don’t even know about.

  6. Western Dave says:

    I completely agree with you, Tim. It’s one of the reasons I work in a private K-12 school. I don’t have to deal with a lot of the nonsense public school teachers have to do to show effectiveness or whatever it is people are trying to measure these days. Interestingly, our recent reaccreditation process was far simpler than the last time we went through it, about 10 years ago. Not sure why. At least some of it has to do with the curriculum mapping that we now do that we didn’t before, although there’s a real question as to whether the benefits of that process outweigh the time costs. In general, I think colleges do need to think more about teaching because, as you point out, other people are already thinking about it and therefore framing the discussion for college professors.

  7. Elizabeth Stevens says:

    Eloquent and spot on!
