I was a part of an interesting conversation about assessment this week. I left the discussion thinking that we had in fact become more systematically self-examining in the last decade in a good way. If accrediting agencies want to take some credit for that shift, then let them. Complacency is indeed a danger, and all the more so when you have a lot of other reasons to feel confident or successful.
I did keep mulling over one theme in the discussion. A colleague argued that we “have been, are and ought to be” committed to teaching a kind of standardized mode of analytic writing, and that therefore we have a reason to rigorously measure across the board whether our students are meeting that goal. Other forms of expression or modes of writing, he argued, might be gaining currency in the world, but they shouldn’t perturb our own commitment to a more traditional approach.
I suppose I’m just as committed to teaching that kind of writing as my colleague, for the same reasons: it has a lot of continuing utility in a wide variety of contexts and situations, and it reinforces other less tangible habits of thought and reflection.
And yet, on further reflection, I found myself unsettled about one key point: that it was safe to assume that we “are and ought to be” committed. It seems to me that there is a danger in treating learning goals as settled when they’re not settled, just as there is a danger in treating any given mix of disciplines, departments and specializations at a college or university as something whose general stability is and ought to be assured. Even if it is probable that such commitments will not change, we should always act as if they might change at any moment, as if we have to renew the case for them every morning. Not just for others, but for ourselves.
Here’s why:
1) Even if a goal like “teaching standard analytic writing” is absolutely a bedrock consensus value among faculty and administration, the existence of that consensus might not be known to the next generation of incoming students, and a practice that is familiar to faculty may be defined in ways that are unfamiliar to those students. When we treat some feature of an academic environment as settled or established, there almost doesn’t seem to be any reason to make it explicit, or to define its specifics, and so if students don’t know it, they’ll be continuously baffled by being held accountable to it. This is one of the ways that cultural capital acts to reproduce social status (or to exclude some from its reproduction): when a value that ought to be disembedded from its environment and described and justified is instead treated as an axiom.
2) Even if something like “teaching analytic writing” is absolutely a bedrock consensus value among faculty, if some in a new generation of students consciously dissent from that priority and believe there is some other learning goal or mode of expression which is preferable to it, then faculty will never learn to persuade those students, and will have to rely on a brute-force model to compel students to comply. Sometimes that works in the same way that pulling a child away from a hot stove works: it kicks the can down the road to that moment when those students will recognize for themselves the wisdom of the requirement. But sometimes that strategy puts the goal itself at risk by exposing the degree to which faculty themselves no longer have a deeply felt or well-developed understanding of the value of the requirement they are forcing on their students.
3) Which leads to another point: what if the supposed consensus value is not a bedrock consensus value even among faculty? If you assume it is, rather than treat the requirement as something that needs constantly renewed investigation, you’ll never really know if an assumed consensus is eroding. Junior and contingent faculty may say they believe in it but really don’t, which contributes to a moral crisis in the profession, where the power of seniority is used to demand what ought to be earned. Maybe some faculty will say they believe in a particular requirement but don’t actually do it well themselves. That’s corrosive too. Maybe some faculty say they believe in it, but what they think “it” is is not what other people think it is. You’ll never know unless the requirement or value is always being revisited.
4) Maybe there is genuine value-based disagreement or discord within the faculty that needs to be heard, and the assumption of stability is just riding roughshod over that disagreement. That’s a recipe for a serious schism at some point, perhaps at precisely the wrong moment for everyone on all sides of that kind of debate.
5) Maybe the requirement or value is a bedrock consensus value among faculty but it absolutely shouldn’t be: maybe the real argument about that requirement is between the world as a whole and a local consensus within academia. Maybe everything we think about the value we uphold is false, based on self-referring or self-validating criteria. At the very least, one should defy the world knowingly, if one wants to defy the world effectively.
I know it seems scary to encourage this kind of sense of contingency in everything we do in a time when there are many interests in the world that wish us ill. But this is the part of assessment that makes the most sense to me: not measuring whether what we do is working as intended (though that matters, too) but asking every day in a fresh way whether we’re sure of what we intend.
So this stands as a surely unintentional response to my comment on your last post…
I don’t think there is any reason why what you have written here and what I wrote there are in conflict. I think that one can reject the notion that there is a crisis or a severe lack that necessitates a radical rethink of pedagogical goals and methods while accepting that a professor/department/school can and should be continually reassessing such things. I would argue (and presume that you would agree) that continual reassessment would take away the need for radical shifts in the future. I also think this needs to happen through faculty and not administration. Though I am skeptical that it would get the disrupters off our backs, since I am not convinced they care much about education and the liberal arts to begin with.
For me the sea change along these lines has been to move from the ‘make a point’ model of research and analysis I was taught, to a ‘figure it out’ model. This has been gradual and influenced by the things I’ve tried to learn from feminist, postcolonial, and poststructural critiques of the various centrisms, in conjunction with trying to learn the lessons of complexity, non-linearity, feedbacks, fractals, and so on. So I now encourage students to walk around a topic and pay attention to it in a variety of dimensions and in its relations and connections, before / rather than drawing conclusions. It’s made a big difference, one I would be reluctant and probably unable to give up if the traditional model were being enforced.