I’m going to try to break down the coming Spellings Commission report in detail when the official final version is out, but by way of preparation, I’m interested in thinking creatively about the problem of assessment.
I agree that we really don’t think very rigorously, at all levels of higher education, about outcomes. Individual faculty may think a great deal about whether what they’re doing in the classroom “works”, but institutional conversations that make very divergent visions of outcomes mutually transparent are far less common.
Here’s what I’m worried about, though. The more pressure there is to create universal standards for outcomes, the more likely we are to see an invasion of experts peddling various tests, standards, metrics and regimes of documentation. I can only say that almost every bureaucratic system for measuring outcomes I’ve ever dealt with strikes me as nearly valueless. Such systems typically create a lot of work for individuals without creating useful, context-sensitive data that actually helps achieve some clearly defined goal.
Such systems also work to the advantage of small-minded power-hungry people who relish the opportunity to seize petty authorities over others, as well as create incentives for various kinds of data manipulation. Mandatory testing in public K-12 schools has led to all sorts of bad institutional behavior that is the very opposite of what such testing was meant to encourage. Another example: most academics have heard stories about various forms of data manipulation used to skew US News and World Report rankings.
So here’s what I’m thinking about: how could higher education be more sensitive to the question of outcomes in a way that would still be satisfyingly qualitative? How could you get a higher confidence about the difference between what your students know at the start of a semester and at the end of the semester, especially if you believe that part of what you’re teaching is “critical thought”? What are the instruments being used now that could be refined, improved or extended?
Well, most of you will strongly disagree–but I’d like to see something like the old French system (vestiges of which remain in France today). I would like institutions to say, “Here are our tests for what you must know,” and make those publicly available. That way anyone could take the “degree in History from Swarthmore” exam and, if they passed it, have a degree in history from Swarthmore.
IIRC, Swarthmore’s Honors program uses outside graders (that is, the professors who write and grade the exams are not the ones who teach the course)? That seems like a really elegant solution to the grade inflation problem — if the exam is beyond the control of the lecturer, then students who want good grades will have a strong positive incentive to seek out the most rigorous and comprehensive lecturers. But that’s theory — can you say anything about the practice, Timothy?
The only solution is to reject the assessment regime tout court. It serves only to distract (at best) or police (at worst) educators. The only assessment that’s really worthwhile is if students graduate and feel they have benefitted — education as corrupted by the assessment regime militates against that, turning education into a series of hoops to jump through. Tests and papers always were hoops to a certain extent, but they had a plausible claim to be means to an intrinsically worthwhile end — assessment makes the hoops into ends in themselves.
So: no assessment, period. Sorry.
(Or just the bare minimum of jumping through hoops so the institution stays accredited.)
Well, obviously I’m in favor of keeping the traditional grades, tests, papers, etc. I mean none of this bureaucratic “accountability” nonsense that is imposed on institutions from the outside.
I’m against an external measurement regime, yeah. But I don’t think it would be a bad thing to talk more clearly within institutions about outcomes, about what exactly it is that we think we’re doing and how exactly we think it is that whatever it is that we’re doing happens and why.
The Honors exams at Swarthmore are an interesting thing, but probably not a good model. They’re more a local cultural peculiarity (well, an anglophilic idea as well, so not entirely local) than something that could be broadly emulated. I think you could only believe they would be a good model for measuring outcomes in general as long as you believed in a single, clear canon.
Which, come to think of it, is a major concern I have about the people most enthusiastic for strongly imposed accountability standards: they tend to be people who not only think there is a single clear canon in most scholarly fields, but don’t even think they have to make an argument on behalf of that canon. They assume that those in the know are clear about what it is, and that everyone else is some sort of naughty postmodern relativist.
This is very discipline-specific – some of us in science have our majors approved by our professional organizations, so there is some external control over what we do within our educational process.
This idea that external measurement regimes are bad is interesting as it seems to me that we already have them. The GRE subject exams, for example, define a core of information that students need to know before they enter graduate school. Likewise, our department has an advisory board that is composed of employers who give us feedback on how our students are doing in the “real world” of employment. We rely on these indicators heavily to plan curriculum, so that we meet the needs of our students and those who employ them.
Assessment is also a way of being accountable for what happens in your classroom, and it opens the door to creating a real, integrated curriculum. It forces everyone to talk about what they expect students to achieve in their class and how that fits into the student’s whole development through their major. This is a worthwhile goal, and if we don’t find a way to articulate our own definition of what students should know, we will essentially be inviting others to come in and define it for us. Perhaps we deserve what we get (in terms of external review) if we are not voluntarily proactive about assessment.
I use standardized tests heavily in reviewing my own program but I have the advantage of training a group in a highly technical area where my professional organization has defined a core set of competencies based on work practice surveys. My students also take state and national licensing exams. I know I’m doing a good job with the curriculum when my students score well on their exams. But the fact that people are fighting to employ my grads helps me feel confident that I’m succeeding in the areas the exams don’t test – the kinesthetic and affective domains. I wonder if there is anything similar for the social sciences and humanities.
Why not make the writing your students do, and your comments on it (though not the grades), public, perhaps with their names attached, perhaps not? This would allow us to see how well they wrote and thought at the start of the semester and how well they do at the end. It wouldn’t be immediately quantitative, but it would be a start.
Assessment without sunshine is difficult.