A bit more on grading. One of the responses to the conversation about grading over at Megan McArdle’s blog was from David Walser, who was frustrated with the idea of flexible or situational grading because he claims that he uses grades as a tool in hiring graduates, or that he would like to do so but can’t because too many professors grade flexibly or situationally. (As an aside: a lot of the commenters there pretty much just look at a post title and then start chewing up the scenery based on whatever prior disposition they have about a given topic. Relax and read, people.)
I want to engage that complaint a bit further. First off, surely any employer recognizes that higher education in the United States presents a massive problem of comparability of standards and information about graduates, one that a consistent approach to grading within a given institution would not in and of itself resolve. Walser would have to know how to compare grades from Harvard, Swarthmore, Bates, the University of Michigan, Bob Jones University, DuPage Community College and a thousand other institutions for grades to serve as the rigorous instruments he wants them to be.
This is much harder than it seems, and not merely because of endless debates about how (or whether) to measure quality and excellence across institutions. You’d also have to know the precise comparability of individual curricular programs. Is history as it’s studied at Swarthmore the same thing as history as it’s studied at the University of New Mexico? It’s not: the class sizes and composition are different, the range of subjects is different, the structure of the majors is different. The classes themselves are different: you can’t actually take a class centrally focused on African history at UNM, but there are many courses you can take there that you can’t take at Swarthmore.
Thus the question arises: why does Walser want to know precisely how to compare two history majors from two different institutions, to feel assured that the A in my course is the same as the A given to a history major at UNM? If he wants a fixed standard that would hold between the two institutions, there is really only one possibility: that we each offer a test that measures concrete knowledge in a specific area of competency (let’s say world history or comparative history or Atlantic history, which I occasionally teach and which is taught at UNM). Unless he is looking to hire into a field where that specific competency is a requirement, what’s the relevance of a highly comparable, objective standard to him as an employer? If he is looking for that competency, then I suggest he has other ways to measure it besides the grade in a course.
What am I grading, most of the time? Most of the time, I’m grading writing first and class participation or contribution second. On exams, I’m also grading basic knowledge of the subject matter (usually through identification questions) where skill in writing doesn’t matter as long as I can understand the answer and it contains the information I am looking for. When I’m grading writing, I’m assessing skill in persuasion, sometimes skill in research, skill in expression, skill in the ability to use information from the course. (Not merely whether the student knows that information, but whether they can do something with it.) It could be that Walser wants to know what I’m claiming about a student’s excellence or adequacy when it comes to written and verbal expression, and wants to be able to compare that claim to every other grade that every other candidate has.
There cannot possibly be that kind of objective standard for evaluating writing in the humanities or the social sciences. There is a good deal of consensus among professors about the general attributes of excellent, adequate and inadequate writing, consensus not just within a given institution like Swarthmore but across institutions. That said, there are necessary limits to that consistency. Excellent expository writing in one context may be weak in another. In a single semester, I cannot teach students to write well in every genre: research papers, short response essays, letters to friends and family, memos to bosses or team members, short journal entries and so on, each a kind of writing that has its own kind of excellence (and failure). Nor can I evaluate the writing I do assign against a single benchmark of absolute success or failure. I have benchmarks, rough standards, goals, but these move and adjust. They have to.
And here again, I’m asking: what do you need to know about a potential applicant that you’re looking for the grade to tell you? That the student is a competent writer, or an exceptional one? Presumably different kinds of employment have different requirements in that regard. In some jobs, I think competence is all you need; in others, much more. If Megan’s commenter is an editor at a newspaper, then skill at expository writing is obviously crucial. If he’s looking for a sales manager, there are other skills he needs to know about, some of which are measured very poorly by studying world history. Whatever his needs as an employer, he’s asking too much of one grade or even many grades if he thinks a single letter will contain all that information, stamped and guaranteed in a final, graven form.
If you’re looking at a transcript, and you know a bit about the quality of the institution from which it comes, you’ll have a ballpark sense of a student’s quality of mind. If you look at what they studied, you may have an approximate sense of what they might know and what skills they might have. If you have more specific requirements, however, you’ll need more information than grades and course titles could ever conceivably provide, no matter how consistent educators tried to be. That’s why you ask for letters of reference. That’s why you ask for writing samples. That’s why you look for the things a candidate has done above and beyond their courses. That’s why you interview candidates.
If you as an employer really feel that higher education should provide more information about the quality of its graduates, don’t demand that we enforce absolute and rigid standards for grades. Instead, ask us to go in the opposite direction, closer to what Hampshire College does: provide a written assessment of a student’s performance in each course, along with a description of the specific competencies against which that performance was measured. Now I doubt that personnel directors at large organizations want to read forty or so evaluations of this kind for each and every candidate who applies for a job, but if high-value information is what you crave, that’s really what you should be asking professors for.