Ghosts in the Machine

Every once in a while, I’ll catch a colleague from the natural sciences playfully tweaking a humanist for the volume of prose they lavish on a routine task like writing an email or a committee report. (Or, yeah, a blog post. I am guilty, guilty, guilty.)

Humanistic scholars tend to cling to their words because words are the alpha and omega of our scholarly work. I don’t enjoy listening to a full paper being read in a monotone during a conference panel, but good humanistic research often can’t be boiled down to a poster or a few slides. Persuasion, rhetoric, and description are the research, or at least they can’t be disentangled from it. This is also why humanists rarely collaborate or co-author: voice and perspective are so much a part of carrying out and disseminating any scholarly study that the work tends to be private and individualistic, even as it circulates through disciplinary and topical communities.

I think this is why I feel such discomfort every time I read about a case of academic plagiarism that ends up revealing that a scholar was making extensive use of research assistants to acquire archival materials or data, digest or summarize that data, and prepare drafts of work about that data. I’ve made some use of interviews conducted on my behalf by assistants (or by a co-author, in the case of my work on children’s television), but that doesn’t seem radically different to me from quoting or citing interview transcripts deposited in archives by other researchers. I’d feel personally uncomfortable telling an assistant to go into an archive or library and get me everything on a particular subject or topic, but as long as I read what they brought back myself rather than relying on their summary representation of it, I suppose that could work okay in some cases. I know I wouldn’t be comfortable at all with drafts prepared by an assistant: if my name is on it, I wrote it, or co-wrote it with a peer author based on extensive collaboration and conversation between us.

There are all sorts of acceptable variations and shadings of these practices, but I think a humanist or social scientist who is churning out assembly-line monographs, articles, or reviews where the work is largely done by unnamed research assistants has crossed an important line and isn’t really doing scholarship. At that point, it’s more being a brand name than being a scholar. I’d really like to see most universities start to rein in faculty who produce a large volume of work in this fashion. Even when it doesn’t lead to plagiarism or sloppiness, it breaks an important connection between research and the individual who produced it. That connection is why we trust research in the way that we do: we know that a scholar has personal reputational capital on the line, has skin in the game. The more we accept excuses like “I didn’t check the work of my assistants sufficiently,” the more the work of all scholars becomes suspect. If you don’t clearly describe what is and is not okay when it comes to the use of research assistance, you make it possible for research factories to claim that they’re just doing what everyone else does.

The norms of co-authoring differ in the sciences (and some of the social sciences) because their work is genuinely and necessarily collaborative in a way that fundamentally differs from most humanistic scholarship. But here too it has become increasingly clear that the system is breaking down. I suspect it’s long been abused by predatory powerful figures who stick their names on work to which they are only nominally connected. The ghostwriting scandals of the last few months, which Margaret Soltan has tracked so well, are corrosive on a whole new scale.

Soltan’s most recent post on the subject concerns the case of a McGill professor who put her name on an article written by a pharmaceutical-supported organization. The important thing about the details of this case is that they show how far the routine practice of scientific co-authorship has long since slippery-sloped its way toward allowing serious abuses. Look at the official statement that the professor in question has issued. She says, “I wrote a portion of the article, but not all of it, although only my name was listed as its author.” She continues, “Other parts of that article were written with the assistance of DesignWrite, a firm which, it turns out, was employed by a pharmaceutical manufacturer to assist in the development of academic articles. I made an error in agreeing to have my name attached to that article without having it made clear that others contributed to it.”

The statement aims to exonerate her by representing the practice as a routine act of co-authorship that went awry only because the professor wasn’t diligent enough about examining all of the particulars involved. She notes that she wasn’t paid for the article and that it successfully underwent peer review.

Come on! Pull the other one! Did an article already partially written by DesignWrite fall from the sky onto her desk like the Coke bottle in The Gods Must Be Crazy, and she said, “Why, how lovely, this just happens to be a wonderful description of research I’ve already completed?” Who did she think employed DesignWrite? Santa Claus?

“It turns out” is a fantastic little phrase in that statement, and really needs to become a standard part of celebrity apologies. “A gun was located near my person, and it turns out that the gun had a bullet in it, which it turns out hit someone. I made an error in having my finger near to the trigger and allowing some of my nerve impulses to cause a muscular contraction in that approximate location at the same time that it turned out that the gun was near to that finger.”

Without getting in the way of legitimate practices, I think every university and college in this country can make a clear policy statement: if your name is on it, it’s your work. You did the research; you created the report, statement, abstract, essay, or book that disseminated that research. We can work with other researchers or writers, and we can get assistance of various kinds. But there shouldn’t be any ambiguity about working with a firm of ghostwriters paid by a business that has a financial interest in the outcomes of the research. That’s not scholarship. If a pharmaceutical company wants to publish that kind of writing, let it put the work out under its own name, let it hire in-house researchers if it likes, but don’t pretend it’s independent scholarly research with the credibility of an academic institution behind it.

This entry was posted in Academia.

2 Responses to Ghosts in the Machine

  1. jfruh says:

    A friend of mine who was briefly a grad student in chemistry (he later switched fields and is now a PhD lit academic) explained to me the varying levels of prestige attached to the various name placements in the author list of a collaborative science article. Essentially, it starts with the professor who’s done the most work on it, then goes through the other professors in descending order of contribution; then it shifts to the grad students and postdocs, and goes through their names in increasing order of contribution. Thus, for an article published by my friend’s research group when he had already more or less checked out and was applying to English programs, and for which his contribution consisted mostly of going into the lab in the wee hours of the morning and babysitting the equipment as the experiments ran, his name was dead center in the list of fifteen or so authors — the least prestigious spot, as he described it.

  2. Timothy Burke says:

    See, that seems to me to be a system which, quite aside from its abuse by ghostwriters, is crying out for some judicious reduction down to principal authors and no more, putting supporting participants into some kind of author’s note or contributions note. That, of course, would in turn require a sea change in how faculty, especially junior faculty and grad students, are credited with having done productive work. One of the reasons for the upward creep in author listings is that more and more institutions credit research productivity through the amassing of quantity, not at all through the sensitive assessment of quality. It reminds me a bit of a conversation I had with someone at a professional association about why they had such a large volume of submissions to be on conference panels, given the evident indifference of most people attending the big professional meetings (not to mention the presenters themselves). It’s because a substantial number of institutions only financially support attending a big professional meeting if you’re on the program. Bad incentive systems produce increasingly perverse results over time.
