More on Going to Graduate School

Notes for further revisions and additions to my old essay about whether graduate school is a good idea. Thanks to Paul Musgrave for helping me to think through some of these points, some of which involve the academic job market and others of which involve graduate school itself.

1) When I first wrote my essay and posted it on my old hand-written HTML blog in 2003, I was newly tenured and very inspired by conversations at the Invisible Adjunct’s blog. By that point, I’d only advised a relative handful of Swarthmore students about their plans to go into academia (looking back, it seems that most of the folks who chose to go then have had good outcomes from that choice).

Now I’ve advised a lot more students and I’ve developed a deep frustration over my inability to reconcile two imperatives that govern my advice. Once I’ve delivered the basic warnings contained in my old essay and some of the cautions about how troubled academia is as a career, I move on to talking to students about more specific questions about what fields they might study, where they might apply, and so on. Sometimes I flat out tell a student that it’s a bad idea for them to go to graduate school, at least at this point in their lives. I’m usually comfortable if not happy about doing that. If I think there are no red flags and it’s possible that the student might find graduate school relatively congenial and have a decent shot at winning a good prize in the tournament, then we get down to brass tacks.

Here I hit the contradiction. Students who already have a commanding, mature sense of their academic interests and inclinations often are already thinking past or against disciplinary boundaries. Some of my smartest students with the best skills as writers and speakers are often practicing intellectuals in the best sense, combining ideas and methodologies from multiple disciplines. They’ve lived up to the ideal of a liberal arts education. I want them to continue being that way.

But now I have to advise them about what programs to apply to. Many of them are dissatisfied with disciplinary exclusivity. I hear from students who want to do ethnographic research but are also interested in studying the past, students who are interested in integrating theory from economics with qualitative sociology, students who want to do political philosophy but not as the disciplines of philosophy or political science practice it, students who want to combine film or media theory with some kind of media production but who don’t want to do an MFA or go to art school. You name it, there are interesting, well-reasoned combinations that I hear proposed, many of them founded in inspirational coursework students have done while at Swarthmore.

So I have to decide: am I going to carry water for narrower, more constrained, more territorial practices of disciplinarity, governed and enforced by the elite R1 institutions that have the Ph.D. programs and cultural capital my students aspire to? To be responsible, I should do so: I need to tell my students that they’ve got to make some constrained choices, give up some of their interesting ideas, conform. I should do that for pragmatic reasons, because this is what the next few years are going to be like (bowing the knee to the most controlling or authoritarian presences in their graduate program). I should do so maybe even because there’s a legitimate case to be made for getting a strong command of one disciplinary tradition under your belt before you mess around with several. (I’m less convinced of this now than I have been in the past, but it’s at least a legitimate discussion.) At the same time, I almost want to tell the students with the most creative and confident vision of their intellectual practice to just not go to graduate school, or to do graduate work in the one freaky program out there that would welcome their ideas, even if it means they’ll be completely unemployable on the other end.

I know that some people will object and say that even the most odd-duck graduate program can find a place for its students. But honestly, I have been on the other side in way too many grant competitions, job searches, panel selections and so on. In a tournament economy with hundreds of highly qualified competitors, just one thing that irks one judge or evaluator is enough to knock you out of the race. If it’s a consistent thing, e.g., something that rubs up against an orthodox way of defining a field or discipline, it’ll knock you out of most races. It’s nearly impossible to convince most historians to hire someone whose degree is in anthropology and vice-versa. When the unlikely happens and someone from one discipline manages to infiltrate the redoubt of another, they’d better be happily oblivious to sniping and negativity, because I guarantee you they’ll have a constant background buzz trailing them wherever they go.

So as an advisor, do I carry water for a way of organizing the administrative and intellectual work of academic institutions? It’s the responsible thing for me to do, and offers much lower risks. But that’s the first step on the path to a lifetime of taking few risks in a career that offers protections that are intended to incentivize risk.

2) I’ve written before about how difficult it is to come up with intentional practices that help undergraduates acquire cultural capital, which I think is far more important to social mobility than the content of the curriculum. For first-generation college students or students who have little familiarity with the hidden codes and assumptions of an elite liberal-arts institution, making it all transparent is absolutely critical. A student who loves literary criticism with a transforming passion but who has no idea how tenure works, where money comes from in a university, how scholars actually publish, what the big picture of disciplinarity is like, which famous literary critic is actually a notorious asshole, and so on, is heading for trouble at the exits if they decide to go to graduate school. This was such a classic pattern in the conversations at the Invisible Adjunct: people who wandered into the discussion still full of passion for some period of history, for some theoretical approach to social analysis, for poetry and fiction, or for the general idea of being an intellectual, and yet so full of confusion and alienation about how little their graduate work seemed to resemble their romantic conception of what it should have been like. (Those heartfelt expressions have since been appropriated and bowdlerized in one strand of conservative ressentiment: check out Michael Bérubé’s latest at Crooked Timber for a particularly eye-rolling example.)

I feel this every time I meet a student who tells me he wants to go study with a famous scholar whose work the student has found inspirational and I gently have to tell the student that the scholar is dead, retired, doesn’t have graduate students, or is widely known as a monster to almost everyone in the field. Ok, sometimes you just don’t know these things. Sometimes I don’t know them anymore either, because if you don’t train graduate students, you miss out on some of the cutting-edge gossip in various fields. But like as not, that first statement is going to be followed by other ones that tell me that no one has made anything transparent to the student until this very moment. Where I have to explain, for example, that a doctorate in political science is generally not seen as a great first step for a person whose main career objective is to run for elected office. That no one does simultaneous doctorates in microbiology, cultural anthropology and computer science except for characters in comic books. That you should expect to receive a stipend and a tuition waiver if you’re admitted to a doctoral program, and that if you don’t, it’s a sign that they don’t really want you. That you don’t need to do a terminal MA first in one program as preparation for doctoral study. That there are no merit grants which fund more than a teeny tiny proportion of graduate work unless you fit some rare demographic blessed by an eccentric philanthropist, like being the child of an Orthodox Jew and a Quaker who would like to study medical anthropology in Patagonia.

And so on. All these little rules, ways of being, figures of speech. Most of them not at all defensible or rational, just the markers of a particular social habitus, of hierarchy. I can tell you which future graduate students generally already have the keys to the kingdom before they even start: the children of academics. Just about everyone else is likely to lack some crucial bit of insider knowledge that is important to flourishing. What makes this especially difficult is that so much of academic work both in graduate school and afterwards is inexplicit. I’m sure there are programs which are exceptions, but most of us were not trained to write (or interpret) peer reviews, letters of recommendation, grant applications, dossiers, paper presentations, and so on. You figure it out by watching others, but if you have the bad luck to happen on the wrong template or to guess wrong, you’re heading at the very least for humiliation, at worst for self-immolation.

So I struggle here too. Academic institutions endorse faculty diversity, but the conversation about diversity usually boils down to fixed identitarian formulas, to improving the percentage of recognized groups, not to diversifying the kinds of experience (and passions) that professionals can bring to intellectual work. I feel intuitively that the generation of faculty just ahead of me, people from their late 50s to 70s, is more diverse in this sense if not racially so. I know considerably more first-generation scholars whose passionate connection to intellectual work got them into academia in that generation than in any younger cohort. The question is whether I should encourage someone who I think hasn’t been exposed to all the insider rules and codes to go on to graduate work. There’s no way I can make up for all that in one conversation or even several. The best I can do is tell someone bluntly that they’re going to be at a disadvantage and that they’ve got to do their best to break the code every chance they get. At the very least, you owe it to applicants to tell them about this problem.

3) I mentioned this above, but let me mention it again. With rare exceptions, no Ph.D. program that is primarily or exclusively aimed at an academic career is worth pursuing if the applicant is not given a tuition waiver upon admission. Probably it’s not worth it if you don’t get some kind of stipend or support. (I’ll add by way of disclosure that I was not funded for my first year, but got funded by my second, though I had a waiver from the beginning. In retrospect, I should have gone with the offer where I was funded from the beginning, which might have been a better place for me to be in other ways.) Do NOT go into debt for a Ph.D. program that doesn’t have other well-paying career outcomes beyond academia. It is very easy to justify going into debt out of hope or even desperation, but this is some crushing stuff to overcome later on. People who tell me that it’s worth it to them because they love what they’re going to study so much, well, seriously, with an $80,000 credit card debt, you could buy a lot of books, pay for broadband, and live in a decent apartment in a city where there are lots of free events with intellectual heft to them and maybe even find a decent job. That’s a better option both for consummating your love of intellectual work and for developing a career and life, really. A graduate program aimed at an academic career should admit you with no tuition obligation and support you with a stipend, because in the end you’re going to save them money by being a cheap teacher.

4) One thing I’ve heard over the years (most recently from several people replying to my last post) is that graduate work has a way of pulling you out of your existing peer network and making your life feel very deferred or de-synchronized. Certainly one thing that I absolutely tell potential applicants is that by seeking an academic career, they need to give up on the idea of living in a particular part of the country that they prefer. There are many wonderful places where you could choose to live as a lawyer, doctor, psychologist, accountant, information technologist, etcetera, that you simply can’t choose to live in as a professor, because there are no universities or colleges in those places, or maybe just one. Your pre-academic friends, on the other hand, may be making all sorts of choices like that, not just about where to live but which will-o’-the-wisp to chase. This is ok if one of your reasons for choosing an academic career is stability and predictability. But I talk to some students interested in graduate school whose self-image is that they are risk-takers, that they like change and dynamism, that they like the idea of being a professor but only if it’s being a professor IN BERKELEY or NEW YORK. This is going to cause trouble sooner or later. Relationships, life aspirations, a wide or diverse emotional range, are all structurally harder to work out if you’re a person chasing a tenure-track academic post via completion of a doctorate. This is something an applicant has got to understand in advance.


8 Responses to More on Going to Graduate School

  1. G. Weaire says:

    Just to add to the last bit about needing to be willing to live anywhere – I try to make sure that students understand that the realities of the job market are such that they also need to be comfortable with the idea of ending up anywhere in the range of possible academic positions (irrespective of geography).

    (At the same time, of course, a prospective graduate student wants to have in their arsenal the ability to come across as the kind of person who will only be happy at an elite research institution. But students should know that it is risky actually to be that person.)

    Many are unclear on what the range of possibilities is – they generalize from what their instructors do, or rather appear to do, since a lot of what we do is not visible in the classroom. It needs to be explained.

    As you note, relationships are a problem. In particular, any student who believes that s/he has already met the person that they’re going to marry really needs to understand how much s/he is going to be asking of the other person. And even if they haven’t met someone: students need to be told about how efficient a matchmaker grad school is, and how effective an engine of divorce the subsequent trajectory of an academic career is.

    I worry about this a lot, because I’m one of the lucky ones – I met my wife in graduate school, but she has a job at the same institution as I do. I worry that my students may not always grasp how lucky that makes me, and how many people I know who were not so lucky.

    On the other hand, I’m less worried about students having their particular intellectual quirks brutally crushed. This isn’t just because I’m much more favorable towards disciplinary traditions.

    It’s also because I have quite the opposite criticism of American graduate education: that it’s not sufficiently open to the possibility that a person may not yet quite know what they want to do. The “apply to X because it’s a good place to do Y” principle is fairly dubious. There’s something wrong if, after several years of focused study in a discipline, a person isn’t interested in aspects of it that they hadn’t realized would interest them – aspects that they didn’t know existed – when they were 21. Certainly, no one should be too sure at that age that they know for a fact the sort of intellectual work that they want to be doing for the rest of their lives.

    Early specialization makes some sense in the British and Irish system, but an American student (a) has had a more or less general undergraduate education and (b) is going to spend years on coursework before they get anywhere near writing their dissertation. There are possibilities there that the current variant of the American model doesn’t exploit as fully as it might. There are obvious reasons why things are the way they are, but I’m not sure that it’s intellectually healthy, and I’d like to see a little pushback against it.

  2. Tim Lacy says:

    Tim,

    Both this and your last post contain some great advice. I wish I’d heard it before starting my graduate program. With that, another thought: some (like me) enter, or seek entry into, their program of choice after a lot of thought. At that point, contrary advice feels, well, simply contrarian. If the advice is not delivered in a manner that exudes empathy, the listener is apt to view the giver’s advice as obstructionist. With that in mind, I wonder whether every humanities graduate program shouldn’t require a one-year waiting period before entry. That period would contain two mandatory meetings, spaced adequately apart. The first meeting would hold forth advice like what you’ve presented here, delivered by a faculty member known for her/his ability to empathize. The second meeting, held six months later, would require a test on that advice plus an essay explaining the supplicant’s intentions.

    After these two steps—then and only then, with time for “cooling and consideration” given—the student could be given an interview to enter the program.

    And another thought: Every first semester, or two semesters, in a graduate humanities program should be probationary. No one—ever—should be given direct admission into a PhD program without first having a mandatory probationary period.

    I know all of this is utopian in the American higher education scene, where revenues are driven more and more by tuition and enrollment. But the humanities are too important to be held to regular market rules. Plus, all of this is a kind of Elizabeth Warren-inspired consumer protection program for supplicant graduate students.

    – Tim Lacy

  3. Waldemar Gute says:

    You’ve provided a great service by describing aspects of grad school that applicants seldom consider, or even know exist. Far too many people simply do not realize how costly graduate school is in terms of time, money, and opportunity. In a sense, committing yourself to an academic discipline at the graduate level is like jumping into a funnel with a wide mouth but a very narrow end. By the time you have that Ph.D. in hand, your options are fewer than when you started.

    You may be interested in the 100 reasons NOT to go to graduate school:
    http://100rsns.blogspot.com/

  4. john theibault says:

    The problem you identify about how to advise students interested in testing disciplinary boundaries between e.g. anthropology and history applies equally to efforts to mainstream digital methods prompted by Tenured Radical’s post a week or so ago. I have been inclined to argue that historians interested in digital work would be best served by going to a higher profile history program where there might be digital opportunities (such as Virginia or Stanford) over programs that have lesser overall profiles but have bought into digital work in conspicuous ways (such as George Mason or Nebraska). But I’ve certainly encountered a number of history grad students doing digital work in departments where none of the senior faculty seem to have any experience in it (e.g. Princeton). And I wonder if that DIY spirit isn’t now widespread enough that there is no need for “bowing the knee to the most controlling or authoritarian presences.”

    The characteristics you describe of your best “public intellectual” students are, I think, the ones that have always led to students from elite small liberal arts colleges being overrepresented in graduate programs. Obviously, I don’t know how many of your favorite quirky students have had their enthusiasm and ingenuity crushed by grad school experiences, but my perspective from the time I taught in a second tier graduate program is that pushing at disciplinary boundaries and moving into new fields was very positively correlated with success in the job market.

    I do strongly endorse two of your points here: 1) only go to grad school if you get financial support from the program and 2) you have to be prepared to move just about anywhere — there are lots of colleges in the middle of nowhere, though many employ only a handful of historians.

  5. Leslie M-B says:

    Yes, yes, and yes.

    As a survivor of a Ph.D. in cultural studies, I second what you have to say about interdisciplinarity. The academic job market was really bad for me–five years and one interview for a “traditional” tenure-track job–which thank goodness resulted in a job offer in a very open-minded history department.

    I think if people had told me ten years ago how profoundly difficult it is to get a t-t job, I would still have thought my chances were good–not because I’m arrogant but because I’ve always been a high achiever in academic settings. Of course, that’s the case with most people in grad school, and I didn’t really understand that going in. (Naïveté ahoy!)

    I also like the challenge of living in places unlike the beach-adjacent, urban one where I grew up. However, if someone had sat me down and said, “Hey, your only t-t job offer is going to be 1,000 miles away from where everyone else in your family lives, and you’re going to start that job the year your first niece is born, your favorite grandmother is dying, your son is at an age where he could really understand the value of local extended family, and your parents are starting to show signs of aging,” I might have made a different choice of career.

    Ditto if someone had shown me a spreadsheet of the economic opportunity costs. I love what I do, but the financial end of it has dealt more than one painful blow to my own little family.

  6. Jerry White says:

    I can see what you mean about stretching disciplinary boundaries and all that, but I think one of the reasons that a history department might be reluctant to hire an anthropology PhD, or vice versa, is a simple matter of teaching. Would that anthro PhD be able to teach World History, or US history, or any 100-level intro course? Or vice versa? That’s not impossible, but my bet is that it’s pretty uncommon. I think that candidates need to make a strong case that something in their formation enables them to teach at the lowest, and thus most general, levels of undergraduate education. Otherwise it seems like you are removing yourself from the heavy lifting of teaching because you’re just so committed to following your bliss. I hear a lot of grad students talk this way, and it seems to me that they are setting themselves up for either (1) failure to get a job or (2) success in really seriously annoying their colleagues when they get a job and then spend their career pleading incompetence when it comes time to carry the water and teach the big-enrollment first-year courses. Or maybe I am missing something about History? What 100-level History course would a PhD in Anthro be likely to be able to teach thoroughly?

  7. Leslie M-B says:

    @Jerry – I like to think I teach a reputable 100-level U.S. history survey, even though I haven’t taken one since I was 15 years old. 🙂 That said, I will say that it’s been made clear to me that I won’t be expected to teach the survey much. Specifically, I was told it can be taught by adjuncts and special lecturers–whereas my “specialty” subjects (public history, digital history, museum studies, women’s history) aren’t sufficiently covered by our local adjuncts.

    It’s funny; all my mentors told me to emphasize in my job letters that I could teach–nay, would love to teach!–these lower-division courses, but now that I’m here I’ve been told my efforts are more needed at the upper-division and graduate levels, and that even though I want to teach intro courses, I shouldn’t expect to be able to do so after this fall. My teaching load this year (my second) is 2-1, and it will likely stay that way for a while, and my tenure mentoring committee is insisting I teach women’s and public history rather than surveys.

  8. Rana says:

    Some brief reactions to a number of things:

    One thing I’ve heard over the years (most recently from several people replying to my last post) is that graduate work has a way of pulling you out of your existing peer network and making your life feel very deferred or de-synchronized.

    This is even more the case if you find yourself leaving academia, either voluntarily or involuntarily. At least within an academic institution, you will find people with similarly deferred life developments; leave it, and you discover yourself out of sync with both those within and those without. (Thinking about parenthood in one’s 30s or 40s is just one example – your academic cohort are usually non-parents and often single, while other couples starting families tend to be much younger; the ones your own age already have children in college.)

    On the matter of being geographically flexible – the weird side effect of being resigned to the vagaries of the academic market when it comes to (re)locating is that you become stunted in the skills and attitudes involved in actually choosing where to live. (My spouse and I – both now unemployed academics – are confronting the weirdness of not only having to choose a target region or city for our move, but also the lack of an existing housing network or workplace to focus our attentions on. It used to be, you got the job, you found something near the college you could afford, and that was that. This lack of structure is more than a bit daunting.)

    Re: disciplinarity and whether to support it or not… one thing I’ve noted about the people I admire who do interdisciplinary work is that they almost always have a strong grounding in at least one specific discipline, and, if able to manage it, more than one. But it is rare that they are dabblers across the board. Having done a fair amount of interdisciplinary dabbling myself, I’ve come to believe that a lot of what makes such boundary crossing appealing is the way that work within disciplinary boundaries develops its own logic, its own assumptions, its own best practices (as well as the knowledge base it develops). Even when I’m analyzing literature, I’m viewing it through the eyes of a historian, and engaging with the ways that literary study has different working assumptions (even something as small as the idea that context matters can be a great chasm). Short version: I think you can’t appreciate the transgressive potential of interdisciplinary work unless you know, in your gut, what it is you are transgressing – and to have that foundation, I think you need training in disciplinary thinking. Now, I wouldn’t say this means one should go get one PhD and lumber on from there, or try to get multiple PhDs under one’s belt – I’d advise getting several master’s degrees instead – but neither do I think one should be entirely project-driven. They’re called disciplines for a reason, as you know, and I think that the experience of developing a particular flavor of intellectual rigor makes one better appreciate where others in other fields are coming from, and that their perspectives are not arbitrary but come out of a particular scholarly discourse or habitus. This isn’t to say there are no risks in being a stealth interdisciplinarian studying within a particular discipline, but I would think that a student truly interested in cross-discipline thinking wouldn’t be discouraged so easily.
