As I read it, Fish basically moves to identify digital humanists as playing out the next move of postmodern politics and epistemology. I think that’s both right and wrong. Fish argues that most digital humanists believe in diminishing the human subject (which he labels DH’s ‘theology’) and in reconstituting the institutions which govern, imagine and interpret human subjects (which he labels DH’s ‘politics’). Fish sees these moves as prescriptive. Fair enough as a reading of Fitzpatrick’s book, which argues for major changes in academic practices.
But many digital humanists in the academy think that many of these visions of how academics should produce knowledge and participate in culture are also an accurate description of how culture and knowledge are being and have been produced in global society since at least the rise of print culture. Digital humanists are therefore not just arguing for new practices, but against persistent mythologies about established practices. If DH has a “theology”, then to some extent its theses are nailed to the wall of the old humanism, a protest against its corruptions and illusions.
To push the metaphor a bit further, this is also where DH is very much not postmodernist in any strong epistemological or political sense. DH doesn’t leave the church, it just wants to be in it in a different way. In DH, authors are not dead, just brought down to human scale. There are still individual acts of authorship, distinctive moments of creation, original imaginations in both the digital culture of the present and the newly-seen culture of the past. This is not a hive mind, not the multitude. There are still texts meant to be fluid, partial, ephemeral, and texts written with other kinds of craft and other kinds of long-term prospects in view.
Moreover, much of the postmodern view of diminishment was essentially despairing, a kind of mournful cry for the unities and power of the failed modernist subject. DH’s diminishment is both pragmatic and hopeful. Pragmatic in that it describes how culture actually gets produced, and thus liberates us from the psychologically burdening and idolatrous worship of the Great Men and Women who create culture (and scholarship) that ordinary people can only consume (and cite). It reveals that most authors (in whatever medium and institution) are just little people behind the curtain, aided in making a big show by the machinery of criticism and the accumulation of cultural capital among elites. Across every medium you care to name, what digital technologies are revealing is that the set of people who make “good culture” is vastly larger than what the post-1945 gatekeepers of high culture claimed: that there are hundreds of good photographers, webcomics creators, fiction writers, scholarship producers, documentarians, sketch artists, for every one that those gatekeepers acknowledged.
The hopeful part of it, which drives Fitzpatrick’s book, is that in recognizing that this is how culture not only is, but probably always was, we can design intentional practices of cultural production and knowledge dissemination that will use our new technologies and our new understandings as rocket fuel for a culture, a politics, a way of being that really will be novel. But this in part is just about self-knowledge, about honestly recognizing what we have been already and living with ourselves as we are.
From the department of pointless but compulsory exercises: every single time Rick Santorum or anyone with similar views says the following two things:
a) What, you want gay marriage? What’s next, legitimating polygamy?
and
b) The only form of marriage that any human society in all of human history has ever legally sanctioned is between one man, one woman,
the following rejoinder should be automatic from anyone in the audience to whom these things are being said:
c) Actually, Rick, the most commonly sanctioned or legalized form of marriage in human history across a wide span of societies has been polygamy, albeit with numerous variants. You might notice this if you actually read the Bible like you claim to.
However, there’s something more at stake in this special cultural conservative version of an all-Cretans-are-liars paradox. It’s not just a question of whether it’s ignorance or cynicism lurking behind political pandering.
What this paired sentiment expresses more deeply is a have-cake-and-eat-it-too vision of modernity and progress among cultural conservatives, and not just in the United States. I see something of the same in the most skilled recyclers of the tradition-modernity relation that was given its undead power under colonial rule in 20th Century African societies.
If I were able to actually have a conversation with Santorum in which the historical reality of sanctioned polygamy in most human societies was made impossible for him to ignore or soundbite into oblivion, I’m willing to bet that the likely way out of the trap would be to argue that contemporary life has overcome that old evil, that we’ve progressed. Santorum and other American Christian conservatives would likely put the origin of that progress somewhere other than where secular liberals would. They’d probably ascribe it to the rise of Christianity, all the way back to the early Church, whereas a more secular (or at least not religiously conservative) view would probably be that contemporary companionate, monogamous marriage (or any companionate, monogamous relationship, really) is a direct consequence of the working out of liberal individualism and rights-based personhood after 1750.
But it really doesn’t matter which claim you turn to. If you think that the relative eclipse of polygamy (still practiced and legally as well as morally sanctioned in many parts of the world) is a good thing, as I presume Santorum does given his suggestion that legally sanctioning gay marriage would open the door to polygamy, you believe in progress, that some aspects of the human condition have improved over time through the deliberate efforts of human beings to reform or change their social structures. And the moment you believe in that, saying, “It’s natural for people to live a certain way, all societies have done it that way” is off the table as a justification of contemporary policy, whether or not your claim about the naturalness of living that way is true.
(Which, in fact, Santorum’s claim about the universality of nuclear families and monogamous marriages is not. Not in any way, including its address to homosexual practices. The foundation stone of ‘the Western tradition’, classical Greece, very much included sanctioned homosexual relationships between male citizens, for example.)
The moment you accept that progress is the real explanation for a transformation in human practices that you defend or endorse, you shouldn’t be able to invoke the universal, unchanging natural character of that practice against some other argument for yet another change or reform.
And yet, of course, this is done all the time, because the rhetorical alternatives are to either embrace arbitrary bigotry or construct some weird Tower-of-Babel claim about the future consequences of reform. E.g., in the case of gay marriage, if modern companionate relationships are a good example of progress, that means that we’re capable of changing how we legally and socially sanction and regulate marriage or relationships for the better. If we’re capable of that, why not include sanctioning companionate relationships between same-sex couples? With the invocation of unchanging, natural traditions disallowed, the only ‘why nots’ left are: because we should hate or despise same-sex couples for fundamentally arbitrary or non-rational reasons; or because sanctioning same-sex relationships would lead to further bad consequences. American cultural conservatives often take a stab at the second argument in public discourse (indeed, that’s where Santorum leads into his ‘oh noes bestiality-will-be-legal’ line) but this is an even easier set of arguments to puncture: either the imagined consequences are those which already follow in full measure from legally sanctioned heterosexual relations or they involve a vision that legal sanction is the same as contagion, that it creates practices that would not otherwise exist, a belief that has a lot of odd collateral implications.
I’m not sure what to think of his argument for a revival of price-setting collaboration between small liberal-arts colleges, or his general vision of how to approach tuition, but I think any student, parent of a student, or employee of a selective private college or university would find his description of competition for applicants fairly interesting. I think it’s fairly on point: selectivity is the first and last guarantee of excellent outcomes for higher education institutions. If you can convince the best potential students to come to your institution, you’re going to graduate people who reflect well on your college whether or not you do right by them while they’re students. Arguably all you have to do is avoid making them markedly worse off intellectually or practically for those four years. This is of course why assessment is such a vexed, fretful issue for faculty and administrators in elite higher education: proving that you add value above and beyond what a bright, skilled student would be able to do without exposure to your courses, environment and resources is really difficult.
This connects to the other big theme of Ferrall’s book, which is that a liberal arts curriculum that’s clear-headed about what that term means is the source of that added value. I completely agree with Ferrall’s assertion that a clear-headed approach to the meaning of “liberal arts” paradoxically involves an insistence on its mystery, that the liberal arts approach is the complete opposite of a vocational, career-specific curriculum. A liberal arts curriculum, in his view, has to be resolutely against constrained preparation for a specific career or purpose.
I’m largely sympathetic to this view, and Ferrall uses the familiar Jedi mind-trick of insisting that this non-preparation is in fact ultimately a better preparation for many white-collar professions, that the creativity, innovation and flexibility which are recognized both as important human values and as a key to the economic future are developed best through a liberal-arts education. And yet, I’m not sure his book helps much with explaining either what the liberal arts are or how best to structure an education around them, particularly not for the skeptical publics that increasingly look for concrete vocationally-oriented returns on investment out of a college education.
At least part of his argument is aimed less at a wider public and more at faculty at small liberal-arts colleges, whom he clearly regards as less than reliable custodians of the liberal-arts ideal. Two things in particular worry him. First, that faculty are trained and professionalized around a commitment to specialized disciplinary knowledge and for that reason are willing to countenance certain kinds of vocational or narrow pursuits. Ultimately, he suggests that at least some of us are the wrong kind of people for the liberal arts, and that our wrongness is aggravated by our training. Anyone who has read this blog before knows that I’m at least somewhat in agreement with this characterization. And yet it’s not clear at all how you would go about building a faculty with a wider range of preparations drawn from a wider variety of backgrounds who would still be able to teach within a college curriculum in a way that recognizably related to the teaching of other faculty. Ferrall doesn’t help at all on this score, and I’m not sure anyone could. You dance with them that brung you. So the more realistic question is what SLACs could do to provide an alternative pathway for professional advancement for their faculties that widened their base of experience and focused their attention on other audiences besides colleagues sharing their immediate specialized interests. Ferrall is very intent on arguing that selective colleges should once again be allowed to collaborate in setting the terms of their competition for qualified students, but he doesn’t have much to say about this kind of potential collective effort. I don’t know myself what such an effort might entail. I think there are some thumbs that could be placed on various scales that would make faculty with a different vision of their professional goals feel at least somewhat fulfilled or appreciated rather than being the last few freaks left on the Island of Misfit Toys.
Ferrall’s second concern is that faculty spend way too much time obsessing over curricular design. Here again, I tend to agree with his view that a liberal-arts approach can come from anywhere and anyone, that arguing over precisely which fields or subjects need to be covered at precise proportionate amounts does nothing to ensure that the institution as a whole delivers a liberal-arts approach. I’m substantially more indifferent than many of my colleagues to those kinds of concerns. However, I don’t think Ferrall is sufficiently curious about asking why faculty tend to be so emotionally and professionally invested in these kinds of conversations. I may be less so than most, but I can get my blood up pretty quickly when we start talking about whether we need this or that subject, this or that methodology, this or that discipline in particular measure. Maybe I flatter myself by thinking that I teach to the liberal arts in the spirit that Ferrall describes, but caring about the craftwork of scholarship and the substance of teaching very naturally leads into treating curricular design as a vitally important issue. That’s what a good liberal-arts teacher does in the classroom: address other subjects, other approaches, other ways of seeing and doing, in relationship to the confines of a particular topic or discipline. You have to be excited by both intellectual and practical questions (and sometimes argue that they’re the same thing), and being excited naturally means you have views about how to maximize exposure to those questions in the curriculum as a whole, how to enrich the environment for yourself, your colleagues and your students. You might end up with a very different view of how to structure the work of faculty (say, putting far more emphasis on curricular designs which promote movement of both faculty and students between and outside of disciplines) but you’re not going to be indifferent to curricular issues, as Ferrall implies faculty ought to be.
The frustrating thing for me about the book, however, is that it really does not do much to move the ball downfield in terms of the toughest challenge that SLACs face, which is convincing many Americans that the non-directedness of a true liberal-arts approach is the very best way to educate bright young people. Paradoxically, it’s getting easier and easier to convince many people outside the United States that this is the right way to go, as they try to break down some of the constraints of highly vocational systems of higher education that are tightly constrained by government policies and professional licensing. Ferrall’s probably right that selective liberal-arts colleges have to be far more internally clear about what they’re doing before they try to take up a renewed attempt to persuade the public of the value of their project. Still, somebody’s going to need to help us think about how to get beyond tired old cliches like “critical thinking” whenever we feel ready to begin that approach, and Ferrall doesn’t really provide much along those lines.
It’s a point that’s well-understood in some circles and completely not in others. Witness the degree to which users continue to express some preference for couching search queries to Google and Siri in the form of natural-language questions: according to Bo Pang and Ravi Kumar, that tendency seems to be steadily increasing rather than decreasing as users become more familiar with the functioning of search engines. Users sometimes relate to Google as if it were an oracle, a non-human being with its own personality and knowledge.
Understanding search algorithms as Jones describes them means understanding that however you phrase your query, you’re really asking us, not a creature named Google or Siri. It’s not quite garbage in, garbage out, but it is “what the set of all users and producers of online information know in, what the set of all users and producers of online information know out”. The really tricky thing is to understand how extensive use of that process both changes and expands that set: not just that we put more information online, but that information begets information.
When I started research on the content of children’s television for a co-authored book that was published in 1999, I had three principal sources of information to draw upon. First, my memories and my brother’s memories of watching TV. Second, the memories of contemporaries gathered from real-world conversations and in online discussions on Usenet and other early forums. (Hooray for alt.society.generation-x!) Third, published resources of various kinds, both old and new. Online information about children’s television, independent of message board conversation, was fairly sparse.
Only a few years later, Wikipedia, YouTube and so on came into existence, and at the same time, owners of media libraries began to much more comprehensively push their content out the door in various formats. Today if I want to see every episode of Jabberjaw, know every voice actor’s casting on the show, get comprehensive information about its production and broadcasting, the title character’s appearances in other Hanna-Barbera shows, and the lyrics to a song about the show by the band Pain, I can.
The general implications of this shift are constantly, incessantly discussed. But what I’m not so sure we fully appreciate are the specific implications of online information as a mirror of what we know and how knowing what we know is something that we’ve never really known before.
It’s true that there are still many things that people know, many kinds of information, which are not strongly represented in online repositories. It’s also true, as Eli Pariser has eloquently explained, that both the deliberate infrastructure of online information and the unintended practices arising from our collective use of it are actively excluding or hiding some information through a progressively tighter series of feedback loops. Even if the “filter bubbles” were popped in some fashion, there would be human ways of knowing and interpreting that could never be adequately included in the most capacious digital informational space imaginable.
Those cautions noted, there is still a huge unused potential for generative changes to the nature of knowledge production, but realizing it requires the intellectual paradigm shift that Jones describes: understanding the mirror of online information for what it is and looking closely at the never-before-seen reflection it provides. Just to cite one example that I have harped on so constantly that I’m sure my Swarthmore colleagues are tempted to punch me in the face every time I say it, suppose that every professor in every institution in the United States published every syllabus they taught in a form where the materials for the course (texts, images, films, etc.) were easily stripped and aggregated as metadata.
Suddenly the canon in a particular field of study would not be a matter of folk knowledge within a discipline, or would not be knowledge residing in four or five highly fragmented and proprietary archives (publishers, disciplinary associations, bookstores, etcetera). We’d know at any one moment what professionals in a particular field of study deemed to be the most teachable, useful or authoritative material. We’d know over time how that judgment had changed. We’d know if what scholars represented as authoritative through citations was significantly different from what they chose to teach.
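To make the thought experiment a little more concrete, here is a minimal sketch, in Python, of the kind of aggregation I have in mind; the records, fields and titles below are invented for illustration, since no such shared repository of machine-readable syllabi actually exists:

```python
from collections import Counter, defaultdict

# Hypothetical records: each published syllabus reduced to
# (year, field, list of assigned works). In practice these would be
# stripped as metadata from syllabi published in a common format.
syllabi = [
    (2005, "african history", ["Things Fall Apart", "King Leopold's Ghost"]),
    (2005, "african history", ["Things Fall Apart", "The Wretched of the Earth"]),
    (2011, "african history", ["King Leopold's Ghost", "Half of a Yellow Sun"]),
    (2011, "african history", ["Half of a Yellow Sun", "The Wretched of the Earth"]),
]

# What the field treats as most teachable or authoritative at any one moment...
canon_by_year = defaultdict(Counter)
for year, field, works in syllabi:
    canon_by_year[year].update(works)

# ...and how that judgment changes over time.
for year in sorted(canon_by_year):
    print(year, canon_by_year[year].most_common(3))
```

Even a toy tally like this makes the point: the de facto teaching canon stops being folk knowledge within a discipline and becomes something you can actually look at, year by year.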
Notice all the things that this knowledge doesn’t resolve in and of itself. It doesn’t tell us what to teach. It doesn’t tell us why or how to teach it. It doesn’t tell us if there’s a very large missing set of materials that professors would prefer to teach but cannot obtain (either out-of-print materials or things which have never been written or created). It doesn’t tell us what students did with this material, or how and whether they learned from it.
What does it tell us, then? It tells us what mirrors always tell us, if we look at them without flinching: the gap between how we look and how we imagine and claim we look. The mirror of information, our multitudinous automaton, shows us hidden depths we’ve never noticed and blemishes we’d rather not see.
Some of what we see makes clear what a mirror will never show us (whip out your Zen koans here: your face before you were born and all that).
Some of what we see puts older just-so stories and tall tales in their place, and that’s no small feat. Think about the way that academics have traditionally represented (and deconstructed) canons to each other. A comprehensive picture of pedagogical usage might surprise us in all sorts of ways, change our sense of what we think our practices are. Yes, with some potential for perverse or unintended effects, as in the case of comprehensively tracking citations and using citations as a metric of scholarly value. But mostly I think it is fantastically generative to be able to put aside a massive swamp of arguments and studies that never get beyond an initial attempt to answer the question of “what is it that people actually do“, whether or not the answer is what we expected it to be. Whether we’re scraping data from World of Warcraft to find out what the distribution of character choice is, compiling the totality of all print publication in world history, or learning what it is that we actually all use in our classrooms, what we see isn’t just the end of some fumbling-in-the-dark, it is the beginning of some more interesting conversations.
The mirror of information clears out the dead brush from the undergrowth. If we know, really know, that some high-culture canons are an infinitesimal fraction of the totality of global cultural production over the last five hundred years, it sharpens our conversation about why that happened, whether we should be studying all of the occluded culture that was lost in the light of a thin crescent of publication or creation, or whether there’s some reason to stay focused largely on that fraction. If we really know what we’re all teaching, what we value in that context of usage, we might have a far clearer view of what we’re trying to accomplish in creating scholarship, of how we read and interpret knowledge, of what works out in usage.
Understanding that search algorithms are a mechanical Turk–that it’s just us hiding inside–is, if we choose to see it as such, another chance to step towards wisdom through self-knowledge.
What Stark is planning to argue (and enable) connects to one of the thoughts behind my own warnings about graduate school, namely, I do not want prospective students to think that an MA or Ph.D (or a J.D., etc.) is primarily about learning how to do something or an extension of the spirit of a liberal arts education. It can be, but usually it’s not. (Exhibit A for the prosecution: the recent article in the New York Times that pointed out that most U.S. law schools don’t really teach their students how to be lawyers, unless the kind of law they’re going to practice is some weird, rarified domain where scholarly approaches to law have some unusual weight.) Graduate school is primarily about credentialling for particular professional objectives. That’s not particularly wholesome but that’s the way it is for now. If the goal is to pick up a new bit of concrete knowledge or skill, there are other and better ways to do it. If the goal is to extend a lifelong engagement with knowledge and critical thinking, graduate school will generally get in the way.
That said, a couple of cautionary thoughts about the project. First, while it’s possible that someone could self-train to understand and interpret neuroscience (for one example), there really are quite a large number of expert domains where understanding and practicing are different matters. An autodidact reader of neuroscience could learn to interpret and evaluate research, teach or write about the field, and imagine or advocate new directions for study or experiment, but it’s still pretty reasonable to have a bright, sharp fence up around “do neuroscientific experimentation on living subjects” and “conduct neurological interventions, surgical or otherwise, on living subjects”. I think it’s very true even there that existing researchers and doctors learn most of what they learn through experience rather than in formal classroom settings, but this is one of many cases where requiring certification of expertise and limiting that certification to appropriate institutions is the only way to hope for some kind of baseline minimum qualification before we collectively permit someone to engage in practices that have very high potential for harming people. Maybe you lose the occasional autodidactical genius who would come up with a completely new medical or research technique that way, but I think you also lose a lot of Dr. Frankensteins and quacks. Pope Brock’s Charlatan, a history of John R. Brinkley, the quack doctor who built a thriving practice on surgically inserting goat testicles into the scrotums of American men looking to revive their sexual potency, is a pretty good reminder of why American society increasingly embraced formal education and certification as a requirement for some kinds of expert practice.
Second, I completely believe that you can learn techniques of autodidacticism from people like Cory Doctorow and Quinn Norton, that at least some of how they learn new things is reproducible. As a self-identified generalist, I feel I can show other people how I do what I do in a way that’s partially reproducible. At the same time, just as I know that I hit some pretty firm cognitive limits in certain domains of intellectual practice, I do feel that there are some people who just are not going to be able to be autodidacts no matter how clear and reproducible the instructions on the box might be. Some people don’t think that way, some people weren’t brought up that way, some people have adapted so strongly to the structure of formal education that it would do them more harm than good to try to do without it. It’s Stark’s project, but my meddling-kids advice would be that the most irritating thing about a how-to project might be when it implies that its advice has a potentially global or universal scope. Even with projects, ideas and approaches that I like, I’m finding that I’m very unsatisfied if there isn’t serious attention given to shortcomings, failures and limit conditions. It’s good to interview people who are successful self-learners, but there have got to be some casualties out there too, whether it’s people who tried to learn how to operate a table saw on their own and cut their thumb off or people who have dedicated themselves to the independent mastery of calculus via a dozen routes and had to eventually surrender.
For various reasons, I’ve found myself this semester talking with colleagues about the migration of students through our curriculum: the courses where they busily cluster, the lonely cobwebbed courses, the majors and courses that follow regular oscillating cycles of interest. We’ve been trying to figure out which classes are interchangeable and which are not from a student perspective, about what our students see when they look at the curriculum.
I don’t think that any of us really know what kinds of decision rules students are consciously and unconsciously employing. Each department and each individual faculty or staff member has his or her own folkloric narrative that explains some or all of the patterns in enrollment. Sometimes that’s based on a smidgen of hard data: real enrollment numbers over a five or ten-year period, some kind of assessment data or evaluation from students, frank conversations with a handful of perceptive students. Some faculty and staff work in contexts where they get more insight into these questions, and others (such as the education faculty) have special expertise that’s relevant for thinking about the problem. But I honestly don’t think anyone has a really systematic handle on the issue at any scale, whether it’s guessing about the total movement of students across the entire curriculum or about their presence or lack of presence in any individual class.
There are reasons why it’s a hard problem to investigate. It’s not uncommon for faculty to misperceive (in either direction) their own enrollments in relationship to the overall distributions, even when they have good data to consult. In part, that’s because the workload involved in teaching doesn’t necessarily scale to the number of students nor is it the same across departments or even between any two individual faculty members. And at least one of the reasons why students flock to some classes and avoid others has to do with their perceptions (and perhaps sometimes misperceptions) of faculty quality and that is a subject that’s nearly impossible to talk about openly in any official context without quickly descending into cruelty and recriminations.
But there’s also no way to completely avoid trying to figure out some of what’s going on. If students are pounding down the doors of a single professor’s courses but not of department colleagues or faculty teaching similar subjects, it might be safe to mark that off as a case of pedagogical charisma, which has no further institutional implications (save that you want to figure out how it’s done and build some of that into a vision of best practices). If an entire program is getting hammered by enrollments, or a single course is constantly over-enrolled regardless of who teaches it, then it’s imperative to figure out why that is. On the flip side, if a course or program is in relative terms under-enrolled (not because of a requirement of small class sizes), it’s important to figure out if that’s because there is a consistent movement of students away from the subject matter, because the course or program is doing a poor job of labelling or framing the subject matter, or because of student antipathy to a particular faculty member. In all of those cases, there are big implications for long-term planning–and big risks to just accepting whatever explanatory mythology comes most readily to mind. When all of that information is put into the structures of a real curriculum with all of its moving parts, the possible explanations for enrollment patterns quickly multiply into near-incomprehensibility. General education requirements and major requirements, various subtle and gross devices that departments and divisions put into place in order to manage, route, repel or capture enrollments (and all their unintended effects), leave cycles and temporary faculty, new courses that are poorly promoted and old courses that are abruptly cancelled, and so on, all exert serious influence over what students take and avoid.
I accept, therefore, that there’s going to be a pretty hard limit to any model that accounts for (and tries to predict) student interest in courses and majors over a five or ten-year period. But what I’d love to be able to do is speak with a bit more confidence, based on a robust mix of qualitative and quantitative data (especially quantitative data that tracks the most common patterns of total enrollment over four years, rather than data about isolated courses or departments), about the relative weight of the following factors (a rough sketch of that kind of tracking follows the list):
1) What students (and their parents) believe about the match between particular subjects or disciplines and particular careers or the likely job market at the time of graduation.
2) What students believe the content of particular disciplines or courses is before they begin their studies and how those beliefs change over four years of study.
3) How much of a role the titles, descriptions and “marketing” of particular courses play in the decision to sign up for a course.
4) How much students are driven by strategies that respond to “traffic management” within the curriculum (trying to secure places in desirable mid-level courses by pursuing entry to an undesirable entry-level required course, for example). Equally, how often curricular barriers such as requirements prevent students from taking courses that they believe they would like to take.
5) How often students believe their enrollment decisions to be driven by a strong attraction to a particular topic, idea, methodology, discipline that they have developed after beginning their studies at the college. (Especially when this represents a change from the initial perceptions relevant to 1 and 2.)
6) How often the reputation of individual faculty members (quality, difficulty of grading, etc.) plays a major role in the decision to enroll.
7) What courses students consider to be interchangeable. (E.g., if a student is lotteried out of a course, what courses will they regard as reasonable substitutions and why?)
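The qualitative side of this obviously won’t reduce to a script, but as a rough sketch of the quantitative side, here is what tallying whole four-year trajectories (rather than enrollments in isolated courses or departments) might look like in Python; the records and department codes are invented, and real data would have to come from the registrar, suitably anonymized:

```python
from collections import Counter

# Hypothetical registration records: one entry per graduating student,
# listing the departments of courses taken in each of four years.
students = [
    {"year1": ["BIOL", "HIST"], "year2": ["BIOL", "ECON"],
     "year3": ["BIOL"],         "year4": ["BIOL", "ARTH"]},
    {"year1": ["HIST", "ECON"], "year2": ["ECON"],
     "year3": ["ECON", "POLS"], "year4": ["ECON"]},
]

# Tally the most common whole-curriculum trajectories across four years.
trajectories = Counter()
for record in students:
    path = tuple(tuple(sorted(record[f"year{i}"])) for i in range(1, 5))
    trajectories[path] += 1

for path, count in trajectories.most_common(5):
    print(count, "student(s):", " -> ".join("/".join(year) for year in path))
```

None of the factors above falls out of a tally like this on its own, but it would at least give us a baseline against which the folkloric narratives could be checked.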
Having at least an approximate model covering these factors isn’t just for planning. It’s also required for persuading students that the vision of liberal arts on offer to them in a curriculum is one that they should accept and embrace. If we don’t know how our students see the curriculum, we can’t really talk with them about what we believe they should be seeing, let alone what they might consider choosing. And this swings both ways, potentially: we might find out that what we think the curriculum contains or says is palpably not what students experience in their actual coursework, or that students are seeking a plausible and wholly legitimate alternative curriculum that’s still completely commensurable with the spirit of the liberal arts, in which case it’s the faculty that should be persuaded to nudge or move the emphasis of their teaching in some new directions.
I bookmarked a blog entry earlier this month by Elijah Meeks that was endorsing a longer essay by Natalia Cecire about the relationship between theory and tools in digital humanities work, and also the relationship between humanists and technologists. Meeks and Cecire both argue for humanists to reassert theory, to not be driven by the promise of tools which elide or erase the need for difficult conceptual work, to not accept the primacy of code and coders. Meeks observes that this is almost a Thunderdome struggle: two paradigms enter, one paradigm leaves.
If so, I guess I find myself a spectator who has money down on both combatants but who is really just waiting for Master Blaster to show up and put both together–or maybe I’m looking to be at another venue altogether.
I think Cecire in particular approaches theory in a fashion that I’ve grown more and more unsympathetic to in my own thinking and writing, as something which is recognizably achieved in positive relation to its difficulty and its refusal to reach closure. She notes that theory in her essay is not 80s-style Theory, but “a catch-all term for thinking through the philosophical and cultural consequences of things”. I’m good with that, but I think we should be wary of the idea that thinking through is always a present-tense gerund, that theory ends if we’ve thought through to arrive at a commitment to a practice. This is what the “less yacking, more hacking” sentiment is partly about even from some humanists, not just coders. Cecire walks up to the edge of a pretty old trope, I think, that making and theory are opposed kinds of work, that to make something without a perpetual accompaniment of theoretical unmaking is to leave theory behind. Theory in this sense seems to involve a notion of a principled commitment to being on the perpetual verge: to consider, to problematize, that theory is a predicate and prelude whose horizon stretches infinitely out. She suggests that for some digital humanists, their practice is a refuge from theory, an evasion. I guess I think this is another kind of evasion, an unwillingness to see this division as something more like a disagreement between theories rather than theory and untheory. If some digital humanists think that a THATCamp on narrative or biopower is only a THATCamp kind-of-session if it’s about new ways to visualize biopower using a digital tool or ways that the meaning of narrative changes in hypertext rather than an abstracted reflection on narrative as a theoretical category, that is not untheorized. That is an argument that theory emerges out of certain kinds of commitment to practice or experience, or that it can’t be disaggregated out of creative or interpretative action, or that theory should be predictive, instructive, testable, experimental. One can, to use a favorite rhetorical construction of critical theory, contest or problematize that view, but don’t confuse it for absence or flight from theory.
I should be clear: I’m completely with Meeks and Cecire that simply waiting around for the tools to be created and then adapting or living with them as the coders see fit is absolutely the wrong way for digital humanists to operate, whether we’re looking to produce knowledge or create artistic works. This is precisely one of my major frustrations in game studies: I don’t accept the arguments of many people involved in the production of virtual worlds (both massively-multiplayer online games and open-world solo games) that procedural content and sandbox designs are technically impossible or of no interest to most possible audiences. I don’t have the coding ability or resources to prove them wrong, but the sociology and mindset of the gaming industry is a tightly wound, recursive loop that regularly regards all sorts of creative, successful work as impossible until someone manages to do it. Part of the work of humanists is to look at how expressive media have produced or could produce novelty and invention from within their own potentialities in defiance of what their standard practitioners believe to be possible or desirable.
In our own work as scholars and artists, digital humanists need to imagine not just tools to do work that we already know we want to carry out, but theories of representation, aesthetics, interpretation that will think beyond, against or around “tools”, around technologies. But I think Cecire and Meeks pine for a sovereignty over tools and medium which not only doesn’t exist in the digital humanities but has never existed in any non-digital medium. Writerly forms of expression and representation, including scholarship, were just as dependent on tools that scholars and writers did not create and did not control. There have always been “coders” in that sense: font designers, layout specialists, copy editors, printing-press designers, booksellers. The bizarre publishing regimes which still have immense power in academia exist in part because of an older political economy of printing: it was once too expensive and too technically difficult for scholarly authors to operate the physical plant of publication in collaboration with one another, so we gave our work away to companies who then sold it back to us at a high price. Almost no humanistic scholars in 1960 knew much of anything about the technical constraints or economic structures that gave highly particular shape to their work (or forbade other kinds of work). There are some interesting exceptions in artistic practice: many visual artists (and scholars of visual art) were and are trained in the technical infrastructure of their expressive work rather than just letting someone else provide their paints and inks and canvases and quarried marble. And many humanists for a very long time have had at least a passing ability to describe the technical infrastructure governing their work, if not an ability to “get under the hood” and do it for themselves.
Whether or not I can code, I’m comfortable continuing to theorize about what we could do, what we should do, what the point of humanist knowledge is, digital and otherwise, and where possible, letting that become an instruction to coders, a complaint against coders, a refusal to deploy or accept technologies or a user-level hacking of their capabilities to some unforeseen end. But at the same time, both the scholarly humanities and expressive culture have always had some complicated dependencies upon technologies of representation that they do not master, control or own. That’s sort of what we study at many junctures: the emergence of culture and thought from technologies whose designers neither desired nor anticipated what their technologies would produce. I’m no longer content to peg my dissatisfactions and worries on the incomplete, partial sovereignty of myself and my peers over some domain that we imagine we are entitled to possess, as if the completion of sovereignty would open the doors of a better kingdom. Digital media are good at reminding us of how much of the cultural and intellectual future is an unpredictable eruption from material, social and imaginative starting places. Rather than try to smooth it out, I’d rather fasten my seatbelt and enjoy the bumpy ride.
I’ve had a pretty demanding series of weeks where I couldn’t afford my usual distractedness, so the backlog of things I’ve been meaning to comment on is considerable.
To start, I had bookmarked a thread at Crooked Timber on Steven Pinker’s newest book that claims that violence is on the decline in human history. Chris Bertram and most of the CT commentariat are scornful of this argument, in no small measure because it’s Pinker making the argument. For the same reason, I’m also inclined to jump on the dogpile. Pinker usually assembles an army of straw men that could outnumber the terracotta soldiers in the biggest Chinese tomb, and makes them so flatteringly attired for the confidently preformed common sense of a certain kind of enthusiastic but unwary generalist reader that it takes either a withering dose of disproportionate snark or a patient long march of skeptical questioning about details and complexities to get people to look underneath the attractive exterior.
I haven’t read Pinker’s new book yet, but I can see from the CT thread and elsewhere that there are likely to be many assertions big and small in it that I’d challenge or question. Most of the CT commenters rightly zero in on the big epistemological and definitional problem that would haunt any book by any author that was intended to characterize the general arc of global history in terms of violence: what is violence, anyway? There are some very precise philosophical and empirical hairs to be split if you’re going to say that any number of state or official acts of violence are not ‘violence’, that the paucity of quantitative data about most areas of the world besides Western Europe and the United States justifies using the West as a metric of ‘universal’ trends, and on and on. Does every time a Belgian colonial official or plantation manager used the chicotte on an African worker or peasant count as one incidence of ‘violence’? It ought to. I am not going to put good odds on Pinker counting it as such. Does it count every time a parent slaps a child? A fistfight breaks out in a bar? A militia member loots at gunpoint? A Gitmo detainee gets waterboarded? An enforcer sticks a hockey forward? A bully menaces his victims without touching them?
And yet, there’s probably a version of Pinker’s argument that I would be perfectly ok with. As the commenter Soru says at CT, “Anyone who seriously thinks modern Norway is comparably violent than the land of the Vikings literally belongs in an institution, or at least under police watch so they don’t act on their belief.” The problem is that charting or counting or enumerating violence is simply the wrong way to go about making that point.
I often struggle with how to think about premodern violence (whether we’re talking about 16th Century France or the Luba Empire or the expansion of Mongol rule). Something like the patented Foucauldian storyline of epistemic transformation seems to be in order: violence gets named and imagined and tracked and lived in and on the body in modern states in ways that almost can’t be compared to a variety of premodern ways of experiencing and understanding ‘violence’. And yet I don’t want to be a silly nominalist about this or any other point. There’s some continuity and relationship between getting killed by an iron spear hurled from a Hittite chariot and an incendiary dropped on Dresden, between a woman beaten by a spouse in a premodern household and a modern one, between murders in the night across time and space. People in any given premodern society may not have imagined violence categorically as we do, or connected it to a particular belief in individual rights, or felt that moral progress was linked to the reduction or elimination of violent action. But just about no one ever has welcomed being beaten, tortured or murdered themselves, even if they were enthusiastic practitioners of beating, torture or murder.
The thing that seems right in some way to me is that modernity’s understanding and mapping of violence names it as a new kind of problem and connects it to new structures of power as well as to new kinds of self-fashioning and aspiration. Somewhere in there ‘progress’ beats yet, both as something which has happened and something which has yet to happen. It does seem to me to be important to not bury that lede in an avalanche of skepticism about the details–or the author.
1) Occupy is already a success if the model is to provoke reaction from its chief targets. It’s hard to imagine pundits passing up the chance to comment on anything: the 24/7 news cycle is a harsh taskmaster. Nevertheless, the number of surly, whiny or malicious commentaries as well as the dropping of any pretense of an ethos of objectivity from some reporters has been pretty striking. What’s more interesting is the extent to which active responses (as in Oakland) or threatened responses (as in New York City) from the powers-that-be have taken place. I honestly expected municipal and other authorities to just patronize and wait it out. I think there may be real anxiety inside the crony-capitalist/Washington nexus about the possible spread of mass protest or public discontent.
2) I’d continue to argue that there is a sociological limit in the current iteration of Occupy that mirrors similar limits in progressive electoral politics, and that this is where the reaction of Tea Party representatives has been instructive: they don’t want to explore the obvious connections and real overlaps between some of their rejection of the status quo and Occupy because they don’t like the sociological habitus of the people involved (a sentiment shared very much vice-versa). However, the single least interesting, least useful criticism of Occupy in circulation is that it lacks a concrete set of demands, that it needs some kind of concrete policy platform that politicians could adopt. This misses the point in every way possible. First, Occupy’s critique can’t be boiled down into something like “Pass a new version of Glass-Steagall”; the real issue is “Why did we get rid of sensible governance and guardianship of that type in the first place, and why can’t we have it back now?” You can’t solve our current situation with the passage of some laws if the institutions charged with implementing them will subvert, ignore or supersede those laws. You can’t solve our current situation if the next regulation you create will promptly be evaded or mocked by those it was intended to regulate. (Bank of America’s debit-use charge, I’m looking at you.) It’s the system that’s broken: you don’t solve systemic failure with a five-point legislative plan. Demands in this context have to be something more like, “Unelect everyone and comprehensively reform the process of electing a new group of representatives and leaders, expect accountability in both economic and political life and set real consequences for the failure of that expectation, make transparency in both business and government one of the sacred watchwords of a democratic society”. Maybe Occupy needs more of a boiled-down, two-sentence root-level philosophy or viewpoint (parity with something like “down with big government”) but it doesn’t need a set of demands that the political-financial complex can promptly ignore or play pointless legislative shell games with.
3) I think Matt Taibbi provides as good a “root-level philosophy” as you can ask for: that Occupy is not against wealth, is not against competition, is not against business, is not against banking. It’s a very specific argument that the game as it stands is rigged, that the cheaters are being allowed to operate with impunity, that the safeguards against cheating are compromised, and that the cheats are running the risk of destroying the game itself.
As my readers and colleagues know, I’m hopelessly addicted to analogies and metaphors. Here let me try an analogy that I don’t think is particularly metaphorical, that is in fact quite directly applicable to this situation: the history of the computer game Diablo II.
The game was a huge commercial success and initially supported a large, thriving and heterogeneous multiplayer community where the range of participation went from casual players who played few other games (online or otherwise) to dedicated, hardcore players with long experience in a variety of gaming genres and forms.
Diablo II allowed players to trade magical items obtained through play, as well as to compete with one another in various ways. It was consequently one of the first multiplayer games to generate an unplanned real-money transaction (RMT) market, as players offered desirable items to other players in return for cash payments through various third-party venues. This being a fairly new kind of thing at the time, neither the player community nor the game’s producer really anticipated what would follow. Initially, crucial data about characters was kept client-side, and so was relatively easy to hack. At first, only a small number of players used cheats in order to gain an edge in RMT transactions. At that point, the game’s multiplayer ecosystem was still relatively healthy: a large number of customers, a small number of cheaters. Arguably the cheaters may even have helped a bit by introducing highly desirable duplicates of items at a faster rate into the multiplayer economy. In short order, however, the ease of cheating, created mostly by a lack of governance and control over the playing environment on the part of the game producer, devastated the multiplayer community. Items lost all value as they were illicitly duplicated in massive quantities, and any sense of genuine competition between players evaporated as cheats proliferated. In the end, the cheaters were left to prey on each other, an activity which defines “diminishing returns”.
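Here is a toy model of that dynamic; it is not a reconstruction of the actual Diablo II economy (every number in it is made up), but it illustrates how quickly the collapse compounds once duplication and the lure of easy money feed each other:

```python
# Toy model: duplicated items flood the market, item value falls with supply,
# and the easy money keeps drawing in more cheaters until the thing they
# were profiting from is worthless.
supply = 1_000      # hypothetical count of a desirable item in circulation
cheaters = 10       # players running duplication hacks
base_value = 50.0   # notional starting cash value per item

for week in range(1, 11):
    supply += cheaters * 200             # each cheater dupes a batch of items weekly
    cheaters = int(cheaters * 1.5)       # easy profits attract more cheaters
    value = base_value * 1_000 / supply  # value collapses as supply balloons
    print(f"week {week:2d}: supply={supply:>9,d}  cheaters={cheaters:5d}  value={value:6.2f}")
```

By the last few iterations the items are effectively worthless, which is the point at which the cheaters are left to prey on each other.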
In the end, open cheating, or cheating which proliferates in the absence of governance and enforcement, is not even in the interests of the cheaters. But once a socioeconomic system moves headlong in that direction, its acceleration towards generalized disaster can be exponential. Cheaters themselves cannot be expected to stop that movement even if they understand that it’s not in their own interests, because they’ve specialized their economic activity to take advantage of cheats. The biggest hackers of Diablo II when it was at the tipping point probably couldn’t have played the game even marginally well if denied access to their hacks: the game had become about hacking at that point, and about the incomes they could obtain from doing so. When the prey left and the cheats became more difficult, the cheaters just went looking for some other racket. A parasite at some point can become too specialized in its reliance on a complex vector and on the ecology of a particular host: if, through its own efficient depredation or in concert with other stresses, it kills too many hosts, the parasite can’t undo its evolution. At some point in the 1990s, a fraction of financial capitalism became so dependent upon subverting or unraveling safeguards and so expectant of a level of profit obtained through government-protected market manipulation that it became effectively unable to back off and seek some more stable equilibrium–and its political partners became the same. The idea that Goldman Sachs in the last decade represents “the free market” is as laughable as saying that the 19th-century railroad industry in the US was a laissez-faire triumph: in both cases, plutocracy was secured through and within the state rather than in the absence of it.
Stopping that isn’t a matter of a policy here or a single bugfix there. It’s about a comprehensive change to the paradigm. It’s about the government of the people, by the people, for the people, not perishing from this earth.