Games and Gaming – Easily Distracted (Culture, Politics, Academia and Other Shiny Objects)

Gamergate. Shit, We’re Still Only in Gamergate.
Fri, 17 Oct 2014

A couple of nights ago, I got up to go to the bathroom. Still only partially awake, I flushed and stumbled back to bed, only to hear the gushing sounds of the toilet overflowing. I seriously considered just letting it keep going, but I did a U-turn and went back to plunge out the blockage and sop up the mess with towels.

That’s how I feel about writing about what’s going on with what has stupidly become known as “Gamergate” in the last month or so. (The title itself flatters the pretensions of the worst people drawn to it.) I really don’t want to, I’ve been trying to avoid it, but this whole thing is not going to go away. The truth is, for those of us who know both the medium and its audiences, the last month is not a sudden rupture that changed everything. It’s just an unveiling of a long-festering set of wounds.

That dense nest of pain and abuse raises such complex feelings and interpretations in me. I hardly know where to begin. I’m just going to set out some separate thoughts and hope that they ultimately connect with one another.

1) If there is such a thing as “a gamer”, meaning someone defined in part by their affinity for video and computer games as a cultural form, I’m a gamer. Games have been as important to me as books, both as a leisure activity and as a source of inspiration and imagination. Before I ever venture any deeper into the stakes of Gamergate, my most elemental reaction is raw disgust with other gamers who have the unmitigated arrogance to represent their feelings, their reactions, their ugliness as “what gamers think”, as if they’re the “us” being put upon by some other “them”. On several forums that I used to frequent before this last month, I’ve had the displeasure of reading other long-time participants anoint themselves as the representative voice of “gamers”. My first impulsive thought is always, “Look here, sonny Jim, I was playing Colossal Cave Adventure on the campus network in 1983, and Apple Trek on an Apple IIe when you weren’t even a lustful thought in your parents’ minds, so don’t tell me what real gamers think. I didn’t vote for you. You don’t represent me. You don’t represent most of the people who play games.”

2) As a result of my background, at academic meetings about digital culture and games, I’ve often identified myself, somewhat jokingly, as a “native informant” rather than a scholar who comes to games as an object of study with no prior affinity for them. (Which of course earned me a pious, self-righteous correction at one meeting from a literary scholar who wasn’t aware that I also work on African history, about how I might not know that the word ‘native’ has a complex history…) In that role, I’ve often found myself suggesting that there were insider or “emic” ways to understand the content and experience of games and gameplaying that many scholars rode roughshod over in their critique of that content. In particular, I’ve tried to suggest that there are dangers to reductive readings that only take an interest in games as a catalog of racialized or gendered tropes whose meaning is held to be understood simply from the act of cataloging. Equally, I’ve observed that seeing games as directly conditioning the everyday social practices and ideologies of their audiences (particularly in the case of violence) is both demonstrably wrong as an empirical argument and a classic kind of bourgeois moral panic about the social effects of new media forms, something that often leads to empowering the state or other forms of authority in very undesirable ways. I’ve argued, and still would argue, that at least some kinds of mobilizations through social media against racist or sexist culture are both too simplistic in their interpretations of content and counterproductive in their political strategies. I’m not going to stop arguing that certain kinds of cultural activism are stuck on looking for soft targets, that they avoid the agonizingly difficult and painstaking work of social transformation.

But this is another reason I hate the people associated with “Gamergate”. They are working hard to prove me wrong in all sorts of ways. I’d still argue that the kind of tropes that Anita Sarkeesian has intelligently catalogued are subverted, ignored or reworked by the large majority of players, but it seems pretty undeniable at this point that there is a group of male gamers whose devotion to those tropes is deeply ideological in the most awful ways and that it absolutely informs the way they think of themselves across the broad spectrum of their social lives, including their real relationships to women. It seems pretty undeniable at this point that there are men who identify as “gamers” who are willing to threaten and harm simply to protect what they themselves articulate as a privileged relationship to gaming.

3) But then, my protestations about complexity have always been checked by my own experience as a game-player and as an academic thinking about games. I’ve always known that the “Gamergate” types were out there in considerable numbers. Ethnographic studies of game culture have been thinking about this issue for years. Players themselves have been thinking and talking about it, every time they’ve tried to think of ways to defeat griefing, ways to keep female players from being harassed, ways to make more people feel comfortable in game environments.

In one of my essays for the now-defunct group blog Terra Nova, I noted how odd it was to find myself in virtual worlds like Ultima Online and World of Warcraft playing alongside teenagers and adult men that I intuitively recognized as the kind of people who had bullied me when I was a kid. Profane, aggressive, given to casually denigrating or insulting others, enjoying causing other people inconvenience and even real emotional pain, crudely racist, gleefully sexist. Not all of them were all of that, but many of them were at least some of that. In many environments, there were enough men like that to ensure that everyone else stayed away, or avoided many of the supposed affordances of multiplayer gaming. But maybe this is part of the problem, that geeks and nerds, especially those of us who identified that way back when it got you a lot of contempt and made you a target for bullies, convinced themselves that being victimized automatically conferred some kind of virtue on you. Maybe the problem is that I and others always felt that “Barrens chat” was the work of some Other who had infiltrated our Nerd Havens, when in fact it was always coming from inside the room. I remember once in junior high school when the jocks were bullying a mentally disabled kid by shoving him inside the shed where all the equipment was kept and then holding the door closed on him. They yelled for a couple of the geeky kids, including me, to come help them keep the door shut while the boy cried and banged and tried to get out. And it was so uncharacteristic for the jocks to ask us to join in that we almost did it just out of relief at being included.

Being a target doesn’t vaccinate you against being an abuser later on. In fact, it creates for some gamers a justification for indulgent kinds of lulz-seeking bad behavior, a sort of lethal combination of narcissistic anarchism with the sort of revenge-fantasy thinking that’s normally only found in the comic-book monologues of supervillains.

4) What I’ve seen since “Gamergate” became a thing is that some of the older male gamers who have always been clear that they were just as annoyed by subliterate teenage brogamers on XBox Live, that they also hated griefers and catasses in MMOs, that they also think badly of the most creepy posters on Reddit, lots of these guys who postured as being the reasonable opponents of extremists of any kind, have turned out to be not at all the disinterested or moderate influence they imagine themselves to be. I’ve watched guys who claim to think that everyone’s being overexcited by this controversy becoming profoundly overexcited themselves, and very much in a one-sided way against “games journalists”, “neckbeards”, “feminists”, “the media”, “social justice warriors” and so on. At around the one-hundredth post professing not to care very much about the whole thing, you have to turn in your “I don’t care” card. Most of them say, half-heartedly, that of course it’s bad to harass or issue death threats, with all the genuine commitment of Captain Louis Renault saying he’s shocked about the gambling in the backroom of Rick’s Cafe Americain. They usually go on to specify a standard for harassment that disqualifies anything besides Snidely Whiplash tying Penelope Pitstop to the railroad tracks, and a standard for “real death threats” that disqualifies anything that doesn’t end with someone getting killed for real.

I can’t quite say I’m shocked by these non-shocked people, but I have found myself deeply disturbed to see significant groups of formerly reasonable-signifying male posters in various forums accepting without much dissent sentiments of tremendous moral vacuity like, “If you post feminist criticisms of games, then you just have to expect to get harassed and attacked” or “Well, some guy on XBox Live threatened to rape me during a game of Call of Duty, you just shrug it off”. I’ve been wondering just how wrong I am about people in general online when I think the best of them, or how misguided I am to try to see the most interesting possibilities in how someone else thinks, if it turns out that when the crunch comes, the people I’ve thought would have their hearts in the right place are instead too busy frantically defending their right to download Jennifer Lawrence nudes to care about much else.

5) The assertion by many “Gamergate” posters that they represent the economic lifeblood of the gaming industry is just demonstrably wrong. And this is an old point that should have long since had a stake driven through its heart. The current criticism is focused on various indie games, which the gamergaters charge wouldn’t get any attention at all if “social justice warriors” weren’t promoting them. But the fact is, first, that the most economically successful games in the history of the medium have not been made with the sensibilities of the most devotedly “gamerish” game-players in mind. Moreover, the history of video and computer games is full of interesting work that didn’t cater to a narrow set of preferences. Today’s “indie games” have many precursors. Arguing for the diversification of tropes, models and mechanics is good for gaming in every possible way. It’s not that companies should stop making games for these “gamers”; it’s more that the major commercial mystery of the gaming industry is that so MANY games should be made for them, considering how much money there is to be made when you make a good game that appeals to other people too or instead. Maybe this is what accounts for the intensity of the reaction right now: that we are finally approaching the moment where games will be made by more kinds of people for more kinds of people. Fan subcultures are often disturbingly possessive about the object of their attachment, but this has been an especially ugly kind of upswelling of that structure of feeling.

6) Many of the most strident gamergate voices are bad on gender issues but they’re also just a nightmare in general for everyone involved in game development (except for when they ARE game developers). These are the guys who hurl email abuse and death threats when they don’t like the latest patch, when they think a game should be cheaper (or free), when they have a different idea about what the ending to a game should be, when they don’t like a character or the art design or a mechanic. These are the people who make most games-related forums a cesspool of casually-dispensed rhetorical abuse. These are the people who make it a near-religious obligation to crap on anything new and then to be self-indulgently amused by their own indiscriminate dislike. So much of the fun–the enchantment–of gaming has already been well and truly done in by gamergaters in other ways: they have destroyed the village they allegedly came to save. Much of what they do now is a bad dinner theater re-enactment of the anti-establishment sentiments of an earlier digital underground, one that elevates some of the troubling old tendencies and subtexts into explicit, exultant malice.

History 82 Fall 2014 Syllabus
Mon, 18 Aug 2014

Here’s the current version of the syllabus for my upcoming fall class on the history of digital media. Really excited to be teaching this.

———————

History 82
Histories of Digital Media
Fall 2014
Professor Burke

This course is an overly ambitious attempt to cover a great deal of ground, interweaving cultural histories of networks, simulations, information, computing, gaming and online communication. Students taking this course are responsible first and foremost for making their own judicious decisions about which of many strands in that weave to focus on and pursue at greater depth through a semester-long project.

The reading load for this course is heavy, but in many cases it is aimed at giving students an immersive sampler of a wide range of topics. Many of our readings are both part of the scholarship about digital culture and documents of the history of digital culture. I expect students to make a serious attempt to engage the whole of the materials assigned in a given week, but engagement in many cases should involve getting an impressionistic sense of the issues, spirit and terminology in that material, with an eye to further investigation during class discussion.

Students are encouraged to do real-time online information seeking relevant to the issues of a given class meeting during class discussion. Please do not access distracting or irrelevant material or take care of personal business unrelated to the class during a course meeting, unless you’re prepared to discuss your multitasking as a digital practice.

This course is intended to pose but not answer questions of scope and framing for students. Some of the most important questions that we will engage are:

*Is the history of digital culture best understood as a small and recent part of much wider histories of media, communication, mass-scale social networks, intellectual property, information management and/or simulation?

*Is the history of digital culture best understood as the accidental or unintended consequence of a modern and largely technological history of computing, information and networking?

*Is the history of digital culture best understood as a very specific cultural history that begins with the invention of the Internet and continues in the present? If so, how does the early history of digital culture shape or determine current experiences?

All students must make at least one written comment per week on the issues raised by the readings before each class session, at the latest on each Sunday by 9pm. Comments may be made either on the public weblog of the class, on the class Twitter feed, or on the class Tumblr. Students must also post at least four links, images or gifs relevant to a particular class meeting to the class Tumblr by the end of the semester. (It would be best to do that periodically rather than all four on December 2nd, but it’s up to each of you.) The class weblog will have at least one question or thought posted by the professor at the beginning of each week’s work (e.g., by Tuesday 5pm.) to direct or inform the reading of students.

Students will be responsible for developing a semester-long project on a particular question or problem in the history of digital culture. This project will include four preparatory assignments, each graded separately from the final project:

By October 17, a one-page personal meditation on a contemporary digital practice, platform, text, or problem that explains why you find this example interesting and speculates about how or whether its history might prove interesting or informative.

By November 3, a two-page personal meditation on a single item from the course’s public “meta-list” of possible, probable and interesting topics that could sustain a project. Each student writer should describe why they find this particular item or issue of interest, and what they suspect or estimate to be some of the key questions or problems surrounding this issue. This meditation should include a plan for developing the final project. All projects should include some component of historical investigation or inquiry.

By November 17, a 2-4 page bibliographic essay about important materials, sources, or documents relevant to the project.

The final project, which should be a substantive work of analysis and interpretation, is due by December 16th.

Is Digital Culture Really Digital? A Sampler of Some Other Histories

Monday September 1
Ann Blair, Too Much to Know, Introduction
Hobart and Schiffman, Information Ages, pp. 1-8
Jon Peterson, Playing at the World, pp. 212-282
*Adrian Johns, Piracy: The Intellectual Property Wars From Gutenberg to Gates, pp. 1-82
Tom Standage, The Victorian Internet, selection

Imagining a Digital Culture in an Atomic Age

Monday September 8
Arthur C. Clarke, “The Nine Billion Names of God”, http://downlode.org/Etext/nine_billion_names_of_god.html
Ted Friedman, Electric Dreams, Chapter Two and Three

Film: Desk Set
Colossus the Forbin Project (in-class)
Star Trek, “The Ultimate Computer” (in-class)

Monday September 15
Vannevar Bush, “As We May Think”, http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/
Paul Edwards, The Closed World, Chapter 1. (Tripod ebook)
David Mindell, “Cybernetics: Knowledge Domains in Engineering Systems”, http://21stcenturywiener.org/wp-content/uploads/2013/11/Cybernetics-by-D.A.-Mindell.pdf
Fred Turner, Counterculture to Cyberculture, Chapter 1 and 2
Alex Wright, Cataloging the World: Paul Otlet and the Birth of the Information Age, selection

In the Beginning Was the Command Line: Digital Culture as Subculture

Monday September 22
*Katie Hafner, Where Wizards Stay Up Late
*Steven Levy, Hackers
Wikipedia entries on GEnie and Compuserve

Film: Tron

Monday September 29
*John Brunner, The Shockwave Rider
Ted Nelson, Dream Machines, selection
Pierre Levy, Collective Intelligence, selection
Neal Stephenson, “Mother Earth Mother Board”, Wired, http://archive.wired.com/wired/archive/4.12/ffglass_pr.html

Monday October 6
*William Gibson, Neuromancer
EFFector, Issues 0-11
Eric Raymond, “The Jargon File”, http://www.catb.org/jargon/html/index.html, Appendix B
Bruce Sterling, “The Hacker Crackdown”, Part 4, http://www.mit.edu/hacker/part4.html

Film (in-class): Sneakers
Film (in-class): War Games

FALL BREAK

Monday October 20
Consumer Guide to Usenet, http://permanent.access.gpo.gov/lps61858/www2.ed.gov/pubs/OR/ConsumerGuides/usenet.html
Julian Dibbell, “A Rape in Cyberspace”
Randal Woodland, “Queer Spaces, Modem Boys and Pagan Statues”
Laura Miller, “Women and Children First: Gender and the Settling of the Electronic Frontier”
Lisa Nakamura, “Race In/For Cyberspace”
Howard Rheingold, “A Slice of Life in My Virtual Community”
Sherry Turkle, Life on the Screen, selection

Hands-on: LambdaMOO
Hands-on: Chatbots
Hands-on: Usenet

Monday October 27

David Kushner, Masters of Doom, selection
Hands-on: Zork and Adventure

Demonstration: Ultima Online
Richard Bartle, “Hearts, Clubs, Diamonds, Spades”, http://mud.co.uk/richard/hcds.htm

Rebecca Solnit, “The Garden of Merging Paths”
Michael Wolff, Burn Rate, selection
Nina Munk, Fools Rush In, selection

Film (in-class): Ghost in the Shell
Film (in-class): The Matrix

Here Comes Everybody

Monday November 3

Claire Potter and Renee Romano, Doing Recent History, Introduction

Tim Berners-Lee, Weaving the Web, short selection
World Wide Web (journal) 1998 issues
IEEE Computing, March-April 1997
Justin Hall, links.net, https://www.youtube.com/watch?v=9zQXJqAMAsM&list=PL7FOmjMP03B5v3pJGUfC6unDS_FVmbNTb
Clay Shirky, “Power Laws, Weblogs and Inequality”
Last Night of the SFRT, http://www.dm.net/~centaur/lastsfrt.txt
Joshua Quittner, “Billions Registered”, http://archive.wired.com/wired/archive/2.10/mcdonalds_pr.html
A. Galey, “Reading the Book of Mozilla: Web Browsers and the Materiality of Digital Texts”, in The History of Reading Vol. 3

Monday November 10

Danah Boyd, It’s Complicated: The Social Life of Networked Teens
Bonnie Nardi, My Life as a Night-Elf Priest, Chapter 4

Hands-on: Twitter
Hands-on: Facebook
Meet-up in World of Warcraft (or other free-to-play virtual world)

Michael Wesch, “The Machine Is Us/Ing Us”, https://www.youtube.com/watch?v=NLlGopyXT_g
Ben Folds, “Ode to Merton/Chatroulette Live”, https://www.youtube.com/watch?v=0bBkuFqKsd0

Monday November 17

Eli Pariser, The Filter Bubble, selection
Steven Levy, In the Plex, selection
John Battelle, The Search, selection

Ethan Zuckerman, Rewire, Chapter 4
Linda Herrera, Revolution in the Era of Social Media: Egyptian Popular Insurrection and the Internet, selection

Monday November 24

Clay Shirky, Here Comes Everybody

Yochai Benkler, The Wealth of Networks, selection
N. Katherine Hayles, How We Think, selection
Mat Honan, “I Liked Everything I Saw on Facebook For Two Days”, http://www.wired.com/2014/08/i-liked-everything-i-saw-on-facebook-for-two-days-heres-what-it-did-to-me

Hands-on: Wikipedia
Hands-on: 500px

Monday December 1

Gabriella Coleman, Coding Freedom, selection
Gabriella Coleman, Hacker Hoaxer Whistleblower Spy, selection
Andrew Russell, Open Standards and the Digital Age, Chapter 8

Adrian Johns, Piracy, pp. 401-518

Hands-on: Wikileaks

Film: The Internet’s Own Boy

Monday December 8

Evgeny Morozov, To Save Everything, Click Here
Siva Vaidhyanathan, The Googlization of Everything, selection
Jaron Lanier, Who Owns the Future? , selection

Lashed to the Rack, or the Ideology of Incremental Improvement
Mon, 30 Sep 2013

I was walking through a poster session at a conference on educational and learning games some years ago and came across two very nice presenters who had created a game for students who were planning to study abroad in countries where there was endemic malaria. The game was designed to teach the players the importance of taking anti-malarial medicine with strict adherence to the prescribed regimen. I asked, “Isn’t this a rather elaborate way to communicate a simple message? Why go to the trouble of making a game?” Well, they said, their institution had done a variety of things over the years to encourage students to follow the directions and had succeeded over time in reducing the number of students who failed to do so, but there was a small remaining contingent who were still not complying and of them, a few actually contracted malaria. These two presenters (I recall that one was a faculty member, the other an administrator) decided that a game might help them close the gap further.

The game itself seemed hugely unfun, in the manner of so many learning and “serious” games. It also seemed as if it took a good bit of effort to create. I wondered: why not just accept that there are a few students who will not take their medicine and a few out of the few who will get malaria? When would their efforts to educate students be good enough such that they could simply keep doing what they’d been doing?

The answer in so many institutions is, “never”. Why not?

Perhaps in some small measure because the thought of even one student dying from malaria turns any effort to prevent that death into a moral obligation.

Perhaps in some small measure because simply repeating a training or educational exercise each year actually reduces the pedagogical effectiveness of the teachers: students can sense when something has become an obligation or a repetition, and they take the content less seriously when they sense that. Maybe always doing something slightly new or different is just a bit of professional craft that helps a teacher or trainer keep their edge.

But mostly I think this is what I would call the ideology of incremental improvement, a central dogma of the technocratic faith.

Incremental improvement is on one hand a way to ward off destabilizing or transformative arguments about the values and purposes of an institution: it coats the regularized processes of work with a layer of amber. The ideology denies that there could ever be a deep or fundamental branching point where an institution actually could make dramatic progress towards its stated goals by changing some fundamental aspect of its everyday workings. All progress in this view is small and constant, towards well-understood objectives using fixed or steady methods.

Incremental improvement is white-collar productivism. It’s driven by the proposition that a worker must always be more efficient and more productive, always making more of the same outputs through a constant intensification of the inputs. It’s also about carving out a stable niche in the institutional ecosystem for a supervisory class whose mission is to shepherd incremental improvement. As with most technocratic conventions, it is at least partially about defending the self-interest of the technocrats. Who can secure incremental improvement? Not the people whose practices are being improved. Only those who define the targets and specify the technical measures by which one moves steadily towards those targets can secure incremental improvement: it takes someone outside and above the labor that is being improved. Incremental improvement is a mutation of Taylorism, except that it stresses mind and affect rather than efficiencies of bodily movement.

Incremental improvement is one of several things that I think are fundamentally wrong with the current approach to assessment in both K-12 and higher education.

There is nothing wrong and everything right with a teacher or an institution performing a self-examination and asking whether the deliberate commitments and practices of instruction are improving the knowledge, performance, creativity or self-realization of students. It’s particularly important for selective private institutions to do this kind of assessment, because otherwise they might just assume (and have often assumed) that they’re doing everything right when it’s possible that the only thing they need to do is recruit the very best students and have a lot of money in the bank. Assessment is the cure for smugness and unexamined privilege.

But you don’t go to your doctor for an annual check-up to get 1% better in every aspect of your health every year. You go to make sure you don’t have serious health problems building up and to make sure that your existing medication and therapeutic practices are working as intended. At least some of assessment is about the maintenance and fine-tuning of highly functional or successful practices.

And at least some of assessment should be about framing major choices about values or philosophy. Sometimes improvement isn’t about increments: sometimes it happens by leaps and bounds. And sometimes big changes aren’t improvements or degradations per se: they’re just change.

Most importantly, incremental improvement fundamentally doesn’t work for those values and practices that by nature are not incremental and measurable. Near the end of the classic The Phantom Tollbooth, the protagonist Milo confronts a series of “demons of ignorance”, one of whom is the Terrible Trivium.

[Image: the Terrible Trivium, from The Phantom Tollbooth]

The Trivium attempts to ensnare Milo by asking him to move a pile of sand grain by grain. Some of the things we value in our teaching are only perceptible and imaginable as unbroken wholes, and some are only important as pervasive but unseen spirits. Incremental improvement denies both of those kinds of values. They’re unmeasurable, unimprovable, undefinable. If you’re dealing with an incremental improver, he or she might not deny you the right to think you are working towards these kinds of goals, but he or she is going to ignore you when you do. And so bit by bit–incrementally, even–whole institutions eventually move all their resources towards picking up grains of sand one by one, towards eliminating the last student who doesn’t take his malaria medication, towards producing 1% more empathy in a history major.

Stagnation
Wed, 03 Apr 2013

By now, I think everyone knows that the new Sim City is a flaming car wreck, the gaming equivalent of Ishtar or Hudson Hawk, the kind of misfire that raises serious questions about its corporate creator and the entire industry.

But it’s only one of many so-called “AAA” titles in gaming in the last year to raise those questions. Even products that appear successful or well-regarded document the consistent aesthetic underperformance of the most expensive, lavishly produced work the gaming industry is selling–and I think that underperformance is in turn severely limiting the potential commercial and cultural impact of games. AAA titles are increasingly easy for everyone outside of a small, almost entirely male, subculture to ignore. The only games that really matter to the culture as a whole right now are either independent products like Minecraft or light “casual games” that are mostly played on mobile platforms. (A further symptom of the cluelessness of the industry is that many developers therefore conclude that it’s the platform, not the game itself, that consumers prefer, so they just move “it” to mobile, whatever “it” might be.)

Two examples of failures that the gaming subculture has anointed as successes: Far Cry 3 and Bioshock Infinite.

Far Cry 3 is ostensibly an “open world” game, a form that at its best is one of the most powerfully distinctive ways that a digital game can be unlike any other modern media text or performance. It’s a very hard form to produce, demanding a huge amount of content, a lot of QA testing, and flexible, responsive AI scripting. Small wonder that most studios steer clear of the form, or falsely claim to have done it.

Far Cry 3‘s first problem is just that: it’s not really an open world. The player can in theory linger wherever he wants and do what he wants. But then it turns out that there’s a whole set of hidden “gates” throughout the gameworld that require the player to progress through the plot, to watch the cutscenes, to do what’s required of them by the developer. In a genuine open world, the player can go almost anywhere or do almost anything and progress the narrative at her discretion, which might open up a few new locations or new content, but as a supplement to the general environment rather than one more step in a linear sequence.

Progressing through the content wouldn’t be so bad in Far Cry 3 if the content weren’t so bad. And here we hit the second problem that is far more general to the industry: that the writing is not only for a very particular subculture of very particular men, it is largely BY that same subculture. Far Cry 3 is an almost laughably bad pastiche of racialized cliches: it makes Cameron’s Avatar seem like the most sophisticated postmodern rethinking of those tropes by comparison. What is worse both in narrative and gameplay terms is that the player is forced into inhabiting the subject position of one of those cliches. Rather than playing a cipher who simply witnesses and traverses the setting or a specific character who has an alienated or sideways relation to the gameworld, you have to be a character whose arc goes from “spoiled wealthy white American frat boy” to “low-rent dudebro version of John McClane killing dark-skinned drug dealers and bandits on a tropical island” to “white messiah saving natives”. You have to listen to “yourself” saying painfully stupid things throughout the entire game while saving painfully stupid friends. Perhaps worst of all for an allegedly open-world game, your character is frequently forced to do dumb things: walk into traps and trust the untrustworthy. (Plus you end up in QTEs for a number of key plot resolutions, which is like adding an extra turd to the shit sundae.) Far Cry 3 wants to be Grand Theft Auto but no one making it had an ear for the Rockstar aesthetic: all of the “interesting” people your character deals with make GTA IV‘s Roman Bellic seem like a soothing, well-balanced presence by comparison. The only people who could possibly enjoy Far Cry 3 for its diegetic elements are the narrow demographic that wrote the game and that identify with the protagonist.

What especially annoys me (and quite a few other commenters on digital games) is that the head writer, Jeffrey Yohalem, shrugs all the criticism of the narrative and content off because he claims it’s all meant to be stupid. Yes, a graduate of the English Department at Yale University is deploying the argument that he is subverting racist tropes by making them so enragingly stupid that they force players into a Brechtian relation with the game’s text, alienating them from the narrative “skin” lying over the gameplay structure. Or alternatively, that the game’s content seems racist because he’s forcing you to consume that content through the perspective of the young, naively racist protagonist, thereby forcing a confrontation with his subjectivity. If there’s anything that this argument makes me have Brechtian feelings towards, it’s whatever body of cultural theory Yohalem thinks he’s deploying in good faith to make this bad faith apologia for a clumsy example of what’s wrong with a lot of AAA games.

In contrast, Bioshock Infinite has a very well-imagined and literate conceptual and visual setting, which has led a ton of middlebrow game critics to raid the thesaurus looking for sufficient quantities of superlatives. Middlebrow criticism of popular genres and forms, particularly geeky ones, is always poised with a certain undertone of desperation to try and convince mainstream cultural critics that they too are dealing with art, or at least the potential for art.

The problem with Bioshock Infinite, which takes place in an alternate-history version of an early 20th Century American experiment in communal living, in this case a city in the sky defined by racial purity and evangelical Christianity, is that it isn’t much of a game. It’s described as a first-person shooter by most critics, but it’s largely a visually and narratively sophisticated reprise of an almost-dead genre of game, the “adventure game”, whose best-known example even today is the game Myst. Much of Bioshock Infinite consists of wandering in static environments clicking on objects to find out whether they will give you objects or money that you can use later or objects which provide more narrative details about the gameworld and the situation. This experience is periodically interrupted by combat setpieces where your character dispatches small squads of local law enforcers and by periodic dialogues with your companion Elizabeth, who is the other keystone to the eventual resolution of the plot. (The first being your own character.)

But just as Far Cry 3 forces you to endure your character’s cluelessness, Bioshock Infinite creates a very strange hybrid point-of-view and locks you into it. Your character knows things you do not know: about his past, about his motivations, about the events that set the game’s story in motion. Small details are revealed at first through environment and through your character’s occasional mutterings, then later by Elizabeth’s comments on you and your situation (and your subvocalized responses). But this creates a constant bizarre and uncomfortable tension: you are controlling the actions of a character who treats some of what you regard as novel or mysterious as expected or known, or who is blase and indifferent about some of what you (the player) find interesting, engaging or infuriating about the world of Columbia (the flying city).

Moreover, the gameworld only looks like it is three-dimensional. Like its two predecessors, Bioshock Infinite is the quintessential “roller-coaster ride”: there is almost nothing that actually turns on your choices or actions as a player, almost every environment can only be traversed in one way even if it looks like there are multiple pathways through it. You can choose how you want to kill your enemies–shoot them, burn them, rip them up with a whirling saw. You can choose whether to look at all the extra narrative content provided–none of it is needed to progress, but since it’s the game’s major virtue, why skip it? You can sort of choose whether to click on every barrel or crate to gather ammunition and food, but in normal mode, you could skip that and make it through easily enough. Otherwise you are herded through the experience of the game like a cow through one of Temple Grandin’s soothing kill chutes. If you die, nothing really happens; it’s not more than a momentary inconvenience. You can’t jump off Columbia even if you try. You can’t go in most of the storefronts or buildings. You can’t talk to people, just listen to them speak their one line of prerecorded atmospheric dialog. It is absolutely the essence of being on something like Mr. Toad’s Wild Ride in Disneyland. You stay inside the car, the environment swings around and beeps and bloops and moves and that’s it.

The consumer-side question Bioshock Infinite ends up posing is: why spend $50 to watch a 3-hour animated film that has some very good art design, one fairly engaging if rather stock supporting character, an interesting underlying setting, a “trick ending” straight out of the M. Night Shyamalan school of scriptwriting, and repetitively staged intervals of interactivity that very nearly amount to the 21st Century version of a William Castle cinematic gimmick? The distinctive affordances of the medium go largely unused, and there is little point to experiencing the game more than once.

At least Bioshock Infinite has an imaginative soul inside of it, unlike Far Cry 3. But it shows again how culture industries routinely miss the mark, not just or even mostly about artistic aspiration but about economic potential of the forms, genres and technologies that they supposedly mobilize to such fearsomely profitable effect. It may be that Bioshock Infinite or Far Cry 3 will make money for their producers, but the inefficiency of the relation between input and output in their cases ought to give anyone with an investment interest in the future of digital games serious pause. Particularly because the number of Sim Cities, unquestionable disasters, is also rather hard to ignore. Consumers don’t necessarily prefer casual games, mobile games or games like Minecraft because they don’t like long, intricate games that take advantage of the medium’s distinctiveness. They just don’t want to waste their time and money on games that were written for 16-year old boys who spend most of their time texting misanthropic comments to other teenagers or on games that don’t really have any “game-like” qualities.

Hell Is Other Gamers (And Some Games)
Thu, 02 Aug 2012

Game developers talking about “culture” are often deeply frustrating. Either they are overly credulous about how design can directly and symmetrically create a particular set of cultural practices and outlook within a game, as my friend Thomas Malaby has observed about Second Life, or they see gamer culture as a hard-wired or predetermined result of cognitive structures and/or the wider culture of the “real world”. Only rarely do they hold both views, in a somewhat more nuanced but contradictory way: Raph Koster, for example, has at times argued that particular design features in games (say, the implementation of dancing and music in Star Wars: Galaxies) can create or transform cultural predispositions among players but also has argued in his Theory of Fun that gameplay and “fun” are driven by fixed cognitive structures and tendencies.

Developers tend to favor one of these two viewpoints because they either make the culture of play in a particular game something that they can design towards or they make it a fixed property that they have no power over, something they can imagine either completely controlling or being completely helpless to control, and in any event, something easy to summarize in a reductive, mechanical way. They’d rather have either of those than what the culture of play in a particular game really is: an emergent and contingent result of interactions between particular design features, the general cultural history of digital games and their genres, the particular sociological habitus of the players, and the interpretation of visual and textual elements within the game by different players (individually and in groups).

When Aris Bakhtanians said that sexual harassment was “part of the fighting-game community” he was, in a way, perfectly correct in an empirical sense. This is not to say that all or even most players of fighting games, even in competitive gaming, practice harassment of the kind Bakhtanians infamously displayed, but that sexual harassment and harassing attitudes are commonly witnessed or overheard in a great deal of online gaming, as are the harsh and infantile abusive responses flung at people who complain about such behavior or expression. The one truth sometimes spoken in such responses is that outsiders don’t really understand how such things get said or what they mean. Outside critics and designers alike would often prefer for “culture” of this kind to be easily traced to the nature of the game itself, either its semantic content or the structure of play, or for the culture of the game to be nothing more than a microcosm of some larger, generalized culture or cognitive orientation, an eyedrop of sexism or racism or masculine misbehavior in an ocean of the same. If that’s the case, either there’s something quite simple to do (ban, suppress or avoid the offending game or game genre) or the game is only one more evidentiary exhibit in a vastly larger sociopolitical struggle and not an issue in its own right.

Understanding any given game or even a singular instance of a game as “culture” in the same sense that we understand any other bounded instance of practice and meaning-making by a particular group of people, with all the unpredictable, slippery and indeterminate questions that approach entails, means that if you care about the game as an issue, you have to spend time reading and understanding the history and action of play around a particular game. The stakes are very much not just academic (are they ever?): certainly the viability of a particular game as a product in the marketplace hangs in the balance, sometimes an entire genre of game or an entire domain of convergent culture is at financial risk. But also at stake are the real human feelings and subjectivities of the players themselves, both within the game culture and in the ways that those identities and attitudes unpack or express in everyday life as a whole. If we’re going to argue that game cultures teach all sorts of interesting and useful social lessons, or lessons about systems and procedures (as we should) then we have to accept that some of the social lessons can be destructive or corrosive. Not in the simple-minded, witless way that the typical public complaint about violent or sexist media insists on arguing, sure, but we still have to ask what the consequences might be.

I sometimes identify myself as a “game culture native” who happens to express his views about games within scholarly discourse rather than a scholar drawn from outside to look at games. So in native parlance, one of the things that strikes me again and again when I play multiplayer games is that I find it extraordinarily painful to recognize that what I romantically imagine as a refuge for geeks is in fact horribly infested with the kinds of bullies that we were all trying to get away from back in the 1970s. When I first started playing computer and console games in the early 1980s, they enraptured me more than stand-up arcade games in part because you could play them privately in the home or in quiet computer labs on a device that you controlled, and communicate with others in-game largely at your own discretion or preference. They also tended to be more complex and slower than coin-op games and to derive much more of their themes and narratives from existing science-fiction and fantasy. The games themselves were a refuge, and their enabling technology was a refuge. Much of the same was true, at least for me, with pen-and-paper role-playing games. They were so derided and marginalized in the mainstream culture of my peers that I never felt any particular risk that some popular kid or hulking bully was going to show up in the middle of a gaming session and take my lunch money.

By the time that game culture spread more widely in the 1990s and 2000s, neither of these feelings held particularly well, and nowhere did I feel that more acutely than in commercial virtual-world games from Ultima Online onward. Suddenly here I was, exploring a dungeon and fighting monsters with a group of strangers, at least some of whom seemed pretty much like the kids who had shoved me into fences or kicked me in elementary and junior-high school. It wasn’t as personally threatening to me as a confident, secure adult but it was at the least depressing and repellent. The general Hobbesian malaise that these players brought to gameplay was seasoned by extraordinary forms of malevolent play that came to be called “griefing” and by an accelerating willingness to give uninhibited voice to crude sexual boasting, misogyny, racial hatred and gay-bashing. Sometimes, I ended up feeling that there wasn’t any real sentiment or deliberate feeling behind the braggadocio–at a certain cultural moment, calling something “gay” in gamer parlance really did feel to me as if it was a non-referential way to simply say something was dumb or annoying–but a lot of the time there was in fact real force and venom behind the words.

Over time, many of us learned to ignore much of this behavior as background noise or to use the increasingly responsive tools provided by developers to control exposure to obnoxious or harassing individuals. We played only with friends or trusted networks of people, we used /ignore tags in general chat to make it impossible to ‘hear’ offensive players, we didn’t play in games known to have particularly ugly or unpleasant internal cultures. We realized that some of the most offensive behavior and attitudes are basically adolescent transgressions against mainstream consensus. A griefer or troll doesn’t care what the semantic content of their griefing is, only that it bothers or angers someone, so the easiest way to deflate them is to ignore them. We learned that sometimes being offensive is also a competitive tactic, as it is in many sports or other games: being deliberately obnoxious can unbalance or obsess a competitor.

But it still gets to me sometimes personally. It’s just that doing anything about this cultural history is no easier than it is to do something about anything else “cultural”.

To give an example of the complexity, let me turn to World of Warcraft. I hadn’t played World of Warcraft in months: I’m bored by the game itself and I feel as if I’ve learned everything in a scholarly or intellectual sense that I can from its player culture. In the last week, I played a bit at my daughter’s urging. It was interesting up to the point that I went off to do some “daily quests” in an area called Tol Barad where players fight each other every two hours or so. The quests are standard WoW design: boring, repetitive, Zynga-like exercises whose completion gives the player a bit of money and a small gain in reputation with an in-game faction. At a certain point, the player will have enough reputation with that faction to purchase improved gear that will make the character more powerful. The repetition is somewhat soothing, a kind of gentle mindlessness, but to really progress through doing the quests, players have to do them every day for a substantial period of time. In this particular area, the daily quests are leavened by a battle between the players themselves. If your side wins, it gains access to another set of daily quests within the zone and to several areas of content for larger groups to complete together. If your side loses, you have no access to these quests until the next battle several hours later.

The battles are at least potentially fun and interesting, and a relief from collecting crocodile hides. So I hung around Tol Barad until the battle. World of Warcraft has over the years refined its formula for these kinds of battles. It now caps the total participants (to keep one side from being ridiculously dominant in numerical terms), it forces everyone to join a single large “raid group” (to make it easier for everyone to communicate and monitor their own side), and it offers mechanics that try to balance strategic choices, short-term tactical coordination and a reasonably even chance for both sides to win. My side in this case lost, partly because it was less coordinated. Ok, fine, it was still sort of fun. But as the loss became imminent, a torrent of abuse began to spill out through the raid group. A small number of players started shrieking about how bad everyone else was, what failures we all were, how we should be embarrassed to play the game, how we were a bunch of useless faggots and so on. Over a basically trivial part of the game that will be repeated again and again all day long. That’s pretty typical in WoW: the more you play and the more that your play associates you with strangers, the more you will see both extraordinarily poor behavior by individuals (that is often condemned by the consensus of a group) and generically poor behavior that is ignored or accepted as inevitable even though most people do not themselves participate in that behavior.

This surely limits both the numbers of people who might play WoW or any game like it and the comfort level of players within the game to participate in all the activities it offers. But consider how complicated both the genesis and consequences of this aspect of the game’s culture really are.

First, consider the evolution of “chat” as an expressive practice within virtual-world games. A game like WoW is shaped by a very long design history that goes back to non-commercial MUDs and MUSHs in which chat channels were the major way in which the game supported a sense of community or sociality within the game, and thus the expectation that such a game should be social. The sociality of WoW and other games like it is still a defining attribute, and is notoriously credited with keeping players as participants long after they’ve grown bored with the content. So you have to have chat. Whenever the designers of WoW have attempted to curtail “global” or large-scale chat that tends to expose the totality of the game’s culture to the worst expressive practices of its ugliest margins, players have typically managed to subvert their intentions and recreated a global or large-scale chat channel. Early commercial virtual-worlds spent much more time and money trying to police the semantic content of player expression, or tried to use filters to prevent offensive expression. Both efforts were easy to defeat, the first simply through volume and persistence, the second through linguistic and typographic invention. Attempts by players themselves to discourage or sanction offensive expression only have had force inside small social groups. A competitive guild can often impose restrictions on what its members do, booting a griefer or harasser. But such a player is simply expelled into the “general population”, and there’s always another guild around the corner that needs a member, or in WoW’s later evolution, a random pick-up group that will endure such a player for the short time that it must bear his or her company.

It’s not just the mechanics by which you say things, but what you’re doing that matters. Almost all of WoW’s gameplay involves the incremental accumulation of resources that will help players in the incremental accumulation of better resources. This is competitive in two ways: first, that a resource you gain is often a resource denied to someone else. Second, that your total accumulation of resources is read off into the game’s public culture as a status effect, sorting players into hazy hierarchies. These hierarchies are temporally unstable: no matter how powerful you are, each expansion of the game will render your previous power over the environment and your previous superiority to other players null and void. They are structurally unstable: Blizzard frequently tinkers with the game mechanics and may at some point put a given type of character at a substantial in-built disadvantage or advantage to others, regardless of how much they have accumulated or how skilled the player is in controlling a character’s actions. These hierarchies do not have an even symbolic meaning across the whole of the game’s culture. Some players never engage in competitive accumulation: a dedicated “casual” who plays with a small group of friends and a serious “hardcore” who plays with a large group of equally dedicated and intense players rarely intersect, rivalrously or otherwise. But the large “middle class” of the game are often competitive with both poles: needing casuals in order to carry out competitive acquisition, wanting parity with the hardcores. When a game is built around the rivalrous but incremental accumulation of resources, its very structure encourages certain forms of aggression, status-laden disdain, and attempts to suppress rivalrous action by any means necessary.

If you want a contrast, look at something like the sharing of creature designs in Spore. Spore wasn’t a terribly successful game, but it did create a fantastically successful player ecosystem in terms of people being highly motivated to create interesting designs and share them with as many people as possible. The fundamental structure of a game’s design influences the kind of sociality that appears within its culture, and it invites or fosters imagined alignments between a game culture and the wider culture. Incremental accumulation, social hierarchy and the strong desire of people at the “top” to have permanent structural separations between themselves and the plebeians who have to collect boar livers or file TPS reports? That’s a bridge for a lot of ugly sentiment and frustration to cross regularly between WoW and the world.

But then consider also the history of gamer sociology, or the movement between games, neither of which Blizzard is particularly responsible for or able to control. Even within virtual worlds, there are really bad neighborhoods and relatively anodyne ones. Sometimes by design. I actually accept and admire the ugliness of the internal culture of EVE Online: it has the same authorial intentionality (by both designers and players) that any other work of art set in an ugly or unpleasant aesthetic might. Toontown is light-hearted because of content, because of mechanics, and because it disables the sociality of players on purpose. Sometimes as an emergent, accidental evolution. I don’t think there’s any simple reason exactly why multiplayer game culture on Xbox Live should be as baroquely unpleasant and misanthropic as it is, but I simply won’t do anything multiplayer on that platform unless I absolutely have to for research. The worst I’ve experienced on WoW is nothing like what you’d hear in a really ugly session of a bunch of random strangers in a multiplayer shooter on XBLA. Gamer culture is and has been for a very long time leavened by young men who at their worst spew a lethal cocktail of nerdrage, bullying and slacker entitlement into conversational spaces, forcing other players to retreat, ignore or leave.

There is no simple instrumental pathway into that kind of “culture”: any attempt to change it by command is going to be useless at best and actively backfire at worst. Here game designers sometimes have good ideas: giving players tools to shape their socialities helps a lot. If being an “anonymous fuckwad” leads to increasing exclusion or marginality within a game culture, enforced by mechanics that players themselves control, then it takes much more deliberate agency to be a fuckwad. But if developers are going to consider giving players more agency over their own social practices and institutions, they also have to think about where their designs have become the equivalent of chutes herding cattle towards slaughter. The kind of operant conditioning that Blizzard has made the defining feature of MMO design, and which has been Zynga’s stock in trade, doesn’t encourage the growth of rich social worlds that can evolve and complicate. If you’re a farmer growing a monoculture, you don’t expect a forest–and you’re far more vulnerable to parasites and disease wiping out your crops.
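
For what a player-governed exclusion mechanic of that sort might look like in the abstract, here is a minimal sketch in Python. Everything in it (the class name, the thresholds, the three-tier standing) is a hypothetical illustration of the design idea, not any real game’s or platform’s moderation system.

```python
# A sketch of graduated, player-enforced exclusion: reports from distinct
# players shrink an account's social reach instead of banning it outright.
# All names and thresholds here are invented for illustration only.
from dataclasses import dataclass, field


@dataclass
class Reputation:
    reports: set = field(default_factory=set)  # IDs of distinct players who reported this account

    def report(self, reporter_id: str) -> None:
        self.reports.add(reporter_id)

    def standing(self) -> str:
        # Marginality is graduated and reversible rather than imposed top-down.
        n = len(self.reports)
        if n < 3:
            return "full"         # normal matchmaking and chat
        if n < 10:
            return "restricted"   # grouped mostly with other restricted accounts
        return "quarantined"      # opt-in lobbies only


rep = Reputation()
for reporter in ("p1", "p2", "p3"):  # three different players object
    rep.report(reporter)
print(rep.standing())  # -> "restricted"
```

The point of the sketch is only that the enforcement lives in the aggregate of player actions rather than in developer fiat; the real design work would be in the thresholds, the appeals, and how reports decay over time.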

The Work of Criticism https://blogs.swarthmore.edu/burke/blog/2012/01/20/the-work-of-criticism-2/ https://blogs.swarthmore.edu/burke/blog/2012/01/20/the-work-of-criticism-2/#comments Fri, 20 Jan 2012 17:56:37 +0000 https://blogs.swarthmore.edu/burke/?p=1872 Continue reading ]]> Jumping straight out of my Twitter feed about THATCamp Games, I want to work a bit more on a reaction I had to a morning panel on teaching games in a higher ed class.

I heard a pretty strong strain of thought that naturalized the proposition that the first thing to do with games in a class is to interrupt the activity of play, to stop the fun, to compel students to a critical attentiveness to the content and experience of a game. The student who knows how to play video games well was taken to be a sort of pedagogical enemy, both because they ‘split’ the instructors’ attention between the skilled player and the students who have never played and because the expert gamer was taken as a figure who actually has few or no critical thoughts about their consumption of games.

The problem of a class with split levels of preparation, competency, or cultural capital is a real one that comes up in much of higher education, so I don’t mean to belittle it. But because it’s so common, it might be better not to see it as specific or special to games, except in terms of who holds that expertise or cultural capital within a classroom.

But the idea of the expert gamer as a sort of idiot savant who doesn’t want to talk about games, doesn’t think about games as a critical subject, and who is having altogether too much fun with games to be trusted as a practitioner of criticism worries me. Here too I don’t think this construct is limited to games as a cultural form. There’s a mirroring construction in film and television studies, indeed, in the relation between most bodies and pedagogies of academic cultural criticism and the communities formed around and through cultural consumption. Literature professors often encounter and complain about the student who arrives in their classes with a professed ‘love of literature’. We sometimes come to see our job as grimly breaking those blithe spirits on the wheel of the hard labor of criticism and dismissing them from our company when they refuse to come into the quarry and break stone.

We set our teeth to this bit first because we hold dear the notion that criticism is work because it has work to do, that criticism has a function which requires training to perform, which is desperately needed as a part of the critical transformation (or preservation) of some wider sociocultural project, and towards which there will be opposition. A labor to learn, a labor to enact, a labor to endure.

We also do it because something which is fun, pleasurable or passionate seems an easy target for elimination within the academy, or indeed any contemporary institution with limited resources and a productivist sensibility. Yet it is against this sentiment particularly that humanists so often howl in protest in other ways, resisting the idea that what they do should ever be reduced to its naked, barren utilities. Why, then, it should be so urgent to disrupt, prevent or spoil the experience of culture when it seems passionate, pleasurable or fun is something of a mystery.

Nor do I think there is much sense in making the expert gamer, the romantic reader, the artist who creates for personal satisfaction, either an enemy of criticism or someone devoid of a critical faculty. “Expert gamers” engage in a great deal of criticism: it simply isn’t expressed in terms that are native to scholarly enterprise, nor is it often concerned with the things that earn academic critics their reputation capital. But there’s a lot of value in the discourse of expert gamers for academic critics, and I think academic critics would find that this door swings both ways: there are things expert gamers want to know that they would gladly look to scholarship to engage.

Move the Data Server-Side! Occupy Sanctuary! https://blogs.swarthmore.edu/burke/blog/2011/10/26/move-the-data-server-side-occupy-sanctuary/ https://blogs.swarthmore.edu/burke/blog/2011/10/26/move-the-data-server-side-occupy-sanctuary/#comments Wed, 26 Oct 2011 18:57:02 +0000 https://blogs.swarthmore.edu/burke/?p=1815 Continue reading ]]> Three things about Occupy, two short, one long.

1) Occupy is already a success if the model is to provoke reaction from its chief targets. It’s hard to imagine pundits passing up the chance to comment on anything: the 24/7 news cycle is a harsh taskmaster. Nevertheless, the number of surly, whiny or malicious commentaries as well as the dropping of any pretense of an ethos of objectivity from some reporters has been pretty striking. What’s more interesting is the extent to which active responses (as in Oakland) or threatened responses (as in New York City) from the powers-that-be have taken place. I honestly expected municipal and other authorities to just patronize and wait it out. I think there may be real anxiety inside the crony-capitalist/Washington nexus about the possible spread of mass protest or public discontent.

2) I’d continue to argue that there is a sociological limit in the current iteration of Occupy that mirrors similar limits in progressive electoral politics, and that this is where the reaction of Tea Party representatives has been instructive: they don’t want to explore the obvious connections and real overlaps between some of their rejection of the status quo and Occupy because they don’t like the sociological habitus of the people involved (a sentiment shared very much vice-versa). However, the single least interesting, least useful criticism of Occupy in circulation is that it lacks a concrete set of demands, that it needs some kind of concrete policy platform that politicians could adopt. This misses the point in every way possible. First, Occupy’s critique can’t be boiled down into something like “Pass a new version of Glass-Steagall”; the real issue is “Why did we get rid of sensible governance and guardianship of that type in the first place, and why can’t we have it back now?” You can’t solve our current situation with the passage of some laws if the institutions charged with implementing them will subvert, ignore or supersede those laws. You can’t solve our current situation if the next regulation you create will promptly be evaded or mocked by those it was intended to regulate. (Bank of America’s debit-use charge, I’m looking at you.) It’s the system that’s broken: you don’t solve systemic failure with a five-point legislative plan. Demands in this context have to be something more like, “Unelect everyone and comprehensively reform the process of electing a new group of representatives and leaders, expect accountability in both economic and political life and set real consequences for the failure of that expectation, make transparency in both business and government one of the sacred watchwords of a democratic society”. Maybe Occupy needs more of a boiled-down, two-sentence root-level philosophy or viewpoint (parity with something like “down with big government”) but it doesn’t need a set of demands that the political-financial complex can promptly ignore or play pointless legislative shell games with.

3) I think Matt Taibbi provides as good a “root-level philosophy” as you can ask for: that Occupy is not against wealth, is not against competition, is not against business, is not against banking. It’s a very specific argument that the game as it stands is rigged, that the cheaters are being allowed to operate with impunity, that the safeguards against cheating are compromised, and that the cheats are running the risk of destroying the game itself.

As my readers and colleagues know, I’m hopelessly addicted to analogies and metaphors. Here let me try an analogy that I don’t think is particularly metaphorical, that is in fact quite directly applicable to this situation: the history of the computer game Diablo II.

The game was a huge commercial success and initially supported a large, thriving and heterogeneous multiplayer community where the range of participation went from casual players who played few other games (online or otherwise) to dedicated, hardcore players with long experience in a variety of gaming genres and forms.

Diablo II allowed players to trade magical items obtained through play, as well as to compete with one another in various ways. It was consequently one of the first multiplayer games to generate an unplanned real-money transaction (RMT) market, as players offered desirable items to other players in return for cash payments through various third-party venues. This being a fairly new kind of thing at the time, neither the player community nor the game’s producer really anticipated what would follow. Initially, crucial data about characters was kept client-side, and so was relatively easy to hack. At first, only a small number of players used cheats in order to gain an edge in RMT transactions. At that point, the game’s multiplayer ecosystem was still relatively healthy: a large number of customers, a small number of cheaters. Arguably the cheaters may even have helped a bit by introducing highly desirable duplicates of items at a faster rate into the multiplayer economy. In short order, however, the ease of cheating, created mostly by a lack of governance and control over the playing environment on the part of the game producer, devastated the multiplayer community. Items lost all value as they were illicitly duplicated in massive quantities, and any sense of genuine competition between players evaporated as cheats proliferated. In the end, the cheaters were left to prey on each other, an activity which defines “diminishing returns”.
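
To put the architectural point in rough code terms: when the canonical record of an item lives on the client, the server cannot tell an original from a copy; when the server owns the record, a duplicated reference simply fails to trade. Here is a minimal, hypothetical sketch of that contrast; it is not Diablo II’s or Battle.net’s actual design, and every name in it is invented for illustration.

```python
# Contrast between trusting client-reported inventories and keeping the
# canonical item records server-side. Hypothetical illustration only.
import uuid


class ClientTrustingServer:
    """Accepts whatever inventory the client reports: a dupe is just the
    same item reported twice, and the server has no way to know."""

    def accept_inventory(self, reported_items: list[str]) -> list[str]:
        return reported_items  # nothing distinguishes a copy from an original


class AuthoritativeServer:
    """Owns the canonical item records; clients only hold references."""

    def __init__(self) -> None:
        self._items: dict[str, str] = {}  # item_id -> current owner

    def mint_item(self, owner_id: str) -> str:
        item_id = str(uuid.uuid4())  # every legitimate item has one identity
        self._items[item_id] = owner_id
        return item_id

    def trade(self, item_id: str, seller: str, buyer: str) -> bool:
        # A trade succeeds only if the seller owns the one canonical copy.
        if self._items.get(item_id) != seller:
            return False  # forged or duplicated reference
        self._items[item_id] = buyer
        return True


server = AuthoritativeServer()
sword = server.mint_item("alice")
print(server.trade(sword, "alice", "bob"))    # True
print(server.trade(sword, "alice", "carol"))  # False: alice no longer owns it
```

Moving the data server-side does not make cheating impossible, but it relocates the point of governance to a place the operator actually controls, which is the analogy the rest of this post leans on.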

Open cheating, or cheating which proliferates in the absence of governance and enforcement, is ultimately not even in the interests of the cheaters. But once a socioeconomic system moves headlong in that direction, its acceleration towards generalized disaster can be exponential. Cheaters themselves cannot be expected to stop that movement even if they understand that it’s not in their own interests, because they’ve specialized their economic activity to take advantage of cheats. The biggest hackers of Diablo II when it was at the tipping point probably couldn’t have played the game even marginally well if denied access to their hacks: the game had become about hacking at that point, and about the incomes they could obtain from doing so. When the prey left and the cheats became more difficult, the cheaters just went looking for some other racket. A parasite at some point can become too specialized in its reliance on a complex vector and on the ecology of a particular host: if, through its own efficient depredation or in concert with other stresses, it kills too many hosts, the parasite can’t undo its evolution. At some point in the 1990s, a fraction of financial capitalism became so dependent upon subverting or unraveling safeguards and so expectant of a level of profit obtained through government-protected market manipulation that it became effectively unable to back off and seek some more stable equilibrium–and its political partners became the same. The idea that Goldman Sachs in the last decade represents “the free market” is as laughable as saying that the 19th-century railroad industry in the US was a laissez-faire triumph: in both cases, plutocracy was secured through and within the state rather than in the absence of it.

Stopping that isn’t a matter of a policy here or a single bugfix there. It’s about a comprehensive change to the paradigm. It’s about the government of the people, by the people, for the people, not perishing from this earth.

Out, Out Damned Spot https://blogs.swarthmore.edu/burke/blog/2011/08/01/out-out-damned-spot/ https://blogs.swarthmore.edu/burke/blog/2011/08/01/out-out-damned-spot/#comments Mon, 01 Aug 2011 19:54:45 +0000 https://blogs.swarthmore.edu/burke/?p=1678 Continue reading ]]> Is there anything more grating than an interpretation whose language slips and innocently anoints its analysis with the status of a fact?

I’m sure I noticed this pattern in the letters to the editor in this week’s New York Times Book Review because they were complaining about Laura Kipnis’ review of Maggie Nelson’s The Art of Cruelty.

Kipnis’ review started off with a wonderfully bracing slap to that most tedious kind of middlebrow NPR-listening muddled complaint against mass culture: “Well-meaning laments about violence in the media usually leave me wanting to bash someone upside the head with a tire iron. To begin with, the reformist spirit is invariably aimed down the rungs of cultural idioms, at cartoons, slasher films, pornography, rap music and video games, while the carnage and bloodletting in Shakespeare, Goya and the Bible get a pass.” Kipnis continues, “Low-culture violence coarsens us, high-culture violence edifies us. And the lower the cultural form, or the ticket price, or — let’s just say it — the presumed education level of the typical viewer, the more depictions of violence are suspected of inducing mindless emulation in their audiences, who will soon re-enact the mayhem like morally challenged monkeys, unlike the viewers of, say, ‘Titus Andronicus,’ about whose moral intelligence society is confident.”

If I could fit that on a tattoo, I’d get it put on my arm, just to save time the next time I want to say roughly the same thing, which my friends and colleagues can tell you is about once a day.

It’s just about as predictable that after saying it, you can expect some kind of rebuke from purveyors of the conventional wisdom, often one that speaks past rather than to the original critic.

When I’ve been on panels about media-effects arguments, I’ve always been a bit amused at the gentle chaos that articulating a critique like Kipnis’ tends to sow among researchers or audience members who follow the standard line. They’re ready for dramatic self-righteousness if by some chance an executive or producer from the culture industry should happen to show up and disagree, but not for zooming off in a more perpendicular direction, such as a more academic dismantling of the methodology or conclusions of long-standing media-effects work, or Kipnis’ point about how much criticism of violence in mass media is rather open in its pimping for high-culture snobbery.

As an example of what that gentle chaos can lead to, Josephine Hendin’s response to Kipnis is a prime instance of the aforementioned rhetorical transposition of an act of interpretation with a statement of a fact. Moreover, because Hendin is talking about violence, art and popular culture, she does a pretty fair job in two paragraphs of demonstrating why there was a scholarly revolt against limiting the subject of literary study to high-culture works.

Hendin complains that Kipnis “does not clearly distinguish” between valuable artistic uses of violence and “shock value”. I’m sorry, were literary critics the people who were supposed to be especially skilled at close reading? Because as a starting observation, this leaves me a bit confused. Kipnis starts off her book review rather clear on this point: she thinks this distinction is bollocks. So perhaps Hendin meant to say, “I don’t agree with Kipnis: I’m going to argue that there is a distinction”. See, speaking of distinction, I think there’s one between saying, “I don’t agree with you” and “you didn’t make my argument and made your own instead, so I think you’re being unclear”.

The rest of the letter has the same problem: interpretations are converted by some invisible table into empirical data. I understand, it’s a two-paragraph letter, and not a monograph. But it’s not that hard to find monographs by literary critics that make the same rhetorical slip for hundreds of pages, refusing to characterize or imagine a claim as an interpretation and instead stating it as something which is. “Much of pop culture is about endemic desensitization to anything but the action of violence”. Much? Well, what have you got in mind? Tomb Raider and Andy Warhol, really? Not what I’d call major foundation stones of contemporary popular culture, but that’s how these arguments usually work: highbrow critics and audiences reach out desperately for the one or two pop culture texts or properties that they have some paratextual familiarity with, maybe from a panel four years ago at the MLA or from their teenage child’s unrefined cultural consumption.

“Does not clearly distinguish” is of a rhetorical piece with some of my least favorite repeated phrases in undergraduate papers. For example, the venerable favorite: that the author of a text “forgot” to make an important point in that work. For some reason, my students think this is a gentler, fuzzier way to say that the author is wrong on some important point, while also hoping that it will keep me from noticing that they don’t really have a fully worked-out understanding of what is wrong with the author’s argument. What I point out to my students is that this is both a more condescending characterization than simply saying that they disagree with the text (I’d rather be argued with than have it insinuated that I didn’t do my work properly) and that it calls attention to, rather than disguises, a lack of command over the issues.

I agree that direct and declarative language is a good thing, whatever the length of an analysis. But it’s important to use language that always recalls what interpretation really is, and what it’s not. One of the requirements of that language is self-awareness. By all means generalize, but know that it’s you that’s doing it.

Blizzard Is CLU https://blogs.swarthmore.edu/burke/blog/2011/01/13/blizzard-is-clu/ https://blogs.swarthmore.edu/burke/blog/2011/01/13/blizzard-is-clu/#comments Fri, 14 Jan 2011 00:44:39 +0000 http://weblogs.swarthmore.edu/burke/?p=1422 Continue reading ]]> x-posted to Terra Nova

I don’t understand why Tron: Legacy has come in for so much critical abuse. I like it as much as my colleague Bob Rehak does. Just taken as an action film, it’s considerably more entertaining and skillful than your usual Michael Bay explosion fest, with set-pieces a good deal more exciting than its predecessor’s. However, like the original Tron, the film also has some interesting ways of imagining digital culture and digital spaces, and more potently, some subtle commentary on the imaginative failures of the first generation of digital designers.

Some critics seemed disappointed that the film takes place in a closed system, the Grid, created by Jeff Bridges’ Kevin Flynn, expecting it to ape the original film’s many correspondences between its virtual world and the technology of mainframe computing and early connectivity. In the original Tron, once Kevin Flynn finds himself inside the world of software and information, he meets embodied programs that correspond to actual software being used in the real world, he has a companion “Bit” who can only communicate in binary, he has to make it to an I/O tower so that the program Tron can communicate with his user, and so on. Critics seemed to expect that Kevin Flynn’s son would be transported inside a world built on the contemporary Internet, that he would venture from Ye Olde Land of Facebook on a Googlemobile past some pron-jpg spiders scrambling around the landscape of Tumblr and then catch a glimpse of the deserted wasteland of Second Life.

The director wisely avoided that concept, but I nevertheless think the film is in fact addressing at least one “real” aspect of contemporary digital culture. Kevin Flynn, trapped inside the Grid for more than a decade, discovers that his basic aspirations in creating a virtual world of his own were fundamentally misdirected. He sets out to build a private, perfect world populated by programs of his own design. The complexity of the underlying environment that he creates turns out to be a “silicon second nature” that spontaneously generates a form of a-life that uses some of what he’s put into the environment but that also supersedes his designs and his intentions. Too late, he realizes that the unpredictability of this a-life’s future evolution trumps any aspiration he might have had in mind for his world. Too late because his majordomo, a program of his own creation, modeled on himself, called Clu, stages a coup d’etat and continues Flynn’s project to perfect the world by eliminating contingency, unpredictability, organicism, redundancy. In exile, Flynn realizes that the most perfect thing he’s ever seen is imperfect, unpredictable life itself: the son he left behind, the life of family and community, and the life he accidentally engendered within a computer-generated world.

Whether the analogy was intended or not, that narrative strikes me as a near-perfect retelling of the history of virtual world design from its beginnings to its current stagnant state. The first attempts to make graphically-based persistent virtual worlds as commercial products, all of them built upon earlier MUD designs, sometimes made a deliberate effort to have a dynamic, organic environment that changed in response to player actions (Ultima Online’s early model for resource and mob spawning). But even products like Everquest and Asheron’s Call offered environments which could almost be said to be shaped by virtual overdetermination: underutilized features, half-fleshed mechanics, sprawling environments, stable bugs and exploits that gave rise to entire subcultures of play, all contributing to worlds where the tangle of plausible causalities made it difficult or impossible for either players or developers to fully understand why things happened within the gameworld’s culture or what players might choose to do next.

Some of the next generation of virtual worlds, such as Star Wars: Galaxies, ran into these dynamics even more acutely. Blizzard, on the other hand, launched World of Warcraft with a clear intent to make a persistent-world MMO that was more tractable and predictable as well as one that had a more consistent aesthetic vision and a richer, more expertly authored supply of content.

That they succeeded in this goal is now obvious, as are the consequences of their success: other worlds have withered, faded or failed, unable to match either the managerial smoothness or content supply offered by Blizzard. Those that remain are either desperately trying to reproduce the basic structure of WoW or have moved towards cheap, fast development cycles and minimal after-launch support with the intent to make a profit from box sales alone, in the model of Cryptic’s recent products.

With the one major exception, as always the lone exception, of Eve Online. In terms of Tron: Legacy, Eve is the version of the Grid where the a-life survived. Though in the film the a-life that appears, the isomorphic algorithms, is said to be innocent, creative, imaginative, the moral nature of Eve’s organic, undesigned world is infamously rather the opposite.

But what Eve proves has also been proven by open-world single player games like Red Dead Redemption or the single-player version of Minecraft: many players crave unpredictable or contingent interactions of environment, mechanics and action. In RDR, if you take a dislike to Herbert Moon, the annoyingly anti-semitic poker player, you can go ahead and kill him, in all sorts of ways. He’ll be back, but more than a few players found some pleasure in doing their best to get rid of him in the widest range of creative ways. You can solve quests in ways that I’m fairly sure the designers didn’t anticipate, using the environment and the mechanics to novel ends. You can do nothing at all if you choose, and the world is full of things to do nothing with.

Open-world single-player games allow a range of interactions that Blizzard long since banished from the World of Warcraft. In the current expansion of WoW, I spent a few minutes trying to stab a goblin version of Adolf Hitler in the face rather than run quests on his behalf, even knowing, inevitably, that I would eventually end up opposing his Indiana-Jones-derived pseudo-Nazis and witnessing his death. I’d have settled for the temporary resolution that RDR allows with Herbert Moon, but WoW is multiplayer and Blizzard has decided that the players aren’t allowed to do anything that inconveniences, confuses or complicates the play of other players.

I don’t know that this is Blizzard’s fault, exactly: the imperfections of virtual worlds are precisely what so many of us have spent so much time discussing, worrying about, and trying to critically engage. Trolls, Barrens chat, griefers: you name it, we (players, scholars, developers) have fretted about it, complained about it, and tried to fix it.

The problem is that the fix has become the same fix CLU applied to the Grid: perfection by elimination, perfection by managerialism. What now strikes me as apparent is that this leaves virtual worlds as barren and intimidated as the Grid has become in the movie, and as bereft of the energetic imperfections of life. That way lies Zynga, eventually: the reduction of human agency in play to the repetitions of code, to binary choices, to clicks made when clicks are meant to be made.

Where the spirit of open worlds survives, it survives either because the worlds are open but the hell of other players has been banished and the game stays safely single-player or minimally multiplayer or because the world has surrendered to a Hobbesian state of nature, to a kind of 4chan zeitgeist.

I can’t help but wonder, as Flynn does, whether there’s some slender remnant possibility that is neither of these.

Mimesis and Interactivity https://blogs.swarthmore.edu/burke/blog/2011/01/06/mimesis-and-interactivity/ https://blogs.swarthmore.edu/burke/blog/2011/01/06/mimesis-and-interactivity/#comments Thu, 06 Jan 2011 22:44:12 +0000 http://weblogs.swarthmore.edu/burke/?p=1417 Continue reading ]]> Here comes a bunch of blogging! Fasten your seat belts.

================

So yes, we got a Kinect at our house. I am the very model of the modern gamer tech geek. As an incremental change to the wand-driven interface design of the Wii and PS3, I admire it. I’m far more fascinated by the really imaginative hacking of the powerful capabilities of the device, and the unintended ends to which they may lead. I confess I was also a bit disappointed that the interface didn’t function like a combination of “Minority Report” and the Bat-Computer to the extent that I’d secretly hoped it might.

What frustrates me most about the Kinect, however, is not the device itself but the common misapprehension of some middlebrow game and digital media critics, most prominently Seth Schiesel of the New York Times, that the Kinect is the future of a naturalistic, real-world mode of interacting with digital appliances and media. Schiesel states the hope succinctly: that the banishment of game controllers, iPod dials, keyboards and other control devices in favor of intuitive motions of physical bodies and natural language commands is the end of a geek-favoring barrier to the consumption of digital media and the use of digital tools and the beginning of a great democratization of the digital.

This is in the end a very geek-oriented way of imagining why some media practices seem to cohere to geeks, that design is destiny, that technology intrinsically favors or excludes users because of its particular material or conceptual nature, usually a feature or architecture that a critic or designer believes can be and should be changed.

I don’t entirely disagree with this perspective. Design matters, and it matters in ways that are not purely a mirror of sociology or culture. This is even true of the Kinect or Wii or Sony Move control systems in particular. Schiesel and others are perfectly correct to say that kicking a virtual soccer ball or doing a virtual exercise routine with a motion-capture system is intuitive in a way that using a multi-button controller is not, and that this intuitive design permits many people to play some digital games when they would otherwise think that the effort of learning a control scheme doesn’t justify the reward of playing the game.

What bugs me about the middlebrow celebration of the downfall of the multibutton controller and its kindred devices (keyboards, etc.) is the naive understanding of mimesis buried inside that enthusiasm. The driving faith here is that representation and lived experience should have a 1:1 correspondence in order to rid ourselves of the work and difficulty that comes from a slippage between the two. There’s at least a kissing-cousin resemblance between this view and older positivist ideas, lingering on in some scientific and social-scientific circles, that we should tinker ceaselessly with language until all ambiguity is banished from it and it thus can be used for the efficient description of the real world.

Let’s say that Microsoft continues to hammer the bugs and quirks out of the Kinect, making its recognition of both language and motion closer and closer to how we hear and interpret speech and action with our own perceptual systems. Let’s even pretend that there won’t be a more and more obvious uncanny valley of some kind as it does so. As the system becomes more and more mimetic, at least in theory, will that truly rid us of complex control schemes that only a geek could love?

Of course not, at least not if digital games work with the unreal, the imaginary, the impossible. What an odd thing that anyone should wish for games to become more restrictively mimetic to “reality” at a moment when digital technologies are otherwise opening up representational possibilities in film and television.

Stick for a moment even to sports games. A programmer could make a better and better Kinect-controlled soccer game, but if that is only going to involve those actual physical routines we use in a real game of soccer (which are themselves not something that human beings are born knowing, and are in some cases anything but intuitive: a game where you can’t use your hands? Not exactly a natural idea for a primate with opposable thumbs), two problems will quickly arise. First, if the action I see on the screen is to be synchronized with the action I perform in real-world space, the action can in general be no faster or slower than my real physical motion. Maybe you’re different than me, but I don’t play soccer at the speed of a Ronaldo or Beckham. So nothing in the digital game can appear to be enhanced from the world unless everything is enhanced or exaggerated to the same degree, and every computer-controlled player or physical action has to be as slow and boring to watch as I am in real life. Second, I can’t do anything that doesn’t involve a match between an on-screen avatar’s motions and my motions. The avatar can be first-person or third-person, but I can’t do something like control multiple avatars, or control markedly non-human objects or creatures unless I learn to do something very imaginative, abstract or counter-intuitive with my body in real-world physical space. I can be a tiger in a Kinect game, but that either has to involve translating my normal bipedal ape motions into the motions of a four-legged feline or it has to involve my mimicking the motions of a four-legged feline.

From there, it’s a pretty short step to the Kinect version of having to memorize a series of finishing moves. It’s not as if this is something that digital media force upon our normally naturalistic, intuitive bodies. A boss fight in World of Warcraft has always seemed to me to have a very strong analogy to choreography, and I can easily see a Kinect-style future for a game of that kind where getting the right sequence of heals on the tank would look more like T’ai Chi than keyboard typing. But all of that will involve something as intricate and complex as contemporary controller interfaces (or real-world multiperson dance recitals). Without that slippage-filled interfacing complexity, I won’t be able to be a Jedi in a Kinect game: a game could interpret my raised hand as a Force choke, a push as a Force push (much as it looks in films and cartoons), but I can’t tell my avatar to do an eight-foot-tall Jedi backflip without a gesture which is very fundamentally not an eight-foot-tall backflip.
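
For what that slippage-filled mapping might look like in the most schematic terms, here is a tiny sketch of a gesture-to-action binding table. The gesture names and the actions are invented for illustration; this has nothing to do with the actual Kinect SDK or any shipping game.

```python
# A non-mimetic control scheme in miniature: recognized gestures are bound to
# avatar actions that deliberately exceed what the player's body actually did.
# Purely hypothetical names, for illustration only.
GESTURE_BINDINGS = {
    "raised_open_hand": "force_choke",
    "two_hand_push": "force_push",
    "small_hop": "eight_foot_backflip",  # the slippage: a hop becomes a backflip
    "fist_wave": "finishing_move_combo",
}


def interpret(gesture: str) -> str:
    """Translate a recognized gesture into an avatar action, falling back to
    literal mimicry when no binding exists."""
    return GESTURE_BINDINGS.get(gesture, f"mimic:{gesture}")


print(interpret("small_hop"))     # -> "eight_foot_backflip"
print(interpret("head_scratch"))  # -> "mimic:head_scratch" (1:1 mimesis as the default)
```

The interesting design work is exactly in that table: the bindings have to be learned, which is the point of the paragraph above.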

We can’t be freed of the work of representation, the ambiguity of language. Why should we want to be? That is like imagining a freedom from life itself. It will be all to the good if the Kinect makes a game designer deep in his or her cubicled warrens wonder if the best way to connect a player’s actions with the attack of a fantasy warrior in an imaginary world is X O O X left trigger long-hold-on-X as opposed to the player making a fist in the air and waving it around. Anything that unsettles byzantine practices of culture by reminding us of their contingency is good, because that’s what catalyzes the creative discovery of the novel and unfamiliar. That creativity will be stillborn if it has to satisfy the expectation of the Schiesels of the world that they will never again have to learn something unfamiliar in order to control the unfolding of the imaginary.
