Digital Humanities – Easily Distracted
https://blogs.swarthmore.edu/burke
Culture, Politics, Academia and Other Shiny Objects

Apples for the Teacher, Teacher is an Apple
Sat, 09 May 2015 12:35:31 +0000
https://blogs.swarthmore.edu/burke/blog/2015/05/09/apples-for-the-teacher-teacher-is-an-apple/

Why does AltSchool, as described in this article, as well as similar kinds of tech-industry attempts to “disrupt” education, bug me so much? I’d like to be more welcoming and enthusiastic. It’s just that I don’t think there’s enough experimentation and innovation in these projects, rather than there being too much.

The problem here is that the tech folks continue to think (or at least pretend) that algorithmic culture is delivering more than it actually is, even in the domains where it has already succeeded. What tech has really delivered is mostly just the removal of transactional middlemen (while of course adding new transactional middlemen: in a truly frictionless world, the network Uber has established wouldn’t need Uber, and we’d all just be monetizing our daily drives on an individual-to-individual basis).

Algorithmic culture isn’t semantically aware yet. When it seems to be, it’s largely a kind of sleight-of-hand: either a leveraging and relabelling of human attention, or a computational brute-forcing of delicate tasks that our existing bodies and minds handle easily, the equivalent of trying to use a sledgehammer to open a door. Sure, it works, but you’re not using that door again, and by the way, try the doorknob with your hand next time.

I’m absolutely in agreement that children should be educated for the world they live in, developing skills that matter. I’m also in agreement that it’s a good time for radical experiments in education, many of them leveraging information technology in new ways. But the problem is that the tech industry has sold itself on the idea that what it primarily does is remove the need for labor costs in labor-intensive industries, which just isn’t true for the most part. It’s only true for jobs that were (or still are) rote and routinized, or that were deliberate inefficiencies created by middlemen. Or on the idea that tech will solve problems that are intrinsic to the capabilities of a human being in a human body.

So at the point in the article where I see the promise that tech will overcome the divided attention of a humane teacher, I both laugh and shudder. I laugh because it’s the usual tech-sector attempt to pretend that inadequate existing tech will become superbly useful tech in the near-term future simply because we’ve identified a need for it to be (Steve Jobs reality distortion field engaged) and I shudder because I know what will happen when they keep trying.

The central scenario in the article is this: you build a relatively small class with a relatively well-trained, attentive, human teacher at the center of it. So far so good! But the tech, ah the tech. That’s there so that the teacher never has to experience the complicated decision paths that teachers presently experience even in somewhat small classes. Right now a teacher has to decide, sometimes many times in a day, which students will get the lion’s share of the attention; the teacher has to rob Peter to pay Paul. We can’t have that in a world where every student should get all the attention all the time! (If nothing else, that expectation is an absolutely crystallized example of how much the new tech-industry wealthy hate public goods: they do not believe that they should ever have to defer their own needs or satisfactions to someone else. The notion that sociality itself, in any society, requires deferring to the needs of others and subsuming one’s own needs, even for a moment, is foreign to them.)

So the article speculates: we’ll have facial recognition software videotaping the groups that the teacher isn’t working with, and the software will know which face to look at and how to compress four hours of experience into a thirty-minute summary to be reviewed later, and it will also know when there are really important individual moments that need to be reviewed at depth.

Here’s what will really happen: there will be four hours of tape made by an essentially dumb webcam and the teacher will be required to watch it all for no additional compensation. One teacher will suddenly not be teaching 9-5 and making do as humans must, being social as we must. That teacher will be asked to review and react to twelve or fourteen or sixteen hours of classroom experience just so the school can pretend that every pupil got exquisitely personal, semantically sensitive attention. The teacher will be sending clips and materials to every parent so that this pretense can be kept up. When the teacher crumbles under the strain, the review will be outsourced, and someone in a silicon sweat shop in Malaysia will be picking out random clips from the classroom feed to send to parents. Who probably won’t suspect, at least for a while, that the clips are effectively random or even nonsensical.

When the teacher isn’t physically present to engage a student, the software that’s supposed to attend to the individual development of every student will have as much individual, humane attention to students as Facebook has to me. That is to say, Facebook’s algorithms know what I do (how often I’m on, what I look at, what I tend to click on, when I respond) and it tries (oh, how it tries!) to give me more of what I seem to do. But if I were trying to learn through Facebook, what I need is not what I do but what I don’t! Facebook can only show me a mirror at best; a teacher has to show a student a door. On Facebook, the only way I could find a door is for other people–my small crowd of people–to show me one.

Which is probably another way that AltSchool will pretend to be more than it can be, the same way all algorithmic culture does: leveraging a world full of knowing people in order to create the Oz-like illusion that the tools and software provided by the tech middleman are what is creating the knowledge.

Our children will not be raised by wolves in the forest, but by anonymously posted questions answered on a message board by a mixture of generous savants, bored trolls and speculative pedophiles.

History 82 Fall 2014 Syllabus
Mon, 18 Aug 2014 19:38:06 +0000
https://blogs.swarthmore.edu/burke/blog/2014/08/18/history-82-fall-2014-syllabus/

Here’s the current version of the syllabus for my upcoming fall class on the history of digital media. Really excited to be teaching this.

———————

History 82
Histories of Digital Media
Fall 2014
Professor Burke

This course is an overly ambitious attempt to cover a great deal of ground, interweaving cultural histories of networks, simulations, information, computing, gaming and online communication. Students taking this course are responsible first and foremost for making their own judicious decisions about which of many strands in that weave to focus on and pursue at greater depth through a semester-long project.

The reading load for this course is heavy, but in many cases it is aimed at giving students an immersive sampler of a wide range of topics. Many of our readings are both part of the scholarship about digital culture and documents of the history of digital culture. I expect students to make a serious attempt to engage the whole of the materials assigned in a given week, but engagement in many cases should involve getting an impressionistic sense of the issues, spirit and terminology in that material, with an eye to further investigation during class discussion.

Students are encouraged to do real-time online information seeking relevant to the issues of a given class meeting during class discussion. Please do not access distracting or irrelevant material or take care of personal business unrelated to the class during a course meeting, unless you’re prepared to discuss your multitasking as a digital practice.

This course is intended to pose but not answer questions of scope and framing for students. Some of the most important that we will engage are:

*Is the history of digital culture best understood as a small and recent part of much wider histories of media, communication, mass-scale social networks, intellectual property, information management and/or simulation?

*Is the history of digital culture best understood as the accidental or unintended consequence of a modern and largely technological history of computing, information and networking?

*Is the history of digital culture best understood as a very specific cultural history that begins with the invention of the Internet and continues in the present? If so, how does the early history of digital culture shape or determine current experiences?

All students must make at least one written comment per week on the issues raised by the readings before each class session, at the latest by 9pm each Sunday. Comments may be made either on the public weblog of the class, on the class Twitter feed, or on the class Tumblr. Students must also post at least four links, images or gifs relevant to a particular class meeting to the class Tumblr by the end of the semester. (It would be best to do that periodically rather than all four on December 2nd, but it’s up to each of you.) The class weblog will have at least one question or thought posted by the professor at the beginning of each week’s work (e.g., by Tuesday 5pm) to direct or inform the reading of students.

Students will be responsible for developing a semester-long project on a particular question or problem in the history of digital culture. This project will include four preparatory assignments, each graded separately from the final project:

By October 17, a one-page personal meditation on a contemporary digital practice, platform, text, or problem that explains why you find this example interesting and speculates about how or whether its history might prove interesting or informative.

By November 3, a two-page personal meditation on a single item from the course’s public “meta-list” of possible, probable and interesting topics that could sustain a project. Each student writer should describe why they find this particular item or issue of interest, and what they suspect or estimate to be some of the key questions or problems surrounding this issue. This meditation should include a plan for developing the final project. All projects should include some component of historical investigation or inquiry.

By November 17, a 2-4 page bibliographic essay about important materials, sources, or documents relevant to the project.

The final project, which should be a substantive work of analysis and interpretation, is due by December 16th.

Is Digital Culture Really Digital? A Sampler of Some Other Histories

Monday September 1
Ann Blair, Too Much to Know, Introduction
Hobart and Schiffman, Information Ages, pp. 1-8
Jon Peterson, Playing at the World, pp. 212-282
*Adrian Johns, Piracy: The Intellectual Property Wars From Gutenberg to Gates, pp. 1-82
Tom Standage, The Victorian Internet, selection

Imagining a Digital Culture in an Atomic Age

Monday September 8
Arthur C. Clarke, “The Nine Billion Names of God”, http://downlode.org/Etext/nine_billion_names_of_god.html
Ted Friedman, Electric Dreams, Chapters Two and Three

Film: Desk Set
Colossus the Forbin Project (in-class)
Star Trek, “The Ultimate Computer” (in-class)

Monday September 15
Vannevar Bush, “As We May Think”, http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/
Paul Edwards, The Closed World, Chapter 1. (Tripod ebook)
David Mindell, “Cybernetics: Knowledge Domains in Engineering Systems”, http://21stcenturywiener.org/wp-content/uploads/2013/11/Cybernetics-by-D.A.-Mindell.pdf
Fred Turner, Counterculture to Cyberculture, Chapters 1 and 2
Alex Wright, Cataloging the World: Paul Otlet and the Birth of the Information Age, selection

In the Beginning Was the Command Line: Digital Culture as Subculture

Monday September 22
*Katie Hafner, Where Wizards Stay Up Late
*Steven Levy, Hackers
Wikipedia entries on GEnie and Compuserve

Film: Tron

Monday September 29
*John Brunner, The Shockwave Rider
Ted Nelson, Dream Machines, selection
Pierre Levy, Collective Intelligence, selection
Neal Stephenson, “Mother Earth Mother Board”, Wired, http://archive.wired.com/wired/archive/4.12/ffglass_pr.html

Monday October 6
*William Gibson, Neuromancer
EFFector, Issues 0-11
Eric Raymond, “The Jargon File”, http://www.catb.org/jargon/html/index.html, Appendix B
Bruce Sterling, “The Hacker Crackdown”, Part 4, http://www.mit.edu/hacker/part4.html

Film (in-class): Sneakers
Film (in-class): War Games

FALL BREAK

Monday October 20
Consumer Guide to Usenet, http://permanent.access.gpo.gov/lps61858/www2.ed.gov/pubs/OR/ConsumerGuides/usenet.html
Julian Dibbell, “A Rape in Cyberspace”
Randal Woodland, “Queer Spaces, Modem Boys and Pagan Statues”
Laura Miller, “Women and Children First: Gender and the Settling of the Electronic Frontier”
Lisa Nakamura, “Race In/For Cyberspace”
Howard Rheingold, “A Slice of Life in My Virtual Community”
Sherry Turkle, Life on the Screen, selection

Hands-on: LambdaMOO
Hands-on: Chatbots
Hands-on: Usenet

Monday October 27

David Kushner, Masters of Doom, selection
Hands-on: Zork and Adventure

Demonstration: Ultima Online
Richard Bartle, “Hearts, Clubs, Diamonds, Spades”, http://mud.co.uk/richard/hcds.htm

Rebecca Solnit, “The Garden of Merging Paths”
Michael Wolff, Burn Rate, selection
Nina Munk, Fools Rush In, selection

Film (in-class): Ghost in the Shell
Film (in-class): The Matrix

Here Comes Everybody

Monday November 3

Claire Potter and Renee Romano, Doing Recent History, Introduction

Tim Berners-Lee, Weaving the Web, short selection
World Wide Web (journal) 1998 issues
IEEE Computing, March-April 1997
Justin Hall, links.net, https://www.youtube.com/watch?v=9zQXJqAMAsM&list=PL7FOmjMP03B5v3pJGUfC6unDS_FVmbNTb
Clay Shirky, “Power Laws, Weblogs and Inequality”
Last Night of the SFRT, http://www.dm.net/~centaur/lastsfrt.txt
Joshua Quittner, “Billions Registered”, http://archive.wired.com/wired/archive/2.10/mcdonalds_pr.html
A. Galey, “Reading the Book of Mozilla: Web Browsers and the Materiality of Digital Texts”, in The History of Reading Vol. 3

Monday November 10

Danah Boyd, It’s Complicated: The Social Life of Networked Teens
Bonnie Nardi, My Life as a Night-Elf Priest, Chapter 4

Hands-on: Twitter
Hands-on: Facebook
Meet-up in World of Warcraft (or other FTP virtual world)

Michael Wesch, “The Machine Is Us/Ing Us”, https://www.youtube.com/watch?v=NLlGopyXT_g
Ben Folds, “Ode to Merton/Chatroulette Live”, https://www.youtube.com/watch?v=0bBkuFqKsd0

Monday November 17

Eli Pariser, The Filter Bubble, selection
Steven Levy, In the Plex, selection
John Battelle, The Search, selection

Ethan Zuckerman, Rewire, Chapter 4
Linda Herrera, Revolution in the Era of Social Media: Egyptian Popular Insurrection and the Internet, selection

Monday November 24

Clay Shirky, Here Comes Everybody

Yochai Benkler, The Wealth of Networks, selection
N. Katherine Hayles, How We Think, selection
Mat Honan, “I Liked Everything I Saw on Facebook For Two Days”, http://www.wired.com/2014/08/i-liked-everything-i-saw-on-facebook-for-two-days-heres-what-it-did-to-me

Hands-on: Wikipedia
Hands-on: 500px

Monday December 1

Gabriella Coleman, Coding Freedom, selection
Gabriella Coleman, Hacker Hoaxer Whistleblower Spy, selection
Andrew Russell, Open Standards and the Digital Age, Chapter 8

Adrian Johns, Piracy, pp. 401-518

Hands-on: Wikileaks

Film: The Internet’s Own Boy

Monday December 8

Evgeny Morozov, To Save Everything, Click Here
Siva Vaidhyanathan, The Googlization of Everything, selection
Jaron Lanier, Who Owns the Future?, selection

The Listicle as Course Design
Mon, 11 Aug 2014 18:51:58 +0000
https://blogs.swarthmore.edu/burke/blog/2014/08/11/the-listicle-as-course-design/

I’ve been convinced for a while that one of the best defenses of small classes and face-to-face pedagogy within a liberal arts education would be to make the process of that kind of teaching and coursework more visible to anyone who would like to witness it.

Lots of faculty have experimented with publishing or circulating the work produced by class members, and many have also shared syllabi, notes and other material prepared by the professor. Offering the same kind of detailed look at the day-to-day teaching of a course isn’t very common and that’s because it’s very hard to do. You can’t just videotape each class session: being filmed would have a negative impact on most students in a small 8-15 person course, and video doesn’t offer a good feel for being there anyway. It’s not a compressed experience and so it doesn’t translate well to a compressed medium.

I have been trying to think about ways to leverage participation by crowds to enliven or enrich the classroom experience of a small group of students meeting face-to-face and thus also give observers a stake in the week-by-week work of the course that goes beyond the passive consumption of final products or syllabi.

In that spirit, here’s an idea I’m messing around with for a future course. Basically, it’s the unholy combination of a Buzzfeed listicle and the hard, sustained work of a semester-long course. The goal here would be to smoothly intertwine an outside “audience” and an inside group of students and have each inform the other. Outsiders still wouldn’t be watching the actual discussions voyeuristically, but I imagine that they might well take a week-to-week interest in what the class members decided and in the rationale laid out in their notes.

——————–

History 90: The Best Works of History

Students in this course will be working together over the course of the semester to critically appraise and select the best written and filmed works that analyze, represent or recount the past. This will take place within a bracket tournament structure of the kind best known for its use in the NCAA’s “March Madness”.

The initial seeding and selection of works to be read by class members will be open to public observers as well as enrolled members of the class. The professor will use polls and other means to allow outside participants to help shape the brackets. One side of the bracket will be works by scholars employed by academic institutions; the other side will be works by independent scholars, writers, and film-makers who do not work in academia.

The first four weeks of the class will be spent reading and discussing the nature of excellence in historical research and representation: not just what “the historian’s craft” entails, but even whether it is possible or wise to build hierarchies that rely on concepts of quality or distinctiveness. Class members will decide through discussion what they think are some of the attributes of excellent analytic or representational work focused on the past. Are histories best when they mobilize struggles in the present, when they reveal the construction of structures that still shape injustice or inequality? When they document forms of progress or achievement? When they teach lessons about common or universal challenges to human life? When they amuse, enlighten or surprise? When they are creatively and rhetorically distinctive? When they are thoroughly and exhaustively researched?

At the end of this introductory period, students will craft a statement that explains the class’ shared criteria, and this statement will be published to a course weblog, where observers can comment on it. Students will then be divided into two groups for each side of the bracket. Each group will read or view several works each week on their side of the overall bracket. During class time, the two groups will meet to discuss their views about which work in each small bracket should go forward in the competition and why, taking notes which will eventually be published in some form to the course weblog. Students will also have to write a number of position papers that critically appraise one of the books or films in the coming week and that examine some of the historiography or critical literature surrounding that work.

The final class meeting will bring the two groups together as they attempt to decide which work should win the overall title. In preparation, all students will write an essay discussing the relationship between scholarly history written within the academy and the production of historical knowledge and representation outside of it.

Fighting Words
Tue, 08 Jul 2014 18:31:29 +0000
https://blogs.swarthmore.edu/burke/blog/2014/07/08/fighting-words/

Days pass, and issues go by, and increasingly by the time I’ve thought something through for myself, the online conversation, if that’s the right word for it, has moved on.

One exchange that keeps sticking with me is about the MLA Task Force on Doctoral Study in Modern Language and Literature’s recent report and a number of strong critical responses made to the report.

One of the major themes of the criticisms involves the labor market in academia generally and in the MLA’s disciplines specifically. Among other things, this particular point seems to have inspired some of the critics to run for the MLA executive with the aim of shaking up the organization and galvanizing its opposition to the casualization of academic labor. We need all the opposition we can get on that score, though I suspect that should the dissidents win, they are going to discover that the most militant MLA imaginable is nevertheless not in a position to make a strong impact in that overall struggle.

I’m more concerned with the response of a group of humanities scholars published at Inside Higher Education. To the extent that this response addresses casualization and the academic labor market, I think it unhelpfully mingles that issue with a quite different argument about disciplinarity and the place of research within the humanities. Perhaps this mingling reflects some of the contradictions of adjunct activism itself, which I think has recently moved from demanding that academic institutions convert many existing adjunct positions into traditional tenure-track jobs within existing disciplines to a more comprehensive skepticism or even outright rejection of academic institutions as a whole, including scholarly hierarchies, the often-stifling mores and manners that attend on graduate school professionalization, the conventional boundaries and structures of disciplinarity, and so on. I worry about baby-and-bathwater as far as that goes, but then again, this was where my own critique of graduate school and academic culture settled a long time ago, back when I first started blogging.

But on this point, the activist adjuncts who are focused centrally on abysmal labor conditions and poor compensation in many academic institutions are right to simply ignore much of that heavily freighted terrain, since what really matters is the creation of well-compensated, fairly structured jobs for highly trained, highly capable young academics. Beyond ensuring that those jobs match the needs of students and institutions with the actually existing training that those candidates have received, it doesn’t really matter whether those jobs exist in “traditional” disciplines or in some other administrative and intellectual infrastructure entirely. For that reason, I think a lot of the activists who are focused substantially on labor conditions should be at the least indifferent, and more likely welcoming, to the Task Force’s interest in shaking the tree a little to see what other kinds of possibilities for good jobs that are a long-term part of academia’s future might look like. Maybe the good jobs of the academic future will involve different kinds of knowledge production than in the past. Or involve more teaching, less scholarship. If those yet-to-exist positions are good jobs in terms of compensation and labor conditions, then it would be a bad move to insist instead that the only thing adjuncts can really want is the positions that once were, just as they used to be.

They should also not welcome the degree to which the IHE critics conflate the critique of casualization with the defense of what they describe as the core or essential character of disciplinary scholarship.

The critics of the Task Force report say that the report misses an opportunity to “defend the value of the scholarly practices, individual and collective, of its members”. The critics are not, they say, opposed in principle to “innovation, expansion, diversification and transformation”, but that these words are “buzzwords” that “devalue academic labor” and marginalize humanities expertise.

Flexibility, adaptability, and evolution are later said to be words necessarily “borrowed” from business administration (here linking to Jill Lepore’s excellent critique of Clayton Christensen).

For scholars concerned with the protection of humanistic expertise, this does not seem to me to be a particularly adroit reading of a careful 40-page document and its particular uses of words like innovation, flexibility, or evolution. What gets discounted in this response is the possibility that there are any scholars inside of the humanities, inside of the MLA’s membership, who might use such words with authentic intent, for whom such words might be expressive of their own aspirations for expert practice and scholarly work. That there might be intellectual arguments (and perhaps even an intellectual politics) for new modes of collaboration, new forms of expression and dissemination, new methods for working with texts and textuality, new structures for curricula.

If these critics are not “opposed in principle” to innovation or flexibility, it would be hard to find where there is space in their view for legitimate arguments about changes in either the content or organization of scholarly work in the humanities. They assert baldly as common sense propositions that are anything but: for example, that interdisciplinary scholarship requires mastering multiple disciplines (and hence, that interdisciplinary scholarship should remain off-limits to graduate students, who do not have the time for such a thing).

If we’re going to talk about words and their associations, perhaps it’s worth some attention to the word “capitulation”. Flexibility and adaptability, well, they’re really rather adaptable. They mean different things in context. Capitulation, on the other hand, is a pretty rigid sort of word. It means surrendering in a conflict or a war. If you see yourself as party to a conflict and you do not believe that your allies or compatriots should surrender, then if they try to, labelling their actions as capitulation is a short hop away from labelling the people capitulating as traitors.

If I were going to defend traditional disciplinarity, one of the things I’d say on its behalf is that it is a bit like home in the sense of “the place where, when you have to go there, they have to take you in”. And I’d say that in that kind of place, using words that dance around the edge of accusing people of treason, of selling-out, is a lousy way to call for properly valuing the disciplinary cultures of the humanities as they are, have been and might yet be.

The critics of the MLA Task Force say that the Task Force and all faculty need to engage in public advocacy on behalf of the humanities. But as is often the case with humanists, it’s all tell and no show. It’s not at all clear to me what you do as an advocate for the humanities if and when you’re up against the various forms of public hostility or skepticism that the Task Force’s report describes very well, if you are prohibited from acknowledging the content of that skepticism or prohibited from attempting to persuasively engage it on the grounds that this kind of engagement is “capitulation”. The critics suggest instead “speaking about these issues in classes” (which links to a good essay on how to be allies to adjunct faculty). In fact, step by step, all the critics have to offer is strong advocacy on labor practices and casualization. That is all a good idea, but it doesn’t cover at all the kinds of particular pressures being faced by the humanities, some of which aren’t confined to or expressed purely through adjunctification, even though those pressures are leading to the net elimination of jobs (of any kind) in many institutions. Indeed, even in the narrower domain of labor activism, it’s not at all clear to me that rallying against “innovation” or “adaptability” is a particularly adroit strategic move for clawing back tenure lines in humanities departments, nor is it clear to me that adjunct activists should be grateful for this line of critical attack on the MLA Task Force’s analysis.

Public advocacy means more than just the kind of institutional in-fighting that the tenurati find comfortable and familiar. Undercutting a dean or scolding a colleague who has had the audacity to fiddle around with some new-fangled innovative adaptability thing is a long way away from winning battles with state legislators, anxious families, pragmatically career-minded students, federal bureaucrats, mainstream pundits, Silicon Valley executives or any other constituency of note in this struggle. If the critics of the MLA Task Force think that you can just choose the publics–or the battlegrounds–involved in determining the future of the humanities, then that would be a sign that they could maybe stand to take another look at words like flexible and adaptable. It’s not hard to win a battle if you always pick the fights you know you can win, whether or not they consequentially affect the outcomes of the larger struggles around you.

Yesterday, All Our MOOC Troubles Seemed So Far Away
Thu, 19 Dec 2013 21:40:18 +0000
https://blogs.swarthmore.edu/burke/blog/2013/12/19/yesterday-all-our-mooc-troubles-seemed-so-far-away/

Everybody remember the expectation that a smart, professorial President would hire an equally smart, skilled staff who would prove that a well-run government can be quickly responsive to the needs of the society, efficient in the execution of its duties, and not just services to the highest bidder?

Yeah, me neither. The current Administration seems determined to help us forget. Today the President’s Council of Advisors on Science and Technology issued a report on massive open online courses (MOOCs) that not only reads as if it were written a year ago, but manages even within the frame of a year ago to take the most cravenly deferential and crudely instrumental posture available in that moment. It’s a love letter to the venture capitalists scrambling to gut open higher education, written at a time when the most thoughtful entrepreneurs and executives involved in organizing MOOCs have all but conceded that whatever their value might be, they’re not going to solve the problem of labor-intensiveness in education, nor are they going to serve as a primary vehicle for achieving equity of access to higher education for potential pupils.

There was a good deal of I-told-you-so-ing after Sebastian Thrun announced that Udacity would move towards offering MOOCs for something other than basic higher education, in part because Thrun had concluded that they simply couldn’t substitute for existing models of teaching. I don’t think anyone should have mocked Thrun for saying so, even though many of us did say that this is what was going to happen. Not least because it has happened before, at each major milestone in the development of mass communication in modern societies: the new medium was eagerly held up as a chance to affordably massify education and extend its transformative potential, only to fall short. Largely because no matter what mass medium we’re talking about, this kind of education is essentially an assisted form of autodidacticism. It has worked and still works largely only for those who already know what they want to know, and who already know how to learn.

There are some people who deeply believe that new technological infrastructures can in and of themselves solve problems of cost, equity and efficacy, in higher education or anything else. But at least some of the people who were preaching the MOOC gospel a year ago, where the President’s Council just went in their time machine, did so hoping to draw a Golden Ticket in the “I Made an IPO and Broke Something Important” sweepstakes. Most of those folks seem to be moving on now. In the Silicon Valley game, you don’t have to make money, but you do need to show that you can displace and disrupt an existing service with some speed. That’s not going to happen in this case.

One of the reasons that so many faculty who are otherwise very friendly to digitally-mediated innovation and change were so annoyed with MOOCs is that the intense push by companies and investors to draw attention to MOOCs drew energy and resources away from existing projects that have been using information technology to enhance and enrich traditional modes of teaching, often called “blended learning”. Now that the craze for the MOOC is starting to fade, maybe the blended learning conversation can gain the public attention it deserves once again.

But also, maybe we can hold on to what we’ve learned about the genuinely interesting possibilities of MOOCs. So they’re not going to magically solve the economic problems of education or public goods, or for the more anti-intellectual backers, they’re not going to create a world where algorithms will replace truculent faculty. If we get lucky, they might put some of the sleazier for-profit online educators out of business. However, existing MOOCs are still a potentially terrific implementation of three possible objectives, all of which might even have market value.

1) MOOCs are a model form of new digital publication. If you read this blog, you’ve seen me say this before (and seen me say before, somewhat crossly, that I’ve been saying it for years.) But this is no longer just potential: it’s reality. Does anyone remember how many people bought “For Dummies” books? Or in recent years, how many institutions are paying for a lynda.com account? The MOOC is a BOOC: it’s an enhanced, interactive instructional guide where other readers and the authors are there to help you learn. An instructional book has never been confused with a face-to-face course in a university, but it’s also a concept that’s been in existence longer than the university itself.

2) MOOCs are learning communities. Again, this is a potential that’s been around since the WELL, but existing MOOCs are a good demonstration of mature technologies and practices that help dedicated groups learn and explore together at various levels of commitment and interest. They can’t teach calculus to a single student who is underprepared to learn calculus, but they can help a very big group of people who have diverse knowledge and a common interest in the future of higher education learn and discuss that topic together.

3) The mass response to MOOCs is evidentiary proof of the transformative potential of traditional higher education. They’ve been misused as vehicles for transforming higher education, but what they really document is that people who’ve had higher education want to have more learning experiences like that for the rest of their lives. It’s why I always feel so sad when I talk to a Swarthmore alum who just wants to talk about books and ideas and research again and who starts to think that this alone is a reason to go on to graduate school. It’s not a good reason to do that because that’s not the desire graduate school typically serves. But look at who takes MOOCs: it’s a close overlap with people who take community college courses for enrichment, with people who join book clubs and go to lectures, with people who just want to know more and talk with people who also have that aspiration. What have MOOCs shown so far? That there are a lot of people like that. They’re busy people, so they often drop out. But I bet those people are a base of support for educational institutions as a public good, ready to believe in the potentialities of education for a democratic society. MOOCs might not entirely scratch the itch for lifelong learning that many people who’ve had a taste of education develop, but they’re one way to respond to that desire, and more potently, an affirmation that the desire exists.

If the White House wants to pay attention to something important, they might start there rather than embracing the hope that market forces will automagically deploy the MOOC to finally relieve the technocrats of the burden of maintaining and extending public goods.

]]>
https://blogs.swarthmore.edu/burke/blog/2013/12/19/yesterday-all-our-mooc-troubles-seemed-so-far-away/feed/ 5
Mind It Is Blown https://blogs.swarthmore.edu/burke/blog/2013/09/03/mind-it-is-blown/ Tue, 03 Sep 2013 15:13:07 +0000 https://blogs.swarthmore.edu/burke/?p=2430 Continue reading ]]> Me reading suggestions from Twitter about how to do some data visualizations of co-citations out of my Honors seminar on colonial Africa.

Some days the difference between a dilettante who just likes to read about cool stuff (me) and people who actually do cool stuff (them) is rather stark.

A list of the helpful, stimulating suggestions:

http://jgoodwin.net/ant-cites/cites-slider.html
http://scatter.wordpress.com/2013/07/17/clusters-of-sociology/
http://metalab.harvard.edu/2012/07/paper-machines/
http://www.scottbot.net/HIAL/?p=38272
http://nodexl.codeplex.com/

For starters.

I think the one difference in my concept from some of these examples is that I want to start with essentially a semantically-aware tagging process, e.g., I want the students and me to tag the citations that they see as being especially crucial or formative to the historiographical thinking of the scholars we’re reading based on what they’ve read, and then to do some exploration of how close our semantic intuitions are to actual networks of citation. But I think I can see how we could do this using some of these models.
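The comparison I have in mind could be sketched roughly like this, with invented titles and tags standing in for the seminar's actual readings (none of the data below is real):

```python
from collections import Counter

# Hypothetical data: which works each assigned reading cites.
# All identifiers here are invented placeholders, not a real syllabus.
citations = {
    "reading_a": ["vansina_oral", "cooper_decol", "ranger_invention"],
    "reading_b": ["vansina_oral", "ranger_invention", "mamdani_citizen"],
    "reading_c": ["ranger_invention", "mamdani_citizen"],
}

# Citations the seminar tagged as "formative" after discussion.
tagged_formative = {"vansina_oral", "ranger_invention"}

# Count how often each work is actually cited across the readings.
cite_counts = Counter(c for refs in citations.values() for c in refs)

# Take the most-cited works, as many of them as we tagged.
top_cited = {w for w, _ in cite_counts.most_common(len(tagged_formative))}

# Jaccard overlap between our semantic intuitions and the raw network.
overlap = len(tagged_formative & top_cited) / len(tagged_formative | top_cited)
print(f"most cited: {sorted(top_cited)}")
print(f"intuition/network overlap: {overlap:.2f}")
```

The point of the exercise would be less the number itself than the disagreements it surfaces: works we intuited as formative that the citation counts ignore, and vice versa.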

]]>
Historians Don’t Have to Live in the Past https://blogs.swarthmore.edu/burke/blog/2013/07/24/historians-dont-have-to-live-in-the-past/ https://blogs.swarthmore.edu/burke/blog/2013/07/24/historians-dont-have-to-live-in-the-past/#comments Wed, 24 Jul 2013 14:42:13 +0000 https://blogs.swarthmore.edu/burke/?p=2397 Continue reading ]]> In what way is the American Historical Association’s notion of a six-year embargo on digital open-access distribution of dissertations even remotely sustainable in the current publishing and media environment surrounding academia?

On one side, you have disciplinary associations like the Modern Language Association and the American Anthropological Association, which have somewhat similar traditions of tying assessment and promotion to the publication of a monograph, and which are to varying degrees embracing open-access publishing and digital dissemination and trying to work out new practices and standards.

On the other side, you have disciplines that have no particular obsession with the idea of the published monograph as the standard.

Whether or not the published monograph is or ever was a good standard for judging the worth of a historian’s scholarship, how long does the AHA think that historians can stand alone in academia as a special case? “Oh, we don’t do open-access or digital distribution until we’ve got a real book in hand and are fully tenured, those few of us remaining who are in tenure-track positions, because that’s a fundamental part of history’s particular disciplinary structure.”

Um, why?

“Because history dissertations take a long time to write and thus need protection?” Right, unlike anthropology or literary criticism or other fields in the humanities. FAIL.

“Because many publishers won’t publish an open-access dissertation?” Right, so this assumes: a) the dissertation will be so little revised that the two texts would be essentially identical and b) but the magic fairy-dust of a book makes it the real benchmark of a properly tenurable person. E.g., “Oh noes, we couldn’t decide if someone’s scholarship was tenurable from a dissertation that is nearly identical to a book”. Here’s where the real fail comes in, because it reveals how much the disciplinary association is accepting the clotted, antiquated attachment of a small handful of tenured historians to their established practices even when those practices have had any semblance of reason or accommodation to reality stripped from them.

Let’s suppose that university presses do stop publishing essentially unrevised dissertations. I can’t blame them: they need to publish manuscripts that have some hope of course adoption and wider readership, sold at a reasonable price (call this option #A), or they need to price up library editions high enough that the remaining handful of “buy ’em all” libraries will make up for the loss of libraries that buy in a more discretionary fashion (option #B).

You can understand why the publishers who are largely following option #B would not want to publish monographs that were marginally revised versions of open-access dissertations, because even the richest libraries might well decide that buying a $150 physical copy is unnecessary. But by the same token, again, why should a tenure and promotion process value the physical copy over the digital one if they’re the same? Because the physical copy has been peer-reviewed? Meaning, if two scholars who do not work for the same institution as the candidate have reviewed the manuscript and deemed it publishable, that alone makes a candidate tenurable? Why not just send out the URL of a digital copy to three or four reviewers for the tenure and promotions process to get the same result? Or rely more heavily upon the careful, sophisticated reading of the manuscript (in whatever form) by the faculty of the tenuring department and institution?

What the AHA’s embargo embarrassingly underscores is the extent to which many tenured faculty have long since outsourced the critical evaluation of their junior colleagues’ scholarship to those two or three anonymous peer reviewers of a manuscript, essentially creating small closed-shop pools of specialists who authenticated each other with little risk of interruption or intervention from specialists in other fields within history.

Thirty years ago, when university presses would publish most dissertations, you could plausibly argue that the dissertation which persistently failed review and was not published by anyone had some sort of issue. Today you can’t assume the same. Maybe we never should have given over the work of sensitive, careful engagement with the entire range of work in the discipline as embodied in our own departments, but whether that was ever a good idea, it isn’t now and can’t be kept going regardless.

Suppose we’re talking about option #A instead, the publishers who are being more selective and only doing a print run of manuscripts with potential for course adoptions or wider readership. Suppose you use that as the gold standard for tenurability?

That’s not the way that graduate students are being trained, not the way that their dissertations are being shaped, advised and evaluated. So you would be expecting, with no real guidance and few sources of mentorship, that junior faculty would have the clock ticking on their first day of work towards adapting their dissertations towards wider readability and usefulness. That’s a dramatic migration of the goalposts in an already sadistic process. You could of course change the way that dissertations are advised and evaluated and therefore change the basic nature of disciplinary scholarship, which might be a good thing in many ways.

But this would also accelerate the gap between the elite institutions and every other university and college in even more dramatic fashion: writing scholarship that had market value would qualify you for an elite tenure-track position, writing scholarship that made an important if highly specialized contribution to knowledge in a particular field of historical study would qualify you for more casualized positions or tenure-track employment in underfunded institutions that would in every other respect be unable and unwilling to value highly specialized scholarship. (E.g., have libraries that could not acquire such materials, curricula where courses based on more specialized fields and questions could not be offered, and have little ability to train graduate students in fields requiring research skills necessary for such inquiry.) In terms of the resources and needs of institutions of higher learning, it arguably ought to be the reverse: the richest research universities should be the institutions which most strongly support and privilege the most specialized fields and therefore use tenure and promotion standards which are indifferent to whether or not a scholar’s work has been published in physical form.

Yes, it’s not easy to move individual departments, disciplines or entire institutions towards these kinds of resolutions. But it is not the job of a professional association to advocate for clumsy Rube Goldberg attempts to defend the status quo of thirty years ago. If individual faculty or whole departments want to stick their heads in the sand, let that be on them. An organization that aspires to speak for an entire discipline’s future has to do better than that. The AHA’s position should be as follows:

1) Open-access, digitally distributed dissertations and revised manuscripts should be regarded as a perfectly suitable standard by which to judge the scholarly abilities of a job candidate and a candidate for tenure in the discipline of history. A hiring or tenuring committee of historians is expected to do the work of sensitive and critical reading and assessment of such manuscripts instead of relying largely on the judgment of outside specialists. The peer assessment of outside specialists should be added to such evaluation as a normal part of the tenure and promotion process within any university or college.

2) The ability of a historian to reach wider audiences and larger markets through publication should not become the de facto criterion for hiring and tenure unless the department and institution in question comprehensively embraces an expectation that all its faculty in all its disciplines should move in the course of their careers towards more public, generalized and accessible modes of producing and disseminating knowledge. If so, that institution should also adopt a far wider and more imaginative vision of what constitutes engagement and accessibility than simply the physical publication of a manuscript.

]]>
https://blogs.swarthmore.edu/burke/blog/2013/07/24/historians-dont-have-to-live-in-the-past/feed/ 19
The Humane Digital https://blogs.swarthmore.edu/burke/blog/2013/05/03/the-humane-digital/ https://blogs.swarthmore.edu/burke/blog/2013/05/03/the-humane-digital/#comments Fri, 03 May 2013 17:23:21 +0000 https://blogs.swarthmore.edu/burke/?p=2318 Continue reading ]]> As a way of tackling both the question “whither the humanities” and the thorny issue of defining “digital humanities” in relationship to that question, I’ll offer this: maybe one strategy is to talk about what can make intellectual work humane.

First, let’s leave aside the rhetoric of ‘crisis’. Yes, if we’re talking about the humanities in academia, there are changes that might be called a crisis: fewer majors, fewer resources, a variety of vigorous attacks on humanistic practice from inside and outside the academy. Are the subjects of the humanities (expressive culture, everyday practices, meaning and interpretation, philosophy and theory of human life, etc.) going to end? No. Will there be study and commentary upon those subjects in the near-term future? Yes. There will be a humanities, even if its location, authority and character will be much more unstable than they were in the last century. If we want to speak about and defend the future of the humanities with confidence, it is important to concede that a highly specific organizational structuring of the highly specific institution of American higher education is not synonymous with humane inquiry as a whole. Humane ways of knowing and interpreting the world have had a lively, forceful existence in other kinds of institutions and social lives in the past and could again in the future. To some extent, we should defend the importance of humane thinking without specific regard for the manner of its institutionalization, in part to make clear just how important we think it is. (E.g., that our defense is not predicated on self-interest.) Even if we think (as I do) that the academic humanities are the best show in town when it comes to thinking humanely.

I keep going back to something that Louis Menand said during his talk at Swarthmore. The problem of humanistic thought in contemporary American life is not with a lack of clarity in writing and speaking, it is not with a lack of “public intellectuals”. The problem, he said, is simply that many other influential voices in the public sphere do not agree with humanists and the kind of knowledge and interpretation they have to offer.

With what do they disagree? (And thus, who are they that disagree?) Let’s first bracket off the specifically aggrieved kind of highly politicized complaint that came out of the culture wars of the 1980s and 1990s and is still kicking around. I don’t think that’s the disagreement that matters except when it is motivated by still deeper opposition to humanistic inquiry.

What matters more is the loose agglomeration of practices, institutions and perspectives that view human experience and human subjectivity as a managerial problem, a cost burden and an intellectual disruption. I would not call such views inhumane so much as anti-humane: they do not believe that a humane approach to the problems of a technologically advanced global society is effective or fair; they believe that we need rules and instruments and systems of knowing that overrule intersubjective, experiential perspectives and slippery rhetorical and cultural ways of communicating what we know about the world.

The anti-humane is in play:

–When someone works to make an algorithm to grade essays

–When an IRB adopts inflexible rules derived from the governance of biomedical research and applies them to cultural anthropology

–When law enforcement and public culture work together to create a highly typified, abstracted profile of a psychological type prone to commit certain crimes and then attempt to surveil or control everyone falling within that parameter

–When quantitative social science pursues elaborate methodologies to isolate a single causal variable as having slightly more statistically significant weight than thousands of other variables rather than just craft a rhetorically persuasive interpretation of the importance of that factor

–When public officials build testing and evaluation systems intended to automate and massify the work of assessing the performance of employees or students

At these and many other moments across a wide scale of contemporary societies we set out to bracket off or excise the human element, to eliminate our reliance on intersubjective judgment. We are in these moments, as James Scott has put it of “high modernism”, working to make human beings legible and fixed for the sake of systems that require them to be so.

Many of these moments are well-intentioned, or rest on reliable and legitimate methodologies and technologies. As witnesses, evaluators, and interpreters, human beings are unreliable, biased, inscrutable, ambiguous, irresolvably open to interpretation. Making sense of them can often be inefficient and time-consuming, without hope of resolution, and sometimes that is legitimately intolerable.

Accepting that this is the irreducible character of the human subject (the one universal that we might permit ourselves to accept without apology) should be the defining characteristic of the humanities. The humanities should be, across a variety of disciplines and subjects, committed to humane ways of knowing.

So what does that mean? To be humane should be:

Incomplete. E.O. Wilson recently complained that the humanities offer an “incomplete” account of culture, ethics and consciousness (and kindly offered to complete the account by removing the humanities from the picture completely). What Wilson sees as a bug is in fact a feature. The humanities are and should be incomplete by design—that is, there should be no technology or methodology which we might imagine as a future possibility that would permit complete knowledge achieved via humane inquiry, nor should we ever want such a thing to begin with. A humane knowledge accepts that human beings and their works are contingent in interpretation. Meaning that much, if not absolutely anything, can be said about their meaning and character. And they are contingent in action. Meaning that knowledge about the relatively fixed or patterned dimensions of human nature and life is a very poor predictor of the future possibilities of culture, social life, and the intersubjective experience of selfhood.

Slow. As in “slow food”, artisanal. Humane insights require human processes and habits of thought, observation and interpretation, and even those processes augmented by or merged into algorithms and cybernetics should be in some sense mediated by or limited to a hand-crafted pace. At the very bottom of most of our algorithmic culture now is hand-produced content, slow-culture interpretation: the fast streams of curation and assemblage that are visible at the top level of our searching and reading and linking rest on that foundation. This is not a weakness or a limitation to be transcended through singularity, but a source of the singular strength of humane thought. We use slow thought to make and manipulate algorithmic culture: social media users understand very quickly how to ‘read’ its infrastructures but it is slow thought, gradual accumulations of experience, discrete moments of insight, that permit that speed. There is no algorithmic shortcut to making cultural life, just shortcuts that allow us to hack and reassemble and curate what has been and is made slowly.

Dedicated to illegibility. By this I do not mean “difficult writing” in the sense that has inspired so much debate within and about the humanities. By this I mean a permanent, necessary suspicion baked into our knowledge about all political and social projects that require a human subject to be firmly legible and compliant to the needs of governance in order to succeed in their operations. Often the political commitments of humanists settle down well above this foundational level, where they are perfectly fine as the choices of individual intellectuals and may derive from (but are not synonymous with) humane commitments. That is to say, our political and social projects should arise out of deeply vested humane skepticism about legibility and governability but as a general rule many humanists truncate or limit their skepticism to a particular subset of derived views.

Is this a riff on Isaiah Berlin’s liberal suspicions of the utopian? Yes, I suppose, when it’s about configuring the human subject so that it is readily understandable by systems of power and amenable to their workings. But this is also a riff on “question authority”: the point is that if power can be in many places, from a protest march to a drone strike, the humane thinker has to be a skeptic about its operations. Humane practice should always be about monkey-wrenching, always be the fly in the ointment, even (or perhaps especially) when the systems and legibility being made suit the political preferences of a humane thinker.

Playful, pleasurable, & extravagant. My colleague in a class I co-taught last semester made me feel much more comfortable with my long-felt wariness about the influence of Bourdieuian accounts of institutions and culture, and how in particular they’ve had a troubling effect on humanistic inquiry that often amounts to functionalism by another name. My colleague’s reading of Michele Lamont’s How Professors Think was to read it as calling attention to how much academics do not simply make judgments as an act of capital-d Distinction, as bagmen for a sociological habitus. Instead, she argued that it was evidence for the persistence of an attention to aesthetics, meaning and pleasure that is not tethered to the sociological (without arguing that this requires depoliticizing the humanities). That our intellectual lives not only should be humane but that they are already.

This is very much what I mean by saying that humane knowledge should be playful and even extravagant: that every humanistic work or analysis should produce an excess of perspectives, a variety of interpretations, that it should dance away from pinning culture to the social, to the functional, to the concrete. Humane work is excess: we should not apologize meekly for that or try to recuperate a sense of the dutifully instrumental things we can do, even as we ALSO insist that excess, play and pleasure are essential and generative to any humane society. That their programmatic absence is the signature diagnostic of cruelty, oppression and injustice. This is what I think Albie Sachs was getting at in 1990 when he said that with the beginnings of negotiations for the end of apartheid, South African artists and critics should now “be banned from saying culture is a weapon of the struggle”. Whatever fits the humane to a narrow instrumentality, whatever yokes it to efficiency, is ultimately anti-humane.

So what of the digital? Many defenders of the humane identify the digital as the quintessence of the anti-humane, recalling the earlier advent of computational or cliometric inquiry in the 1970s and 1980s. Should we prefer a John Henry narrative: holding on to the last gasp of the humane under the assault of the machine?

Please, please no. Digital methods, digital technologies and digital culture are already a good habitus of humane practice and the best opportunity to strengthen the human temperament in humanistic inquiry.

Again and again, algorithmic culture has confronted the inevitable need for humane understanding, often turning away both because of its costs (when the logic of such culture is to reduce costs by eliminating skilled human labor) and because of a lack of skill or expertise in humane understanding among the producers and owners of such culture. I’ve long observed, for example, that the live management teams for massively multiplayer online games frequently try to deal with the inevitable slippages and problems of human beings in digital environments by truncating the possibilities of human agency down to code, making people as much like a codeable entity as possible, engineering a reverse Turing test. And they always fail, both because they must fail but also because they don’t understand human beings very well.

This is an opportunity for humane knowledge (we can help! Give us jobs!) but also often evidence of the vigor of humane understandings and expertise, that the human subject as we understand it recurs and reinvents so insistently even in expressive and everyday environments that see a humane sensibility as an inconvenience or obstacle.

But this is not just an extension of the old, it is sometimes in a very exciting way genuinely new. “Big data” and data analytics are seen by some intellectuals as an example of opposition to the humane. But in the hands of many digital humanists or practitioners of “distant reading”, they demonstrate that the humane can become strange in very good ways. Schelling’s “segregation model” is not an explanation of segregation but a demonstration that there are interpretations and analyses that we would not think of by ourselves, a reworking without mastery. The extension and transformation of the humane self through algorithmic processing is not its extinction: approached in the right spirit, it is the magnification of the humane spirit as I’ve described it.
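For readers who haven't encountered it, Schelling's model takes only a few lines to state: agents of two types, scattered with some empty spaces, each move when too few of their neighbors resemble them, and even a mild preference produces stark segregation no individual intended. A minimal one-dimensional sketch (all parameters chosen arbitrarily for illustration, not drawn from Schelling's papers):

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

# Two agent types ("X", "O") and empty cells (None) on a line.
# An agent is unhappy when fewer than TOLERANCE of its occupied
# neighbors (within RADIUS cells either way) share its type.
TOLERANCE = 0.4
RADIUS = 2

line = ["X"] * 20 + ["O"] * 20 + [None] * 10
random.shuffle(line)

def unhappy(i):
    """True if the agent at position i has too few like-typed neighbors."""
    me = line[i]
    neighbors = [line[j]
                 for j in range(max(0, i - RADIUS), min(len(line), i + RADIUS + 1))
                 if j != i and line[j] is not None]
    if not neighbors:
        return False
    return sum(n == me for n in neighbors) / len(neighbors) < TOLERANCE

for _ in range(10_000):
    movers = [i for i, a in enumerate(line) if a is not None and unhappy(i)]
    if not movers:
        break  # everyone satisfied: the line has sorted itself
    i = random.choice(movers)
    j = random.choice([k for k, a in enumerate(line) if a is None])
    line[j], line[i] = line[i], None  # unhappy agent moves to an empty cell

print("".join(a or "." for a in line))
```

Even with the mild 40% preference here, the line typically ends in long single-type runs: exactly the kind of result that intuition and interpretation alone would not have predicted, which is the model's point.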

This is not a CP Snow “two cultures” picture, either. Being humane is not limited to the disciplines conventionally described as the humanities. Natural science that is centrally interested in phenomena described as emergent or complex adaptive systems, for example, is in many ways close to what I’ve described as humane.

We might, in fact, begin to argue that most academic disciplines need to move towards what I’ve described as humane because all of the problems and phenomena best described or managed in other approaches have already been understood and managed. The 20th Century picked all the low-hanging fruit. All the problems that could be solved by anti-humane thinking, all the solutions that could be achieved through technocratic management, are complete. What we need to know next, how we need to know it, and what we need to do falls much more into the domains where humane thinking has always excelled.

]]>
https://blogs.swarthmore.edu/burke/blog/2013/05/03/the-humane-digital/feed/ 3
Digital Learning Is Like a Snow Leopard (Real, Beautiful, Rare and Maybe To Be Outdated by a New Operating System) https://blogs.swarthmore.edu/burke/blog/2013/03/04/digital-learning-is-like-a-snow-leopard-real-beautiful-rare-and-maybe-to-be-outdated-by-a-new-operating-system/ https://blogs.swarthmore.edu/burke/blog/2013/03/04/digital-learning-is-like-a-snow-leopard-real-beautiful-rare-and-maybe-to-be-outdated-by-a-new-operating-system/#comments Mon, 04 Mar 2013 22:27:39 +0000 https://blogs.swarthmore.edu/burke/?p=2277 Continue reading ]]> Maybe it’s just because it’s my obsession of the moment, but the digital camera strikes me as the single greatest example of a new “disruptive” technology that permits a fundamentally new kind of learning experience.

However, precisely because digital photography is the best example, it also defines a limit or border: a great many other discrete skills, tasks and competencies do not have the crucial characteristics of digital photography and are therefore not nearly so open to pedagogical transformation through digitization or online communication.

What gives a digital camera a special kind of capacity for auto-didactic learning? Two things: speed and cost. As in fast and none. Add to that an unusually robust communicative infrastructure for sharing, circulating and viewing photographs, so that aspirant photographers will always have a very large, responsive community of fellow learners available to them.

Each picture taken with a digital camera is an experiment with the medium. The photographer can see in a viewfinder immediately the consequences of every press of the shutter trigger. Change the composition, the focus, and if it’s a DSLR, almost every setting, and see a new outcome. Processing software allows images, particularly those taken in RAW format, to become something radically different in a few seconds of adjustment. There is no cost to each experiment once the initial investment is made, save storage to accommodate images on-camera and in some kind of archive.

The sharing and circulation of images was one of the earliest defining practices of the Internet and is now one of the focal points of very large social media communities. Photographers who share images–and even those who just view–are now able to see exceptionally large flows of cultural work on a daily basis and to interpret the action of algorithmic practices of rating, ranking, curating, and so on–a focus of some recent writing here at this blog.

So here we have a technology whose intrinsic properties and emergent interactions with communities of use and production incidentally make it extremely easy to learn new techniques, approaches and themes. Digital photographers not only can discover through trial and error material facts about light, composition, tonality and subject but also can watch large assemblages of algorithmic culture collectively “discover” preferences and tropes, which also instruct the learning photographer not just about how to shoot but what to shoot. When you see the five hundredth image of a Southeast Asian fisherman throwing a net that is backlit by a setting or rising sun, you can come to understand both what the human eye and the algorithmic culture of the moment “like” about that trope. And maybe, as with the best learning, you can abstract that insight to other compositions and ideas. What thin or diaphanous objects catch light? What other kinds of ‘exoticized’ subjects and scenes draw attention? And so on.

This is not to say that all digital photography is auto-didactic, or that all digital photographers are equally capable of taking up these lessons. But there is an unusual density of online resources available to instruct photographers at key moments in their learning. Take, for example, David Hobby’s seven-year-old site Strobist.com, with its Lighting 101 course, which I think deserves to be regarded as one of the first fully-realized and implemented “massive open online courses”, well before Thrun and Norvig’s Stanford AI class. The site and the course have had a major impact on digital photography over those seven years, evidence of the degree to which this particular medium has a learning community that scales up to a global expanse.

And of course there are limits here: for one, the cost of equipment and the vested cultures that gently and not-so-gently push at each other around that equipment. A photographer with an iPhone camera and an open-source or cheap processing app will have a harder time learning some of the concepts and techniques that are privileged in some cultures of photographic production and consumption. For another, there are forms of technological literacy that auto-didactic approaches to digital photography assume rather than offer, and moments in learning where the presence of a regular, physically present teacher would provide more efficient, transformative or richer understanding.

Most importantly, the camera and the software and the communities cannot provide the answer to the question, “Why do I take photographs, and what kind of art or product do I want to make?” A technology and a large community of fellow learners and users do not tell you how to think about visuality, or how images speak to audiences and each other. A teacher–or an overall education–might make progress on that front, but the device and its infrastructure will not answer of their own accord.

————

The thing is, much of the rest of what people might want to learn in 2013–for fun, for enlightenment, for immediate application to their careers–does not share either of the key attributes of digital photography: speed and cost.

Take something similarly basic to the production of digital images, and even more widely necessary in the contemporary world both in private life and in work: writing. Writing is not fast and it is not, despite word processors and spell-checkers and a host of other small embedded technologies of production, nearly so intuitive to the wetware of the human brain and body. The metaphor of the eye has all sorts of misleading or deceptive applications to photography, but it is a good enough baseline for explaining why most people can take–or view–a picture more intuitively than they can write a letter or an essay.

You cannot write quickly and intuitively enough to experience near-simultaneous iterations of multiple versions of the same piece of writing. And therefore writing has a different kind of cost in labor time, even if it has some of the same frictionless cost of other digital culture. (E.g., the real cost in paper AND time of typing or handwriting multiple drafts is different than the cost of storing many digital files.) You can’t distribute writing to as many online audiences who share as close a consensus about what they’re seeking, and the attention that reading takes is in shorter supply. (Hence: TL;DR.)

So just as with photography, questions about “what is writing”, “why write” and so on don’t self-answer, but the medium itself doesn’t even have the automatic affordances of digital photography.

Now try something as inescapably material as carpentry or emergency medicine. Here the costs of the necessary materials are very high, the process of learning them through use is very slow, and the dangers of improper or incomplete practice are extraordinary and multilayered. Or try something where the learning process is inevitably and always social, interactive and/or institutional: counseling, driving, military training. Digital or virtual processes might leaven or enrich educational experience but they can’t even begin to replace the conventional approach.

This is one of several places that the enthusiasm for MOOCs is going to founder in short order: digitization is revolutionary in its automating possibilities for education only in the exception, not the norm, and those exceptions are structured by real, physical limits long before they involve entrenched assumptions. Where digitized approaches to writing or emergency medicine or carpentry compare favorably to existing educational services, that won’t be because of the technology of digitization but because, as Clay Shirky has more or less argued recently, much existing education already sucks as much as digitized education is going to suck. But this is a very different challenge, then, from “digitize it all”: it is “make it suck much less, whatever it is and however it is delivered”.

Getting to Wrong
https://blogs.swarthmore.edu/burke/blog/2013/02/19/getting-to-wrong/
Tue, 19 Feb 2013 20:57:02 +0000

About a month ago, I started writing an entry about Gawker Media as a model for the “new journalism”. When I started writing that, I mostly meant it as a compliment. I was thinking about Deadspin’s Manti Te’o exposé (by a different Timothy Burke), about Gawker’s long-running Unemployment Stories series, about some of the longer-form essays that Gawker runs as well, and about io9’s Daily Explainer and other bits of reportage they do.

My complimentary view was that at its best, this “new journalism” combines commentary, reportage and lucid interpretation in a way that’s largely unavailable in the self-important culture of what’s left of print journalism. That the best “new journalists” found across a range of sites and blogs are beginning to really define a new expressive and ethical set of norms that don’t just keep journalism alive but overcome some of the ponderous establishment status quo of late 20th Century print journalism. That some of the best of the new wave of work is in every sense better than the best of print and television journalism: more readable, more visceral, more diverse, and covering a vastly wider range of places, people and experiences.

But the last few days have reminded me of what the weaknesses of the new digital journalism are. Namely, that you can get a story really wrong and never feel any need to apologize for it. In fact, you can just go ahead and keep at it. Hamilton Nolan–whose Unemployment Stories are a really important document of America’s new economic realities–can also recirculate other people’s news with just enough twisting and stripping of context to make the story misleading or just plain wrong. And then never say anything when called on it, just hide behind the interface. (Which is sort of what I thought Nolan’s problem with Bill Keller was.) For example, suggesting that an extracurricular program at Duke University is a full department or major taken for credit. This is like finding out that there’s a group of students who meet one night every month in a campus cafe to listen to each other’s poetry and then ranting that they’re all paying tuition to get a poetry degree. A day later, Nolan gets one small aspect of an already slanted Wall Street Journal article about the new White House data on higher education right, but misses everything else in the WSJ article and in the larger story of the new White House initiative.

I get it, Gawker’s a “gossip site”. This is the usual defense of the shortcomings of digital journalism: that it’s not meant to be serious, that it has no ambitions, that it’s just ephemeral, that you have to privilege getting people’s attention over getting it right. That all you need to do is confirm existing stereotypes and give your readers information that comforts their prejudices. Basically, Fox News, only with a more generous view of anal sex and a less positive view of gun ownership. But in that case, you wonder why Gawker Media, or Slate, or Salon, or the Atlantic bloggers or anyone else out there bothers as much as they do with the writing that really does strive to be high-value reportage or original commentary. Or for that matter why any of these digital writers have the gall to complain when the mainstream media gets its facts wrong or grinds its axes.

It’s important not to turn this complaint into nostalgia for print journalism, either. Just passing along a tidbit of information created by someone else, sometimes with a bit of spurious twisting and recrafting to make it sound original while also grinding some axe, is a very well established part of traditional media practice. Most entertainment journalism for the last fifty years has consisted of the barely-artful repackaging of press releases from agents and studios. Much political journalism in the same time period has been built around reporters acting as the mouthpiece of a “confidential source”, generally repeating verbatim whatever ‘news’ they want to see on the front page. Digital journalism is different because of the volume of its repackaged information and because of the shitty wages it pays to the people who write it, not because in ye olden days there were Great Ethical Men who walked the earth and today it’s just a bunch of scumbags.

But the thing is: the pacing and the interface and the technology actually allow for correction, for updates, for getting the story right in a way that was never possible before. The New York Times made a fetish of its corrections in part because its production cycle made them all a big deal, and in part because you couldn’t actually change the original item. Today? If you get it wrong (or miss something interesting) the first time and your commenters call you on it, it is an easy thing to say as much. Of course, do that enough, and people might begin to wonder: why not get it right the first time?

And once you ask that, well, yes indeed: why not?
