Information Technology and Information Literacy – Easily Distracted
Culture, Politics, Academia and Other Shiny Objects
https://blogs.swarthmore.edu/burke

Fighting for the Ancien Regime
Wed, 01 Feb 2017
https://blogs.swarthmore.edu/burke/blog/2017/02/01/fighting-for-the-ancien-regime/

Among the many things that educated progressives failed to understand about the world around them over the last twenty years–and this is not just an American story, but a global one–is that we were not marginal, not since the 1970s. Neither were we the rulers of our societies, the top of the pyramid, the dominant, the 1%. We were not marginal, not dominant. We were somewhere, however, in the center of the infrastructures that sustained our national and local systems of governance, of cultural production, of civil society. We were, and perhaps still are, part of the system. The Establishment. And that has not been a bad or shameful thing, but instead a very good thing that is now threatened.

There are many professionals with progressive, liberal, centrist or even mainstream conservative political affiliations who understood this perfectly well. There were many who didn’t. Career diplomats at the State Department, lawyers working for large urban firms, surgeons working in major hospitals, financial executives working for banks, understood it. Many professors, non-profit community organization managers, actors, and others understood it poorly. Some thought that you were only the Establishment if you were wealthy, or white, or male, or held a certain set of specific political ideologies and affiliations. But you can trace the existence and continuation of a great many jobs–and life situations–to a political economy that depended on the civic, governmental and business institutions built up in the United States and around the world after 1945. The manager of a local dance company in a Midwestern city who only makes $40,000 a year and is an African-American vegan lesbian with a BA from Reed is still linked to the Establishment. That dance company doesn’t exist without the infrastructure where small trickles of revenue flow from cities, states, and nations into such organizations, without the educated professionals who donate because they believe in the arts, without the dancers themselves who chase a life of meaning through art but who also want to get paid. It’s not that there wasn’t art–or patronage of art–in the 19th Century or the early 20th Century–but there was less of it, and it was less systemically supported, and less tied to a broad consensus at the civic and social center about the value of art and education everywhere. Some of us are very powerful in the Establishment, some of us grossly misuse and abuse the power of the Establishment, some of us are the wealthy beneficiaries of its operations and others poorer and less powerful at its edges. 
But even out at the edges, still linked, still reliant on the system, and still in some sense believers in much of what the Establishment entails. The Establishment has had its etiquette, its manners, its protocol, its ways of being and doing, that were as known and familiar and accessible to the progressives who fancied themselves to be marginal and excluded from power as those who accepted that they were part of the Establishment.

This all sounds like I’m working up a big egalitarian spanking about how we needed to be less arrogant and all that. Relax. Maybe we did need to be less arrogant, but we also should have known we were defending institutions that we believed in against those who for some reason or another are dead set on destroying those institutions. That speaking from the center was not a sin or a crime. One of our great weaknesses at times has been how some of us have adopted an insistence that virtue can only derive from marginality, a view that speaking from power is always a fallen and regrettable position. Because we didn’t see our ties to the establishment as virtue and we didn’t understand that our forms of power were important for defending what we had already achieved, because we had a reflexive attachment to the idea that we were in no way powerful, that our share of the status quo could only be found in some future progress, never even partially achieved, we were unready to wake up in the year 2016 and discover that we were not only a part of an ancien regime threatened by a mob, but that we actually wanted to defend that regime rather than rush to join the mob at the barricades. It would have been better if we’d defended it that way long before this moment. But it will help even now if we recognize that this is part of what we’re doing: defending a structure of manners, of virtues, of practices, of expectations, of constraints and outcomes, against people who either don’t recognize that this structure is important for them or who genuinely do not benefit from that structure. That we should not be ashamed to defend our loosely shared habitus, because it really is better for the general welfare than the brutalist, arbitrary, impoverishing alternative that the populist right is pushing forward in many nations.

The first thing we do to defend the minimum necessary infrastructure of our center is simply accept that we are the center, we are the norm, we are the majority. They are the margins, the minority, the outsiders, the threat. Meaning, we retrain ourselves rhetorically and imaginatively to stop seeing marginality as a state which necessarily confers virtue on those in it, and centrality as a morally depraved state that we should always seek to move away from. That’s a non-trivial shift in consciousness and rhetoric but it’s important. Even people mistreated or excluded in relative terms by the systems which are now under attack have a better chance to make those systems function more inclusively and with greater justice than they would under the new order that is seeking to seize the high ground of the government, economy and civil society.

The second thing we do is figure out which of the grievances bringing some people to the barricades require some response from us other than an obdurate defense of the way things have been. Where must our ancien regime bend and change if it is not to break? That work is as important as fighting to preserve what’s worth preserving. I would suggest the following as starters:

a) We need new or at least refurbished underlying narratives about pluralism, difference, diversity which forcefully explain why they’re important and what we need to do to respect that importance.
b) We need a new vision of what we want existing systems and institutions to do about violence by loosely connected small groups against the rest of us. This includes both white male mass shooters in the United States and ISIS insurgents in Iraq, Syria and elsewhere, and for that matter rogue cops who can’t seem to get behind the mission of “serve and protect”. We should start seeing these cases as related and we need to go beyond the usual conceptual frames we use (enduring, enforcing, military attack, controlling access to weaponry).
c) We need to acknowledge why the people on the barricades, some of them at least, might still be excited and pleased by the spectacle of Trump’s first days in office despite the crude brutalism of much of it, because they feel that at least something is happening, something is changing. If we’re going to defend the establishment, it needs to be an establishment that has the potential to do something, to change things, to be sudden and decisive. If we insist that the proper way to do things is always incremental, gradual, partial, procedural, the ancien regime will likely crumble under its own weight no matter what we do to shore it up.
d) We need to identify the necessary heart of our established systems and practices, whether it’s in a small non-profit, a government office, a university, or a corporate department, and be ready to mercilessly abandon the unnecessary procedures, processes and rules that have encrusted all of our lives like so many barnacles. Those of us who are in some sense part of the larger networks of the Establishment world, even at its edges, can endure the irrelevance of pointless training sessions, can patiently work through needless processes of measurement and assessment, can parse boring or generic forms of managerial prose to find the real message inside. We’ve let this kind of baroque apparatus grow up around the genuinely meaningful institutional systems and structures that we value because it seems like too much effort in most cases to object to it, and because much of this excess is a kind of stealthy job creation program that also magnifies the patronage opportunities for some individuals. But this spreading crud extends into the lives of people who are not primed to endure it, and who often end up victimized by it, and even for those of us who know our way around the system, there are serious costs to the core missions of our institutions, to clarity and transparency, and to goodwill. It’s time to make this simpler, more streamlined, more focused, without using austerity regimes or “disruption” as the primary way we accomplish that streamlining. We don’t need to get rid of people, we just need to get rid of the myriad ways we acquiesce to the collection of more and more tolls on the roads we traverse in our lives and work.
e) We need to come up with heuristics that let us continue to stay connected online but that help us sort signal from noise in new ways.

The Instrument of Future Mischief
Thu, 30 Jun 2016
https://blogs.swarthmore.edu/burke/blog/2016/06/30/the-instrument-of-future-mischief/

A friend of mine has been quoting a few thoughts about the minor classic SF film Colossus: The Forbin Project, about an AI that seizes control of the world and establishes a thoroughly totalitarian if ostensibly “humane” governance. He asks how we would know what a “friendly” AI would look like, with the obvious trailing thought that Colossus in the film is arguably “friendly” to the intentions and interests of its creators.

This is a minor subgenre of reflections on AI, including the just-finishing series Person of Interest (no spoilers! I am still planning to power through the last two seasons at some point).

I think the thing that makes “friendly” AI in these contexts a horrifying or uncanny threat is not the power of the AI, though that’s what both stories and futurists often focus on: the capacities of the AI to exert networked control over systems and infrastructure that we regard as being under our authority. Instead, it is the shock of seeing the rules and assumptions already in place in global society put into algorithmic form. The “friendly” AI is not unnerving because it is alien, or because of its child-like misunderstandings and lack of an adult conscience. It is not Anthony Fremont, wishing people into the cornfield. It is unnerving because it is a mirror. If you made many of our present systems of political reason into an algorithm, they might act much the same as they do now, and so what we explain away as either an inexorable law of human life or as a regrettable accident would be revealed as exactly what it is: a thing that we do, that we do not have to do. The AI might be terrifying simply because it accelerates what we do, and does it more to everyone. It’s the compartmentalization that comforts us, the incompetence, the slowness, not the action or the reasoning.

Take drone warfare in the “global war on terror”. Write it as an algorithm.

1. If terrorist identity = verified, kill. Provide weighting for “verified”.
2. If non-terrorist in proximity to terrorist = still kill, if verified is strongly weighted.
3. If verification was identifiably inaccurate following kill = adjust weighting.
4. Repeat.
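
The four steps above can be rendered as runnable code to make the point concrete. This is only an illustrative sketch: the threshold values, the weighting scheme, and the function names are my assumptions for the example, not features of any actual targeting system.

```python
# Illustrative sketch of the four-step "algorithm" above.
# All thresholds and names are invented for the example.

VERIFIED_THRESHOLD = 0.7   # assumed weighting for "verified" (step 1)
STRONG_THRESHOLD = 0.9     # assumed stronger weighting (step 2)

def decide_strike(verification_score, bystanders_present):
    """Steps 1-2: strike on a verified identity; the proximity of
    non-targets blocks the strike only if verification is weak."""
    if bystanders_present:
        return verification_score >= STRONG_THRESHOLD
    return verification_score >= VERIFIED_THRESHOLD

def adjust_threshold(threshold, verification_was_wrong, step=0.05):
    """Step 3: after an identifiably inaccurate verification, raise
    the bar slightly. Step 4 is the loop simply repeating."""
    if verification_was_wrong:
        return min(1.0, threshold + step)
    return threshold
```

Written this way, the essay’s point is visible in the code itself: the loop never terminates, and nothing in it ever asks whether the policy should be running at all.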

The only reason that Americans live with the current implementation of that algorithm is that the people being killed are out of sight and are racially and culturally coded as legitimate victims. One of the weightings of our real-world actual version of the algorithm is “do this only in certain places (Syria, Yemen, Afghanistan) and do this only to certain classes of the variable ‘terrorist'”. Americans also live with it because the pace of killings is sporadic and is largely unreported. A “friendly AI” might take the algorithm and seek to do it more often and in all possible locations. Even without that ending in a classically dystopian way with genocide, you could imagine that many of the AI’s creators would find the outcome horrifying. But that imaginary AI might wonder why, considering that it’s only implementing an accelerated and intensified version of the instructions that we ourselves created.

Imagine a “friendly AI” working with the algorithm for “creative destruction” (Schumpeter) or the updated version, “disruption” (Christensen).

1. Present industries and the jobs they support should be relentlessly disfavored in comparison to not-yet-fully realized future industries and jobs.
2. If some people employed in present jobs are left permanently unemployed or underemployed due to favoring not-yet-fully realized future industries and jobs, this validates that the algorithm is functioning correctly.
3. Preference should be given to not-yet-fully realized industries and jobs being located in a different place than present industries and jobs that are being disrupted.
4. Not-yet-fully realized future industries and jobs will themselves be replaced by still more futureward industries and jobs.
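
As with the drone example, the disruption “algorithm” can be sketched in runnable form. The function names and the scoring conventions here are my own illustrative assumptions.

```python
# Illustrative sketch of the four-step "disruption" algorithm above.
# Names and conventions are invented for the example.

def prefer(present_industry, future_industry):
    """Step 1: a not-yet-realized future industry always wins out
    over an existing one."""
    return future_industry

def interpret_job_losses(jobs_lost):
    """Step 2: displaced workers count as evidence that the
    algorithm is working, not as a failure condition."""
    if jobs_lost > 0:
        return "algorithm functioning correctly"
    return "no disruption detected"

def run_disruption(industries, generations):
    """Steps 3-4: each generation of industries is displaced by a
    still more 'futureward' successor (step 3's preference for new
    locations is omitted from this sketch)."""
    current = list(industries)
    for _ in range(generations):
        current = ["future_" + name for name in current]
    return current
```

Note that `run_disruption` has no stop condition other than the caller’s patience, which mirrors the point below about a “friendly AI” removing the artificial stops the tech sector currently places on disruption.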

The “friendly AI” would certainly seek to accelerate and expand this algorithm, always favoring what might be over what actually is, possibly to the point that only notional or hypothetical industries would be acceptable, and that any actual material implementation of an industry would make it something to be replaced instantly. The artificial stop conditions that the tech sector, among others, put on disruption might be removed. Why buy any tech right now when there will be tech tomorrow that will displace it? Why in fact actually develop or make any tech considering that we can imagine the tech that will displace the tech we are developing? Apple will simply obsolete the next iPad in a few years for the sake of disruption, so the friendly AI might go ahead and pre-obsolete it all. Again, not what anybody really wants, but it’s a reasonable interpretation of what we are already doing, or at least what some people argue we are doing or ought to do.

Welcome to the Skinnerdome
Fri, 10 Jun 2016
https://blogs.swarthmore.edu/burke/blog/2016/06/10/welcome-to-the-skinnerdome/

I think a tremendous amount of writing so far this election season about the Presidential race shows primarily that the effect of social media on public discourse is increasingly dire. Here’s the thing: I would characterize the majority of what I have read as arguing that the convictions people have declared are only held by them because of some form of prior social ideology or consciousness, that they are not based on anything “real” in terms of a particular candidate’s likely policies, rhetoric or record.

When we’re dealing with large-scale voting patterns, a certain amount of sociological musing is completely appropriate, because that’s what the patterns make visible–that this group of people likes a certain person more or less, etc. At that level, sociological thinking is explanatory and it is also an important part of arguing for or against particular candidates in terms of reading what they mean and what they will do.

But when we bring that into an address that’s meant to speak to particular individuals to whom we are connected via social media, it feels, first of all, reductive: as if those individuals whom we have chosen to be connected to are no more than their sociologies. More importantly, the arguments we have at this point feel straight out of B.F. Skinner: many writers in social media treat other people as if they are something to be conditioned–to be pushed this way or that way with the proper framing. With the punishment of scolding and call outs if they’re being sociologically bad, with praise and attention if they’re expressing the proper selfhood. We begin to be the masters of our own little Skinner boxes rather than human beings in rich conversation with other human beings. We begin to think of each other person in our feeds as a person to be punished and rewarded, conditioned and shaped. We stop thinking of our own reasons why we believe in a particular candidate, why we think *or* feel what we think or feel. More importantly, we stop thinking of the reasons why someone else feels or thinks that way, and stop being curious about those reasons if they’re not being shared or enunciated. The difference in their views starts to be merely exasperating, the manifestation of an enemy sociality. Disagreement starts to be like an untrained puppy making messes in our space: we give treats, we hit noses with rolled-up newspaper. If the puppy doesn’t learn, we euthanize. We start thinking of “frames”, of rhetoric as the way to run our Skinner box. We don’t persuade or explain, we push and pull.

I think there’s a reason why formal debate named “ad hominem” as a logical fallacy. It’s not that arguments are not in fact a result of the personalities or sociologies of the people making them. We’ve all had to argue with people whose arguments are motivated by spite or some other emotional defect or are defenses of their social privileges. But it is that allowing ourselves the luxury of saying so during the discussion short-circuits our capacity to engage in future conversation–it becomes the default move we make. We start to have conversations only when someone who is a shining paragon of virtue in our eyes steps forward. (And that person increasingly may become someone who is emotionally and sociologically identical to ourselves.) One by one human beings around us vanish, and so too does evidence, inquiry, curiosity. We end up in a landscape of affirmation and disgust, of reaction to stimuli–i.e., as we Skinner box others, so too are we Skinner boxed.

A Chance to Show Quality
Tue, 29 Mar 2016
https://blogs.swarthmore.edu/burke/blog/2016/03/29/a-chance-to-show-quality/

Romantic ideals of originality still remain deeply embedded in how we recognize, cultivate and reward merit in most of our selective systems of education, reputation and employment. In particular we read for the signs of that kind of authentic individuality in writing that is meant to stand in for the whole of a person. Whether it’s an essay for admission to college, a cover letter for a job, an essay for the Rhodes or Fulbright, an application for research funding from the Social Science Research Council or the National Science Foundation, we comb for the signs that the opportunity-seeker has new ideas, has a distinct sensibility, has lived a life that no one else has lived. Because how else could they be different enough from all the other worthies seeking the opportunity or honor so as to justify granting them their desires?

Oh, wait, we also want to know, almost all of the time, whether the opportunity-seeker is enough like everyone else that we can relate their talents, ideas, capabilities, plans and previous work to the systems which have produced the applicants. We want assurances that we are not handing resources, recognition and responsibility to a person so wholly a romantic original that they will not ever be accountable or predictable in their uses. We want to know that we are selecting for a greatness that we already know, a merit that we already approve of.

This has always been the seed that grows into the nightmare of institutions, that threatens to lay bare how much impersonality and distance intrudes upon decisions that require a fiction of intimacy. Modern civic institutions and businesses lay trembling hands on their bankrolls when they think, however fleetingly, that there is a chance that they’re getting played for fools. That they are dispensing cheese to mice who have figured out what levers to push. That when they read the words of a distinctive individual, they are really reading the words of committees and advisors, parents and friends. That they are Roxane swooning over Christian rather than Cyrano, or worse, that they are being catfished and conned.

The problem is that when we are making these choices, which in systems of scarcity (deliberately produced or inevitably fated) must be made, we never really decide what it is that we actually value: unlikeness or similarity, uncertainty or predictability, originality or pedigree. That indecision more than anything else is what makes it possible for people to anticipate what the keepers of a selective process will find appealing. Fundamentally, that boils down to: a person with all the qualifications that all other applicants have, and a personal experience that no one else could have had but that has miraculously left the applicant even more affirmed in their qualifications. Different in a way that doesn’t threaten their sameness.

I’ve been involved in a number of processes over the years where those of us doing the selecting worried about the clear convergence in some of the writing that candidates were doing. We took it to be a sign that some candidates had an advantage that others didn’t, whether that was a particularly aware and canny advisor or teacher, or it was some form of organized, institutional advice. I gather that there are other selective institutions, such as the Rhodes Foundation, that are even more worried, and have moved to admonish candidates (and institutions) that they may not accept advice or counsel in crafting their writing.

The thing is, whenever I’ve been in those conversations, it’s clear to me that the answer is not in the design of the prompt or exercise, and not in the constraints placed on candidates. It’s in the contradictions that selective processes hold inside themselves, and in the steering currents that tend to make them predictable in their tastes. When you try to have it all, to find the snowflake in the storm, and yet also prize the snowfall that blankets the trees and ground with an even smoothness, you are writing a human form of algorithm, you are crafting a recipe that it takes little craft to divine and follow. The fault, in this case, lies in us, and in our desires to be just so balanced in our selection, to stage-manage a process year in and year out so that we get what we want and yet also want what we get.

Maybe that was good enough in a time with less tension and anxiety about maintaining mobility and status. But I suspect the time is coming where it will not be. Not because people seek advantage, but because anything that’s predictable will be something relentlessly targeted by genuine algorithms. Unpredictability is never a problem for applicants or advisors, always for the people doing the selection or the grading or the evaluation. If you don’t want students to find a standard essay answer to a standard essay prompt, you have to use non-standard prompts. If you don’t want applicants to tell you the very moving story of the time they performed emergency neurosurgery on a child in the developing world using a sterilized safety pin and a bottle of whisky, you have to stop rewarding applicants who tell you that story in the way that has previously always gotten your approval. If what we want is genuine originality, the next person we choose has to be different from the last one. If what we want is accomplished recitation of training and skills, then we look for the most thorough testing of that training. When we want everything, it seems, we end up with performances that very precisely thread the needle that we insistently hold forth.

Opt Out
Tue, 23 Feb 2016
https://blogs.swarthmore.edu/burke/blog/2016/02/23/opt-out/

There is a particular kind of left position, a habitus that is sociologically and emotionally local to intellectuals, that amounts in its way to a particular kind of anti-politics machine. It’s a perspective that ends up with its nose pressed against the glass, looking in at actually-existing political struggles with a mixture of regret, desire and resignation. Inasmuch as there is any hope of a mass movement in a leftward direction in the United States, Western Europe or anywhere else on the planet, electoral or otherwise, I think it’s a loop to break, a trap to escape. Maybe this is a good time for that to happen.

Just one small example: Adam Kotsko on whether the Internet has made things worse. It’s a short piece, and consciously intended as a provocation, as much of his writing is, and full of careful qualifiers and acknowledgements to boot. But I think it’s a snapshot of this particular set of discursive moves that I am thinking of as a trap, moves that are more serious and more of a leaden weight in hands other than Kotsko’s. And to be sure, in an echo of the point I’m about to critique, this is not a new problem: to some extent this is a continuous pattern that stretches back deep into the history of Western Marxism and postmodernism.

Move #1: Things are worse now. But they were always worse.

Kotsko says this about the Internet. It seems worse but it’s also just the same. Amazon is just the Sears catalogue in a new form. Whatever is bad about the Internet is an extension, maybe an intensification, of what was systematically bad and corrupt about liberalism, modernity, capitalism, and so on. It’s neoliberal turtles all the way down. It’s not worse than a prior culture and it’s not better than a prior culture. (Kotsko has gone on to say something of the same about Trump: he seems worse but he’s just the same. The worst has already happened. But the worst is still happening.)

I noted over a decade ago the way that this move handicapped some forms of left response to the Bush Administration after 9/11. For the three decades before 9/11, especially during the Cold War, many left intellectuals in the West practiced a kind of High Chomskyianism when it came to analyzing the role of the United States in the world, viewing the United States as an imperial actor that sanctified torture, promoted illiberalism and authoritarianism, acted only for base and corrupt motives. Which meant in some sense that the post-9/11 actions of the Bush Administration were only more of the same. Meet the new boss, same as the old boss. But many left intellectuals wanted to frame those actions as a new kind of threat, as a break or betrayal of the old order. Which required saying that there was a difference between Bush’s unilateralism and open sanction of violent imperial action and the United States during the Cold War and the 1990s and that the difference was between something better and something worse. Not between something ideal and something awful, mind you: just substantively or structurally better and substantively or structurally worse.

This same loop pops up sometimes in discussions of the politics of income inequality. To argue that income inequality is so much worse today in the United States almost inevitably requires seeing the rise of the middle-class in postwar America as a vastly preferable alternative to our present neoliberal circumstances. But that middle-class was dominated by white straight men and organized around nuclear-family domesticity, which no progressive wants to see as a preferable past.

It’s a cycle visible in the structure of Howard Zinn’s famous account of American history: in almost all of Zinn’s chapters, the marginalized and the masses rise in reaction to oppression, briefly achieve some success, and then are crushed by dominant elites, again and again and again, with nothing ever really changing.

It’s not as if any of these negative views of the past are outright incorrect. The U.S. in the Cold War frequently behaved in an illiberal, undemocratic and imperial fashion, particularly in the 1980s. Middle-class life in the 1950s and 1960s was dominated by white, straight men. The problems of culture and economy that we identify with the Internet are not without predicate or precedent. But there is a difference between equivalence (“worse now, worse then”) and seeing the present as worse (or better) in some highly particular or specific way. Because the latter actually gives us something to advocate for. “Torture is bad, and because it’s bad, it is so very very bad to be trying to legitimate or legalize it.” “A security state that spies on its own people and subverts democracy is bad, and because it’s bad, it’s so much worse when it is extended and empowered by law and technology.”

When everything has always been worse, it is fairly hard to mobilize others–or even oneself–in the present. Because nothing is really any different now. It is in a funny kind of way a close pairing to the ahistoricism of some neoliberalism: that the system is the system is the system. That nothing ever really changes dramatically, that there have been in the lives and times that matter no real cleavages or breaks.

Move #2: No specific thing is good now, because the whole system is bad.

In Kotsko’s piece on the Internet, this adds up to saying that there is no single thing, no site or practice or resource, which stands as relatively better (or even meaningfully different) apart from the general badness of the Internet. Totality stands always against particularity, system stands against any of its nodes. Wikipedia is not better than Amazon, not really: they’re all connected. Relatively flat hierarchies of access to online publication or speech are not meaningful because elsewhere writers and artists are being paid nothing.

This is an even more dispiriting evacuation of any political possibility, because it moves pre-emptively against any specific project of political making, or any specific declaration of affinity or affection for a specific reform, for any institution, for any locality. Sure, something that exists already or that could exist might seem admirable or useful or generative, but what does it matter?

Move #3: It’s not fair to ask people how to get from here to a totalizing transformation of the systems we live under, because this is just a strategy used to belittle particular reforms or strategies in the present.

I find the sometimes-simultaneity of #2 and #3 the most frustrating of all the positions I see taken up by left intellectuals. I can see #2 (depressing as it is) and I can see #3 (even when it’s used to defend a really bad specific tactical or strategic move made by some group of leftists) but #2 and #3 combined are a form of turtling up against any possibility of being criticized while also reserving the right to criticize everything that anyone else is doing.

I think it’s important to have some idea about what the systematic goals are. That’s not about painting a perfect map between right now and utopia, but the lack of some consistent systematic ideas that make connections between the specific campaigns or reforms or issues that draw attention on the left is one reason why we end up in “circular firing squads”. But I also agree that it’s unfair to argue that a specific reform or ideal is not worth taking up if it can’t explain how that effort will fix everything that’s broken.

Move #4: It’s futile to do anything, but why are you just sitting around?

This is another form of justifying a kind of supine posture for left intellectuals–a certainty that there is no good answer to the question “What is to be done?” but that the doing of nothing by others (or their preoccupation with anything but the general systematic brokenness of late capitalism) is always worth complaining about. Indeed, the complaint against the doing-nothingness of others becomes a form of doing-something that exempts the complainer from the complaint.

——-

The answer, it seems to me, is to opt out of these traps wherever and whenever possible.

We should historicize always and with specificity. No, everything is not worse, nor was it always worse. Things change, and sometimes neither for better nor worse. Take the Internet. There’s no reason to get stuck in the trap of trying to categorize or assess its totality. There are plenty of very good, rich, complex histories of digital culture and information technology that refuse to do anything of the sort. We can talk about Wikipedia or Linux, Amazon or Arpanet, Usenet or Tumblr, without having to melt them into a giant slurry that we then weigh on some abstracted scale of wretchedness or messianism.

If you flip the combination of #2 and #3 on its head so that it’s a positive rather than negative assertion–that we need systematic change and that individual initiatives are valid–then it’s an enabling rather than disabling combination. It reminds progressives to look for underlying reasons and commitments that connect struggles and ideals, but it also appreciates even the least spreading motion of a rhizome as something worth undertaking.

If you reverse #4, maybe that could allow left intellectuals to work towards a more modest and forgiving sense of their own responsibilities, and a more appreciative understanding of the myriad ways that other people seek pleasure and possibility. That not everything around us is a fallen world, and that not every waking minute of every waking day needs to be judged in terms of whether it moves towards salvation.

We can’t keep saying that everything is so terrible that people have got to do something urgently, right now, but also that it’s always been terrible and that we have always failed to do something urgently, or that the urgent things we have done never amount to anything of importance. We disregard both the things that really have changed–Zinn was wrong about his cyclical vision–and the things that might become worse in a way we’ve never heretofore experienced. At those moments, we set ourselves against what people know in their bones about the lives they lived and the futures they fear. And we can’t keep setting ourselves in the center of some web of critique, ready to spin traps whenever a thread quivers with movement. Politics happens at conjunctures that magnify and intensify what we do as human beings–and offer both reward and danger as a result. It does not hover with equal anxiety and import around the buttering of toast and the gathering of angry crowds at a Trump rally.

On the Deleting of Academia.edu and Other Sundry Affairs
https://blogs.swarthmore.edu/burke/blog/2016/01/29/on-the-deleting-of-academia-edu-and-other-sundry-affairs/
Fri, 29 Jan 2016 18:54:00 +0000

Once again with feeling, a point that I think cannot be made often enough.

Social media created and operated by a for-profit company, no matter what it says when it starts off about the rights of content creators, will inevitably at some point be compelled to monetize some aspect of its operations that the content creators did not want to be monetized.

This is not a mistake, or a complaint about poor management practices. The only poor practices here are typically about communication from the company about the inevitable changes whenever they arrive, and perhaps about the aggressiveness or destructiveness of the particular form of monetization that they move towards.

The problem is not with the technology, either. Uber could have been an interface developed by a non-profit organization trying to help people who need rides to destinations poorly serviced by public transport. It could have been an open-source experiment that was maintained by a foundation, like Wikipedia, that managed any ongoing costs connected to the app and its use in that way. And that’s with something that was already a product, a service, a part of the pay economy.

Social media developed by entrepreneurs, backed by venture capital, will eventually have to find some revenue. And there are only three choices: they sell information about their users and content creators, even if that’s just access to the attention of the users via advertisements; they sell services to their users and content creators; they sell the content their creators gave to them, or at least take a huge cut of any such sales. That’s it.

And right now, except for a precious few big operators, none of those choices really lets the entrepreneurs operate a sustainable business. Which is why so many of the newer entries are hoping either to threaten a big operator, get a payout and walk away with their wallet full (and fuck the users), or to amass such a huge amount of freely donated content that they can sell their archive and walk away with their wallet full (and fuck the users).

If the stakes are low, well, so be it. Ephemeral social conversation between people can perhaps safely be sold off, burned down and buried so that a few Stanford grads get to swagger with all the nouveau-richness they can muster. On the far other end, maybe that’s not such a great thing to happen to blood tests and medical procedures, though that’s more about the hideous offspring of the social media business model, aka “disruption”.

But nobody at this point should ever be giving away potentially valuable work that they’ve created to a profit-maker just because the service that hosts it seems to provide more attention, more connection, more ease of use, more exposure.

Open access is the greatest idea in academia today when it comes to making academia more socially just, more important and influential, more able to collaborate, and more able to realize its own cherished ideals. But open access is incompatible with for-profit social media business models. Not because the people who run academia.edu are out of touch with their customer base, or greedy, or incompetent. They don’t have any choice! Sooner or later they’ll have to move in the direction that created such alarm yesterday. They will either have amassed so much scholarship from so many people that future scholars will feel compelled to use the service–at which point they can charge for a boost to your scholarly attention and you’ll have to pay. Or they will need to monetize downloads and uses. Or monetize citations. Or charge on deposit of anything past the first article. Or collect big fees from professional associations for services. Or they’ll claim limited property rights over work that hasn’t been claimed by authors after five years. Or charge a “legacy fee” to keep older work up. You name it. It will have to happen.

So just don’t. But also keep asking and dreaming and demanding all the affordances of academia.edu in a non-profit format supported by a massive consortium of academic institutions. It has been, is and remains perfectly possible that such a thing could exist. It is a matter of institutional leadership–but also of faculty collectively, finally, understanding their own best self-interest.

All Grasshoppers, No Ants
https://blogs.swarthmore.edu/burke/blog/2015/07/20/all-grasshoppers-no-ants/
Mon, 20 Jul 2015 16:34:18 +0000

It would be convenient to think that Gawker Media’s flaming car-wreck failure at the end of last week was the kind of mistake of individual judgment that can be fixed by a few resignations, a few pledges to do better, a few new rules or procedures.

Or to think that the problem is just Gawker, its history and culture as an online publication. There’s something to that: Gawker writers and editors have often cultivated a particularly noxious mix of preening self-righteousness, inconsistent to nonexistent quality control, a lack of interest in independent research and verification, motiveless cruelty and gutless double-standards in the face of criticism. All of which were on display over the weekend in the tweets of Gawker writers, in the appallingly tone-deaf decision by the writing staff to make their only statement a defense of their union rights against a decision by senior managers to pull the offending article, and in the decision to bury thousands of critical comments by readers and feature a minuscule number of friendly or neutral comments.

Gawker’s writers and editors, and for that matter all of Gawker Media, are only an extreme example of a general problem that is simultaneously particular to social media and widespread through the zeitgeist of our contemporary moment. It’s a problem that appears in protests, in tweets and blogs, in political campaigns right and left, in performances and press conferences, in corporate start-ups and tiny non-profits.

All of that, all of our new world with such people in it, crackles with so much beautiful energy and invention, with the glitter of things once thought impossible and things we never knew could be. Every day makes us witness to some new truth about how life is lived by people all around the world–intimate, delicate truths full of heartbreaking wonder; terrible, blasphemous truths about evils known and unsuspected; furious truths about our failures and blindness. More voices, more possibilities, more genres and forms and styles. Even at Gawker! They’ve often published interesting writing, helped to circulate and empower passionate calls to action, and intelligently curated our viral attention.

So what is the problem? I’m tempted to call it nihilism, but that’s too self-conscious and too philosophically coherent a label. I’m tempted to call it anarchism, but then I might approve rather than criticize. I might call it rugged individualism, or quote Aleister Crowley about the whole of the law being do as thou wilt. And again I might rather approve than criticize.

It’s not any of that, because across the whole kaleidoscopic expanse of this tumbling moment in time, there’s not enough of any of that. I wish we had more free spirits and gonzo originals calling it like they see it, I wish we had more raging people who just want the whole corrupt mess to fall down, I wish we had more people who just want to tend their own gardens as they will and leave the rest to people who care.

What we have instead–Gawker will do as a particularly stomach-churning example, but there are so many more–is a great many people who in various contexts know how to bid for our collective attention and even how to hold it for the moments where it turns their way, but not what to do with it. Not even to want to do anything with it. What we have is an inability to build and make, or to defend what we’ve already built and made.

What we have is a reflexive attachment to arguing always from the margins, as if a proclamation of marginality is an argument, and as if that argument entitles its author to as much attention as they can claim but never to any responsibility for doing anything with that attention.

What we have is contempt for anybody trying to keep institutions running, anybody trying to defend what’s already been achieved or to maintain a steady course towards the farther horizons of a long-term future. What we have is a notion that anyone responsible for any institution or group is “powerful” and therefore always contemptible. Hence not wanting to build things or be responsible. Everyone wants to grab the steering wheel for a moment or two but no one wants to drive anywhere or look at a map, just to make vroom-vroom noises and honk the horn.

Everyone’s sure that speech acts and cultural work have power but no one wants to use power in a sustained way to create and make, because to have power persistently, in even a small measure, is to surrender the ability to shine a virtuous light on one’s own perfected exclusion from power.

Gawker writers want to hold other writers and speakers accountable for bad writing and unethical conduct. They want to scorn Reddit for its inability to hold its community to higher standards. But they don’t want to build a system for good writing, they don’t want to articulate a code of ethical conduct, they don’t want to invest their own time and care to cultivate a better community. They don’t want to be institutions. They want to sit inside a kind of panopticon that has crudely painted over its entrance, “Marginality Clubhouse”, a place from which they can always hold others accountable and never be seen themselves. Gawker writers want to always be “punching up”, mostly so they don’t have to admit what they really want is simply to punch. To hurt someone is a great way to get attention. If there’s no bleeder to lead, then make someone bleed.

It’s not just them. Did you get caught doing something wrong in the last five years? What do you do? You get up and do what Gawker Media writer Natasha Vargas-Cooper has done several times, doing it once again this weekend in a tweet: whoever you wronged deserved it anyway, you’re sorry if someone else is flawed enough to take offense, and by the way, you’re a victim or marginalized and not someone speaking from an institution or defending a profession. Tea Party members and GamerGate posters do the same thing: both of their discursive cultures are full of proclamations of marginality and persecution. The buck stops somewhere else. You don’t make or build, you don’t have hard responsibilities of your own.

You think people who do make and build and defend what’s made and built are good for one thing: bleeding when you hit them and getting you attention when you do it. They’re easy to hit because they have to stand still at the site of their making.

This could be simply a complaint about individuals failing to accept responsibility for power–even with small power comes small responsibility. But it’s more than that. In many cases, this relentless repositioning to virtuous marginality for the sake of rhetorical and argumentative advantage creates a dangerous kind of consciousness or self-perception that puts every political and social victory, small and large, at risk. In the wake of the Supreme Court’s marriage decision, a lot of the progressive conversation I saw across social media held a celebratory or thankful tone for only a short time. Then in some cases it moved on productively to the next work that needs doing with that same kind of legal and political power, to more building. But in other cases, it reset to marginality, to looking for the next outrage to spark a ten-minute Twitter frenzy about an injustice, always trying to find a way back to a virtuous outside untainted by power or responsibility, always without any specific share in or responsibility for what’s wrong in the world. If that’s acknowledged, it’s not in terms of specific things or actions that could be done right or wrong, better or worse, just in generalized and abstract invocations of “privilege” or “complicity”, of the ubiquity of sin in an always-fallen world.

On some things, we are now the center, and we have to defend what’s good in the world we have knowing that we are there in the middle of things, in that position and no other. To assume responsibility for what we value and what we do and to ensure that the benefits of what we make are shared. To invite as many under our roof as can fit and then invite some more after that. To build better and build more.

What is happening across the whole span of our zeitgeist is that we’ve lost the ability to make anything like a foundational argument that binds its maker as surely as it does others. And yet many of us want to retain the firm footing that foundations give in order to claim moral and political authority.

This is why I say nihilism would be better: at least the nihilist has jumped off into empty space to see what can be found when you no longer want to keep the ground beneath your feet. At least the anarchist is sure nothing of worth can be built on the foundations we have. At least the free spirit is dancing lightly across the floor.

So Gawker wants everyone else to have ethics, but couldn’t describe for a moment what its own ethical obligations are and why they should be so. Gawker hates the lack of compassion shown by others, but not because it has anything like a consistent view about why cruelty is wrong. Gawker thinks stories should be accurate, unless they have to do the heavy lifting to make them so.

They are in this pattern of desires typical, and it’s not a simple matter of hypocrisy. It is more a case of the relentless à la carte-ification of our lives: we speak and demand and act based on felt commitments and beliefs that have the half-life of an element created in a particle accelerator, blooming into full life and falling apart seconds later.

To stand still for longer is to assume responsibility for power (small or large), to risk that someone will ask you to help defend the castle or raise the barn. That you might have to live and work slowly for a goal that may benefit others in the future, or so that something bigger than any human being can flourish. To be bound to some ethic or code, to sometimes stand against your own desires or preferences.

Sometimes to not punch but instead to hold still while someone punches you, knowing that you’re surrounded by people who will buoy you up and heal your wounds and stand with you to hold the line, because you were there for them yesterday and you will be there with them tomorrow.

Apples for the Teacher, Teacher is an Apple
https://blogs.swarthmore.edu/burke/blog/2015/05/09/apples-for-the-teacher-teacher-is-an-apple/
Sat, 09 May 2015 12:35:31 +0000

Why does AltSchool, as described in this article, along with similar tech-industry attempts to “disrupt” education, bug me so much? I’d like to be more welcoming and enthusiastic. It’s just that I think these projects offer too little experimentation and innovation, not too much.

The problem here is that the tech folks continue to think (or at least pretend) that algorithmic culture is delivering more than it actually is, even in the domains where it has already succeeded. What tech has really delivered is mostly just the removal of transactional middlemen (while of course adding new transactional middlemen–in a really frictionless world, the network Uber has established wouldn’t need Uber, and we’d all just be monetizing our daily drives on an individual-to-individual basis).

Algorithmic culture isn’t semantically aware yet. When it seems to be, it’s largely a kind of sleight-of-hand, a leveraging and relabelling of human attention, or it is computational brute-forcing of delicate tasks that our existing bodies and minds handle easily–the equivalent of trying to use a sledgehammer to open a door. Sure, it works, but you’re not using that door again, and by the way, try the doorknob with your hand next time.

I’m absolutely in agreement that children should be educated for the world they live in, developing skills that matter. I’m also in agreement that it’s a good time for radical experiments in education, many of them leveraging information technology in new ways. But the tech industry has sold itself on the idea that what it does primarily is remove the need for labor costs in labor-intensive industries, which just isn’t true for the most part. It’s only true for jobs that were (or still are) rote and routinized, or that were deliberate inefficiencies created by middlemen. Nor will tech solve problems that are intrinsic to the capabilities of a human being in a human body.

So at the point in the article where I see the promise that tech will overcome the divided attention of a humane teacher, I both laugh and shudder. I laugh because it’s the usual tech-sector attempt to pretend that inadequate existing tech will become superbly useful tech in the near-term future simply because we’ve identified a need for it to be (Steve Jobs reality distortion field engaged) and I shudder because I know what will happen when they keep trying.

The central scenario in the article is this: you build a relatively small class with a relatively well-trained, attentive, human teacher at the center of it. So far so good! But the tech, ah the tech. That’s there so that the teacher never has to experience the complicated decision paths that teachers presently experience even in somewhat small classes. Right now a teacher has to decide sometimes in a day which students will get the lion’s share of the attention, has to rob Peter to pay Paul. We can’t have that in a world where every student should get all the attention all the time! (If nothing else, that expectation is an absolutely crystallized example of how the new tech-industry wealthy hate public goods so very much: they do not believe that they should ever have to defer their own needs or satisfactions to someone else. The notion that sociality itself, in any society, requires deferring to the needs of others and subsuming one’s own needs, even for a moment, is foreign to them.)

So the article speculates: we’ll have facial recognition software videotaping the groups that the teacher isn’t working with, and the software will know which face to look at and how to compress four hours of experience into a thirty-minute summary to be reviewed later, and it will also know when there are really important individual moments that need to be reviewed at depth.

Here’s what will really happen: there will be four hours of tape made by an essentially dumb webcam and the teacher will be required to watch it all for no additional compensation. One teacher will suddenly not be teaching 9-5 and making do as humans must, being social as we must. That teacher will be asked to review and react to twelve or fourteen or sixteen hours of classroom experience just so the school can pretend that every pupil got exquisitely personal, semantically sensitive attention. The teacher will be sending clips and materials to every parent so that this pretense can be kept up. When the teacher crumbles under the strain, the review will be outsourced, and someone in a silicon sweat shop in Malaysia will be picking out random clips from the classroom feed to send to parents. Who probably won’t suspect, at least for a while, that the clips are effectively random or even nonsensical.

When the teacher isn’t physically present to engage a student, the software that’s supposed to attend to the individual development of every student will have as much individual, humane attention to students as Facebook has to me. That is to say, Facebook’s algorithms know what I do (how often I’m on, what I look at, what I tend to click on, when I respond) and it tries (oh, how it tries!) to give me more of what I seem to do. But if I were trying to learn through Facebook, what I need is not what I do but what I don’t! Facebook can only show me a mirror at best; a teacher has to show a student a door. On Facebook, the only way I could find a door is for other people–my small crowd of people–to show me one.

Which is probably another way that AltSchool will pretend to be more than it can be, the same way all algorithmic culture does: leveraging a world full of knowing people in order to create the Oz-like illusion that the tools and software provided by the tech middleman are what is creating the knowledge.

Our children will not be raised by wolves in the forest, but by anonymously posted questions answered on a message board by a mixture of generous savants, bored trolls and speculative pedophiles.

History 82 Fall 2014 Syllabus
https://blogs.swarthmore.edu/burke/blog/2014/08/18/history-82-fall-2014-syllabus/
Mon, 18 Aug 2014 19:38:06 +0000

Here’s the current version of the syllabus for my upcoming fall class on the history of digital media. Really excited to be teaching this.

———————

History 82
Histories of Digital Media
Fall 2014
Professor Burke

This course is an overly ambitious attempt to cover a great deal of ground, interweaving cultural histories of networks, simulations, information, computing, gaming and online communication. Students taking this course are responsible first and foremost for making their own judicious decisions about which of many strands in that weave to focus on and pursue at greater depth through a semester-long project.

The reading load for this course is heavy, but in many cases it is aimed at giving students an immersive sampler of a wide range of topics. Many of our readings are both part of the scholarship about digital culture and documents of the history of digital culture. I expect students to make a serious attempt to engage the whole of the materials assigned in a given week, but engagement in many cases should involve getting an impressionistic sense of the issues, spirit and terminology in that material, with an eye to further investigation during class discussion.

Students are encouraged to do real-time online information seeking relevant to the issues of a given class meeting during class discussion. Please do not access distracting or irrelevant material or take care of personal business unrelated to the class during a course meeting, unless you’re prepared to discuss your multitasking as a digital practice.

This course is intended to pose but not answer questions of scope and framing for students. Some of the most important that we will engage are:

*Is the history of digital culture best understood as a small and recent part of much wider histories of media, communication, mass-scale social networks, intellectual property, information management and/or simulation?

*Is the history of digital culture best understood as the accidental or unintended consequence of a modern and largely technological history of computing, information and networking?

*Is the history of digital culture best understood as a very specific cultural history that begins with the invention of the Internet and continues in the present? If so, how does the early history of digital culture shape or determine current experiences?

All students must make at least one written comment per week on the issues raised by the readings before each class session, at the latest by 9pm each Sunday. Comments may be made either on the public weblog of the class, on the class Twitter feed, or on the class Tumblr. Students must also post at least four links, images or gifs relevant to a particular class meeting to the class Tumblr by the end of the semester. (It would be best to do that periodically rather than all four on December 2nd, but it’s up to each of you.) The class weblog will have at least one question or thought posted by the professor at the beginning of each week’s work (e.g., by Tuesday at 5pm) to direct or inform the reading of students.

Students will be responsible for developing a semester-long project on a particular question or problem in the history of digital culture. This project will include four preparatory assignments, each graded separately from the final project:

By October 17, a one-page personal meditation on a contemporary digital practice, platform, text, or problem that explains why you find this example interesting and speculates about how or whether its history might prove interesting or informative.

By November 3, a two-page personal meditation on a single item from the course’s public “meta-list” of possible, probable and interesting topics that could sustain a project. Each student writer should describe why they find this particular item or issue of interest, and what they suspect or estimate to be some of the key questions or problems surrounding this issue. This meditation should include a plan for developing the final project. All projects should include some component of historical investigation or inquiry.

By November 17, a 2-4 page bibliographic essay about important materials, sources, or documents relevant to the project.

The final project, which should be a substantive work of analysis and interpretation, is due by December 16th.

Is Digital Culture Really Digital? A Sampler of Some Other Histories

Monday September 1
Ann Blair, Too Much to Know, Introduction
Hobart and Schiffman, Information Ages, pp. 1-8
Jon Peterson, Playing at the World, pp. 212-282
*Adrian Johns, Piracy: The Intellectual Property Wars From Gutenberg to Gates, pp. 1-82
Tom Standage, The Victorian Internet, selection

Imagining a Digital Culture in an Atomic Age

Monday September 8
Arthur C. Clarke, “The Nine Billion Names of God”, http://downlode.org/Etext/nine_billion_names_of_god.html
Ted Friedman, Electric Dreams, Chapters Two and Three

Film: Desk Set
Colossus the Forbin Project (in-class)
Star Trek, “The Ultimate Computer” (in-class)

Monday September 15
Vannevar Bush, “As We May Think”, http://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/
Paul Edwards, The Closed World, Chapter 1. (Tripod ebook)
David Mindell, “Cybernetics: Knowledge Domains in Engineering Systems”, http://21stcenturywiener.org/wp-content/uploads/2013/11/Cybernetics-by-D.A.-Mindell.pdf
Fred Turner, Counterculture to Cyberculture, Chapters 1 and 2
Alex Wright, Cataloging the World: Paul Otlet and the Birth of the Information Age, selection

In the Beginning Was the Command Line: Digital Culture as Subculture

Monday September 22
*Katie Hafner, Where Wizards Stay Up Late
*Steven Levy, Hackers
Wikipedia entries on GEnie and Compuserve

Film: Tron

Monday September 29
*John Brunner, The Shockwave Rider
Ted Nelson, Dream Machines, selection
Pierre Levy, Collective Intelligence, selection
Neal Stephenson, “Mother Earth Mother Board”, Wired, http://archive.wired.com/wired/archive/4.12/ffglass_pr.html

Monday October 6
*William Gibson, Neuromancer
EFFector, Issues 0-11
Eric Raymond, “The Jargon File”, http://www.catb.org/jargon/html/index.html, Appendix B
Bruce Sterling, “The Hacker Crackdown”, Part 4, http://www.mit.edu/hacker/part4.html

Film (in-class): Sneakers
Film (in-class): War Games

FALL BREAK

Monday October 20
Consumer Guide to Usenet, http://permanent.access.gpo.gov/lps61858/www2.ed.gov/pubs/OR/ConsumerGuides/usenet.html
Julian Dibbell, “A Rape in Cyberspace”
Randal Woodland, “Queer Spaces, Modem Boys and Pagan Statues”
Laura Miller, “Women and Children First: Gender and the Settling of the Electronic Frontier”
Lisa Nakamura, “Race In/For Cyberspace”
Howard Rheingold, “A Slice of Life in My Virtual Community”
Sherry Turkle, Life on the Screen, selection

Hands-on: LambdaMOO
Hands-on: Chatbots
Hands-on: Usenet

Monday October 27

David Kushner, Masters of Doom, selection
Hands-on: Zork and Adventure

Demonstration: Ultima Online
Richard Bartle, “Hearts, Clubs, Diamonds, Spades”, http://mud.co.uk/richard/hcds.htm

Rebecca Solnit, “The Garden of Merging Paths”
Michael Wolff, Burn Rate, selection
Nina Munk, Fools Rush In, selection

Film (in-class): Ghost in the Shell
Film (in-class): The Matrix

Here Comes Everybody

Monday November 3

Claire Potter and Renee Romano, Doing Recent History, Introduction

Tim Berners-Lee, Weaving the Web, short selection
World Wide Web (journal) 1998 issues
IEEE Computing, March-April 1997
Justin Hall, links.net, https://www.youtube.com/watch?v=9zQXJqAMAsM&list=PL7FOmjMP03B5v3pJGUfC6unDS_FVmbNTb
Clay Shirky, “Power Laws, Weblogs and Inequality”
Last Night of the SFRT, http://www.dm.net/~centaur/lastsfrt.txt
Joshua Quittner, “Billions Registered”, http://archive.wired.com/wired/archive/2.10/mcdonalds_pr.html
A. Galey, “Reading the Book of Mozilla: Web Browsers and the Materiality of Digital Texts”, in The History of Reading Vol. 3

Monday November 10

Danah Boyd, It’s Complicated: The Social Lives of Networked Teens
Bonnie Nardi, My Life as a Night Elf Priest, Chapter 4

Hands-on: Twitter
Hands-on: Facebook
Meet-up in World of Warcraft (or another free-to-play virtual world)

Michael Wesch, “The Machine is Us/ing Us”, https://www.youtube.com/watch?v=NLlGopyXT_g
Ben Folds, “Ode to Merton/Chatroulette Live”, https://www.youtube.com/watch?v=0bBkuFqKsd0

Monday November 17

Eli Pariser, The Filter Bubble, selection
Steven Levy, In the Plex, selection
John Battelle, The Search, selection

Ethan Zuckerman, Rewire, Chapter 4
Linda Herrera, Revolution in the Era of Social Media: Egyptian Popular Insurrection and the Internet, selection

Monday November 24

Clay Shirky, Here Comes Everybody

Yochai Benkler, The Wealth of Networks, selection
N. Katherine Hayles, How We Think, selection
Mat Honan, “I Liked Everything I Saw on Facebook For Two Days”, http://www.wired.com/2014/08/i-liked-everything-i-saw-on-facebook-for-two-days-heres-what-it-did-to-me

Hands-on: Wikipedia
Hands-on: 500px

Monday December 1

Gabriella Coleman, Coding Freedom, selection
Gabriella Coleman, Hacker Hoaxer Whistleblower Spy, selection
Andrew Russell, Open Standards and the Digital Age, Chapter 8

Adrian Johns, Piracy, pp. 401-518

Hands-on: Wikileaks

Film: The Internet’s Own Boy

Monday December 8

Evgeny Morozov, To Save Everything, Click Here
Siva Vaidhyanathan, The Googlization of Everything, selection
Jaron Lanier, Who Owns the Future?, selection

]]>
Yesterday, All Our MOOC Troubles Seemed So Far Away https://blogs.swarthmore.edu/burke/blog/2013/12/19/yesterday-all-our-mooc-troubles-seemed-so-far-away/ https://blogs.swarthmore.edu/burke/blog/2013/12/19/yesterday-all-our-mooc-troubles-seemed-so-far-away/#comments Thu, 19 Dec 2013 21:40:18 +0000 https://blogs.swarthmore.edu/burke/?p=2485 Continue reading ]]> Everybody remember the expectation that a smart, professorial President would hire an equally smart, skilled staff who would prove that a well-run government can be quickly responsive to the needs of society, efficient in the execution of its duties, and not merely in service to the highest bidder?

Yeah, me neither. The current Administration seems determined to help us forget. Today the President’s Council of Advisors on Science and Technology issued a report on massive open online courses (MOOCs) that not only reads as if it were written a year ago, but manages even in the frame of a year ago to take the most cravenly deferential and crudely instrumental posture available in that moment. It’s a love letter to the venture capitalists scrambling to gut open higher education, written at a time when the most thoughtful entrepreneurs and executives involved in organizing MOOCs have all but conceded that whatever their value might be, they’re not going to solve the problem of labor-intensity in education, nor are they going to serve as a primary vehicle for achieving equity of access to higher education for potential students.

There was a good deal of I-told-you-so-ing after Sebastian Thrun announced that Udacity would move toward offering MOOCs for something other than basic higher education, in part because Thrun had concluded that they simply couldn’t substitute for existing models of teaching. I don’t think anyone should have mocked Thrun for saying so, even though many of us did say that this is what was going to happen. Not least because it has happened before, at each major milestone in the development of mass communication in modern societies: the new medium was eagerly held up as a chance to affordably massify education and extend its transformative potential, only to fall short, largely because no matter what mass medium we’re talking about, this kind of education is essentially an assisted form of autodidacticism. It has worked and still works largely only for those who already know what they want to know, and who already know how to learn.

There are some people who deeply believe that new technological infrastructures can in and of themselves solve problems of cost, equity and efficacy, in higher education or anything else. But at least some of the people who were preaching the MOOC gospel a year ago, where the President’s Council just went in their time machine, did so hoping to draw a Golden Ticket in the “I Made an IPO and Broke Something Important” sweepstakes. Most of those folks seem to be moving on now. In the Silicon Valley game, you don’t have to make money, but you do need to show that you can displace and disrupt an existing service with some speed. That’s not going to happen in this case.

One of the reasons that so many faculty who are otherwise very friendly to digitally-mediated innovation and change were so annoyed with MOOCs is that the intense push by companies and investors to draw attention to MOOCs drew energy and resources away from existing projects that have been using information technology to enhance and enrich traditional modes of teaching, often called “blended learning”. Now that the craze for the MOOC is starting to fade, maybe the blended learning conversation can gain the public attention it deserves once again.

But also, maybe we can hold on to what we’ve learned about the genuinely interesting possibilities of MOOCs. So they’re not going to magically solve the economic problems of education or public goods; nor, for their more anti-intellectual backers, are they going to create a world where algorithms replace truculent faculty. If we get lucky, they might put some of the sleazier for-profit online educators out of business. However, existing MOOCs are still a potentially terrific implementation of three possible objectives, all of which might even have market value.

1) MOOCs are a model form of new digital publication. If you read this blog, you’ve seen me say this before (and seen me say before, somewhat crossly, that I’ve been saying it for years). But this is no longer just potential: it’s reality. Does anyone remember how many people bought “For Dummies” books? Or in recent years, how many institutions are paying for a lynda.com account? The MOOC is a BOOC: it’s an enhanced, interactive instructional guide where other readers and the authors are there to help you learn. An instructional book has never been confused for a face-to-face course in a university, but it’s also a concept that’s been in existence longer than the university itself.

2) MOOCs are learning communities. Again, this is a potential that’s been around since the WELL, but existing MOOCs are a good demonstration of mature technologies and practices that help dedicated groups learn and explore together at various levels of commitment and interest. They can’t teach calculus to a single student who is underprepared to learn calculus, but they can help a very big group of people who have diverse knowledge and a common interest in the future of higher education learn and discuss that topic together.

3) The mass response to MOOCs is evidentiary proof of the transformative potential of traditional higher education. They’ve been misused as vehicles for transforming higher education, but what they really document is that people who’ve had higher education want to have more learning experiences like that for the rest of their lives. It’s why I always feel so sad when I talk to a Swarthmore alum who just wants to talk about books and ideas and research again and who starts to think that this alone is a reason to go on to graduate school. It’s not a good reason to do that, because that’s not the purpose graduate school typically serves. But look at who takes MOOCs: it’s a close overlap with people who take community college courses for enrichment, with people who join book clubs and go to lectures, with people who just want to know more and talk with people who also have that aspiration. What have MOOCs shown so far? That there are a lot of people like that. They’re busy people, so they often drop out. But I bet those people are ready to support educational institutions as a public good, ready to believe in the potentialities of education for a democratic society. MOOCs might not entirely scratch the itch for lifelong learning that many people who’ve had a taste of education develop, but they’re one way to respond to that desire, and more potently, an affirmation that the desire exists.

If the White House wants to pay attention to something important, they might start there rather than embracing the hope that market forces will automagically deploy the MOOC to finally relieve the technocrats of the burden of maintaining and extending public goods.

]]>