Comments on: The Instrument of Future Mischief
https://blogs.swarthmore.edu/burke/blog/2016/06/30/the-instrument-of-future-mischief/
Culture, Politics, Academia and Other Shiny Objects

By: lemmy caution https://blogs.swarthmore.edu/burke/blog/2016/06/30/the-instrument-of-future-mischief/comment-page-1/#comment-73133 Thu, 07 Jul 2016 14:23:42 +0000

I just read Bostrom’s “Superintelligence,” which is a serious attempt to deal with the dangers of strong AI. He goes pretty far out on his assumptions, so I am not sure it is going to go down that way, but he has thought it through.

“In fact, the Obama administration classifies all military-age males as militants, so that they can report a smaller number of civilian casualties without having to alter their practices.”

That is insane.

By: Neel Krishnaswami https://blogs.swarthmore.edu/burke/blog/2016/06/30/the-instrument-of-future-mischief/comment-page-1/#comment-73131 Mon, 04 Jul 2016 15:31:15 +0000

“Neoliberalism doesn’t even know there’s an issue with rule-following precisely because it treats rules as algorithms: procedures that operate autonomically, externally from subjectivity.”

I suspect I’m fairly far to the right of you, but you absolve the present order far too easily. For example, you described the war on terror as follows:

Take drone warfare in the “global war on terror”. Write it as an algorithm. […] The only reason that Americans live with the current implementation of that algorithm is that the people being killed are out of sight and are racially and culturally coded as legitimate victims.

In fact, the Obama administration classifies all military-age males as militants, so that they can report a smaller number of civilian casualties without having to alter their practices. This is not the behaviour of a Javert — this is the behaviour of a Talleyrand.
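Neel’s point about definitional gerrymandering can be made concrete. Below is a minimal sketch, with entirely hypothetical casualties and an invented age range, of how swapping the classification rule changes the reported civilian count while the underlying practice stays exactly the same:

```python
# A minimal sketch (hypothetical data, invented age range): the same
# casualties, counted under two different classification rules.

def is_militant_confirmed_only(p):
    # Rule A: only confirmed combatants are militants.
    return p["confirmed_combatant"]

def is_militant_military_age_male(p):
    # Rule B: any military-age male is presumed a militant.
    return p["confirmed_combatant"] or (p["sex"] == "male" and 16 <= p["age"] <= 60)

casualties = [
    {"age": 34, "sex": "male",   "confirmed_combatant": True},
    {"age": 19, "sex": "male",   "confirmed_combatant": False},
    {"age": 45, "sex": "male",   "confirmed_combatant": False},
    {"age": 30, "sex": "female", "confirmed_combatant": False},
]

for name, rule in [("confirmed-only", is_militant_confirmed_only),
                   ("military-age-male", is_militant_military_age_male)]:
    civilians = sum(1 for p in casualties if not rule(p))
    print(f"{name}: {civilians} reported civilian casualties")

# Output: confirmed-only reports 3 civilians; military-age-male reports 1.
# Four deaths either way; only the label, and therefore the report, moved.
```

Nothing in the action itself differs between the two runs; the choice of rule is the policy decision, which is exactly the point about intent below.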

Our leaders understand subjectivity perfectly well. Presenting their decisions as the output of mathematical algorithms lets them claim the mantle of rationality and dismiss their opponents as irrational.

Programs do what programmers make them do, and programmers do what their bosses tell them to. Our rulers are not the implementation of an algorithm — the algorithms are the implementation of their intent. (As an aside, I strongly recommend Maciej Ceglowski’s short essay The Moral Economy of Tech.)

By: Timothy Burke https://blogs.swarthmore.edu/burke/blog/2016/06/30/the-instrument-of-future-mischief/comment-page-1/#comment-73130 Fri, 01 Jul 2016 14:32:02 +0000 In reply to sibyl.

To me, this is a key difference between liberalism and neoliberalism that has many dimensions. I regret the extent to which many leftists have allowed a critique of neoliberalism to “swallow” liberalism. Or maybe what I think of as liberalism is something more like an ungainly hybrid of romantic ideas and liberal political structures–the upshot being that neoliberalism has no room for a kind of humanism that is inconsistent or contradictory on purpose, but I think liberalism did. In fact, impulsiveness, whimsy and transformation were signs for some 19th C. liberals (the non-utilitarian kinds) that we were free, that we might not follow procedure if it seemed situationally wrong. Liberalism wrung its hands a lot about the dangers of rule-following, about becoming Inspector Javert or Captain Vere. Neoliberalism doesn’t even know there’s an issue with rule-following precisely because it treats rules as algorithms: procedures that operate autonomically, externally from subjectivity. Whereas liberal rule-following, and fears thereof, was all about the danger of incorporating the rules into selfhood, about the dangerous kinds of human beings who entangled their individuality with procedure.

By: sibyl https://blogs.swarthmore.edu/burke/blog/2016/06/30/the-instrument-of-future-mischief/comment-page-1/#comment-73129 Fri, 01 Jul 2016 14:24:06 +0000

You have articulated exactly what I am currently struggling with. Over the last couple of months I’ve come around to the view that both the liberal nation-state in general and neoliberalism in particular have failed or are in the process of failing, but I still work in the service of both of those institutions. And while I can justify the practical step of taking some time to figure out my next career move, right now the only thing I can tell myself, and the only option that is presenting itself, is: I must continue to do it and do it in the following ways, even if there’s no evidence that it actually helps with anything…

By: Timothy Burke https://blogs.swarthmore.edu/burke/blog/2016/06/30/the-instrument-of-future-mischief/comment-page-1/#comment-73128 Fri, 01 Jul 2016 12:11:55 +0000 In reply to sibyl.

Yes. Except that I think one of the problems with a lot of the procedural life of neoliberalism is that people resignedly say that they have no choice but to enact various procedures, even if the results are not useful or the actions are harmful. I feel like the standard form that assessment has taken in a lot of higher education is like that: well, we must do it and do it in the following ways, even if there’s no evidence that it actually helps with anything, because this is what produces data that looks like data. We are already cyborgs of a kind, and that is interfering with our faith that someone, somehow, will still have the freedom not to follow the algorithm.

By: sibyl https://blogs.swarthmore.edu/burke/blog/2016/06/30/the-instrument-of-future-mischief/comment-page-1/#comment-73127 Thu, 30 Jun 2016 20:03:31 +0000

For me the cinematic ur-text on AI is the 1983 classic “WarGames,” and its two complementary messages. The first, the one that is discussed more often, squares with your framework: the algorithm of global thermonuclear war, as enacted by the Cold War superpowers, is an illogical game that cannot be won. The second is that there must always be exceptions to any algorithm — that just because the computer tells the human it’s time to launch the missile, the human has to have the freedom to exercise independent judgment. I agree with you that humans are comforted by rules, but I also think that they are comforted by this gap between the universal application of rules and the human freedom not to follow them. Consider the complementary approach of Isaac Asimov’s robot stories. They assumed that the Laws of Robotics could always be followed, but that the AI would almost always be sufficiently advanced to realize the optimal application of the rules. Some of our fears of AI stem not just from networked power but also from the gap between rule and judgment.
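sibyl’s second message, the required gap between rule and judgment, amounts to a design decision: the recommendation and the authority to act are kept separate. A toy sketch, with invented names and thresholds, of a human veto sitting between the two:

```python
# A toy sketch (invented names and thresholds) of the WarGames gap:
# the algorithm may only recommend; a human veto sits before the action.

def rule_says_launch(sensor_confidence, threshold=0.9):
    # The algorithm: launch whenever confidence crosses the threshold.
    return sensor_confidence >= threshold

def decide(sensor_confidence, human_concurs):
    recommendation = rule_says_launch(sensor_confidence)
    if not recommendation:
        return "hold"
    # The exception sibyl insists on: the rule's output can be refused.
    return "launch" if human_concurs(sensor_confidence) else "hold"

# A skeptical operator who judges the reading to be a false alarm.
doubtful = lambda confidence: False

print(decide(0.95, doubtful))  # -> "hold", despite the rule saying launch
```

On Asimov’s alternative, the gap closes from the other side: `human_concurs` disappears because the rule itself is assumed smart enough to apply its own laws optimally, so no exception is ever needed.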
