The Instrument of Future Mischief

A friend of mine has been sharing a few thoughts about the minor classic SF film Colossus: The Forbin Project, about an AI that seizes control of the world and establishes a thoroughly totalitarian if ostensibly “humane” form of governance. He asks how we would know what a “friendly” AI would look like, with the obvious trailing thought that Colossus in the film is arguably “friendly” to the intentions and interests of its creators.

This is a minor subgenre of reflections on AI, including the just-finishing series Person of Interest (no spoilers! I am still planning to power through the last two seasons at some point).

I think the thing that makes “friendly” AI in these contexts a horrifying or uncanny threat is not the power of the AI, though that’s what both stories and futurists often focus on: the capacities of the AI to exert networked control over systems and infrastructure that we regard as being under our authority. Instead, it is the shock of seeing the rules and assumptions already in place in global society put into algorithmic form. The “friendly” AI is not unnerving because it is alien, or because of its child-like misunderstandings and lack of an adult conscience. It is not Anthony Fremont, wishing people into the cornfield. It is unnerving because it is a mirror. If you made many of our present systems of political reason into an algorithm, they might act much the same as they do now, and so what we explain away as either an inexorable law of human life or as a regrettable accident would be revealed as exactly what it is: a thing that we do, that we do not have to do. The AI might be terrifying simply because it accelerates what we do, and does it more to everyone. It’s the compartmentalization that comforts us, the incompetence, the slowness, not the action or the reasoning.

Take drone warfare in the “global war on terror”. Write it as an algorithm. (A rough code sketch follows the list.)

1. If terrorist identity = verified, kill. Provide weighting for “verified”.
2. If non-terrorist in proximity to terrorist = still kill, if verified is strongly weighted.
3. If verification was identifiably inaccurate following kill = adjust weighting.
4. Repeat.
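
To underline how little resistance those rules offer, here is a minimal Python sketch of that loop. Everything in it is a hypothetical illustration invented for this post, not a description of any real targeting system: the Target record, the thresholds, the 0.05 adjustment, and the tiny candidate list are all made up. The only point is that the four steps above translate into executable code almost verbatim.

```python
# Hypothetical illustration only: the four rules above, written as a loop.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    verification: float      # confidence that identity = "terrorist" (0.0 to 1.0)
    bystanders_nearby: bool   # non-terrorists in proximity to the target

KILL_THRESHOLD = 0.7          # rule 1: the weighting given to "verified"
STRONG_THRESHOLD = 0.9        # rule 2: "verified is strongly weighted"
weight_adjustment = 0.0       # rule 3: running correction after bad calls

def decide(target: Target) -> str:
    """Rules 1 and 2: kill if verified; kill despite bystanders only if strongly verified."""
    score = target.verification + weight_adjustment
    if score >= STRONG_THRESHOLD:
        return "kill"
    if score >= KILL_THRESHOLD and not target.bystanders_nearby:
        return "kill"
    return "hold"

def adjust_after_review(verification_was_accurate: bool) -> None:
    """Rule 3: if verification proves inaccurate following a kill, adjust the weighting."""
    global weight_adjustment
    if not verification_was_accurate:
        weight_adjustment -= 0.05

# Rule 4: repeat, over whatever stream of candidates the system is fed.
candidates = [Target("A", 0.95, True), Target("B", 0.75, True), Target("C", 0.72, False)]
for target in candidates:
    action = decide(target)
    print(target.name, action)
    if action == "kill":
        adjust_after_review(verification_was_accurate=False)  # stand-in for a post-strike review
```

The unsettling thing is not that the sketch is sophisticated; it is that it is ordinary.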

The only reason that Americans live with the current implementation of that algorithm is that the people being killed are out of sight and are racially and culturally coded as legitimate victims. One of the weightings of our actual, real-world version of the algorithm is “do this only in certain places (Syria, Yemen, Afghanistan) and do this only to certain classes of the variable ‘terrorist’”. Americans also live with it because the pace of killings is sporadic and largely unreported. A “friendly AI” might take the algorithm and seek to do it more often and in all possible locations. Even without that ending in a classically dystopian way, with genocide, you could imagine that many of the AI’s creators would find the outcome horrifying. But that imaginary AI might wonder why, considering that it’s only implementing an accelerated and intensified version of the instructions that we ourselves created.

Imagine a “friendly AI” working with the algorithm for “creative destruction” (Schumpeter) or the updated version, “disruption” (Christensen); again, a rough code sketch follows the list.

1. Present industries and the jobs they support should be relentlessly disfavored in comparison to not-yet-fully realized future industries and jobs.
2. If some people employed in present jobs are left permanently unemployed or underemployed due to favoring not-yet-fully realized future industries and jobs, this validates that the algorithm is functioning correctly.
3. Preference should be given to not-yet-fully realized industries and jobs being located in a different place than present industries and jobs that are being disrupted.
4. Not-yet-fully realized future industries and jobs will themselves be replaced by still more futureward industries and jobs.
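
A sketch in the same spirit, and again purely hypothetical: the industry names, scores, and bonus constants below are invented for illustration, and the code claims only to restate the four rules above, not to describe any actual economic model.

```python
# Hypothetical illustration only: the four "disruption" rules above as a scoring loop.
from dataclasses import dataclass

@dataclass
class Industry:
    name: str
    realized: float  # 1.0 = fully existing today, 0.0 = purely notional
    location: str

FUTURE_BONUS = 2.0      # rule 1: relentlessly disfavor whatever already exists
RELOCATION_BONUS = 0.5  # rule 3: prefer somewhere other than the incumbent's location

def preference(candidate: Industry, incumbent: Industry) -> float:
    """Rules 1 and 3: the less realized and the farther away, the better."""
    score = FUTURE_BONUS * (1.0 - candidate.realized)
    if candidate.location != incumbent.location:
        score += RELOCATION_BONUS
    return score

def validated_by_displacement(workers_left_unemployed: int) -> bool:
    """Rule 2: displacement is read as evidence that the algorithm is working."""
    return workers_left_unemployed >= 0  # i.e. always

incumbent = Industry("newspapers", realized=1.0, location="Ohio")
candidates = [
    Industry("news aggregation", realized=0.6, location="California"),
    Industry("notional AI-curated feeds", realized=0.1, location="nowhere yet"),
]

for round_number in range(3):
    winner = max(candidates, key=lambda c: preference(c, incumbent))
    print(f"disrupt {incumbent.name} in favor of {winner.name}")
    assert validated_by_displacement(workers_left_unemployed=1000)
    incumbent = winner
    # Rule 4: the winner is itself queued for replacement by something even less realized.
    candidates = [Industry(f"still more futureward idea #{round_number + 1}",
                           realized=0.0, location="nowhere yet")]
```

Left to run, the loop never settles on anything that actually exists, which is more or less the acceleration described below.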

The “friendly AI” would certainly seek to accelerate and expand this algorithm, always favoring what might be over what actually is, possibly to the point that only notional or hypothetical industries would be acceptable, and that any actual material implementation of an industry would make it something to be replaced instantly. The artificial stop conditions that the tech sector, among others, puts on disruption might be removed. Why buy any tech right now when there will be tech tomorrow that will displace it? Why, in fact, develop or make any tech at all, considering that we can imagine the tech that will displace the tech we are developing? Apple will simply obsolete the next iPad in a few years for the sake of disruption, so the friendly AI might go ahead and pre-obsolete it all. Again, not what anybody really wants, but it’s a reasonable interpretation of what we are already doing, or at least what some people argue we are doing or ought to do.


6 Responses to The Instrument of Future Mischief

  1. sibyl says:

    For me the cinematic ur-text on AI is the 1983 classic “War Games,” and its two complementary messages. The first, the one that is discussed more often, squares with your framework: the algorithm of global thermonuclear war, as enacted by the Cold War superpowers, is an illogical game that cannot be won. The second is that there must always be exceptions to any algorithm — that just because the computer tells the human it’s time to launch the missile, the human has to have the freedom to exercise independent judgment. I agree with you that humans are comforted by rules, but I also think that they are comforted by this gap between the universal application of rules and the human freedom not to follow them. Consider the complementary approach of Isaac Asimov’s robot stories. They assumed that the Laws of Robotics could always be followed, but that AI would almost always be sufficiently advanced to realize the optimal application of the rules. Some of our fears of AI stem not just from networked power, but also from the gap between rule and judgment.

  2. Timothy Burke says:

    Yes. Except that I think one of the problems with a lot of the procedural life of neoliberalism is that people resignedly say that they have no choice but to enact various procedures, even if the results are not useful or the actions are harmful. I feel like the standard form that assessment has taken in a lot of higher education is like that: well, we must do it and do it in the following ways, even if there’s no evidence that it actually helps with anything, because this is what produces data that looks like data. We are already cyborgs of a kind, and that is interfering with our faith that someone, somehow, will still have the freedom not to follow the algorithm.

  3. sibyl says:

    You have articulated exactly what I am currently struggling with. Over the last couple of months I’ve come around to the view that both the liberal nation-state in general and neoliberalism in particular have failed or are in the process of failing, but I still work in the service of both of those institutions. And while I can justify the practical step of taking some time to figure out my next career move, right now the only thing I can tell myself, and the only option that is presenting itself, is: I must continue to do it and do it in the following ways, even if there’s no evidence that it actually helps with anything…

  4. Timothy Burke says:

    To me, this is a key difference between liberalism and neoliberalism that has many dimensions. I regret the extent to which many leftists have allowed a critique of neoliberalism to “swallow” liberalism. Or maybe what I think of as liberalism is something more like an ungainly hybrid of romantic ideas and liberal political structures, the upshot being that neoliberalism has no room for a kind of humanism that is inconsistent or contradictory on purpose, but I think liberalism did. In fact, impulsiveness, whimsy and transformation were signs for some 19th C. liberals (the non-utilitarian kinds) that we were free, that we might not follow procedure if it seemed situationally wrong. Liberalism wrung its hands a lot about the dangers of rule-following, about becoming Inspector Javert or Captain Vere. Neoliberalism doesn’t even know there’s an issue with rule-following precisely because it treats rules as algorithms: procedures that operate autonomically, externally from subjectivity. Whereas liberal rule-following, and fears thereof, were all about the danger of incorporating the rules into selfhood, about the dangerous kinds of human beings who entangled their individuality with procedure.

  5. Neel Krishnaswami says:

    “Neoliberalism doesn’t even know there’s an issue with rule-following precisely because it treats rules as algorithms: procedures that operate autonomically, externally from subjectivity.”

    I suspect I’m fairly far to the right of you, but you absolve the present order far too easily. For example, you described the war on terror as follows:

    Take drone warfare in the “global war on terror”. Write it as an algorithm. […] The only reason that Americans live with the current implementation of that algorithm is that the people being killed are out of sight and are racially and culturally coded as legitimate victims.

    In fact, the Obama administration classifies all military-age males killed in strike zones as militants, so that they can report a smaller number of civilian casualties without having to alter their practices. This is not the behaviour of a Javert — this is the behaviour of a Talleyrand.

    Our leaders understand subjectivity perfectly well. Presenting their decisions as the output of mathematical algorithms lets them claim the mantle of rationality and dismiss their opponents as irrational.

    Programs do what programmers make them do, and programmers do what their bosses tell them to. Our rulers are not the implementation of an algorithm — the algorithms are the implementation of their intent. (As an aside, I strongly recommend Maciej Ceglowski’s short essay The Moral Economy of Tech.)

  6. lemmy caution says:

    I just read Bostrom’s “Superintelligence,” which is a serious attempt to deal with the dangers of strong AI. He goes pretty far out on his assumptions, so I am not sure it is going to go down that way, but he has thought it through.

    “In fact, the Obama administration classifies all military-age males killed in strike zones as militants, so that they can report a smaller number of civilian casualties without having to alter their practices.”

    that is insane.
