A friend of mine has been quoting a few thoughts about the minor classic SF film Colossus: The Forbin Project, about an AI that seizes control of the world and establishes a thoroughly totalitarian, if ostensibly “humane”, form of governance. He asks how we would know what a “friendly” AI would look like, with the obvious trailing thought that Colossus in the film is arguably “friendly” to the intentions and interests of its creators.
This is a minor subgenre of reflections on AI, including the just-finishing series Person of Interest (no spoilers! I am still planning to power through the last two seasons at some point).
I think what makes “friendly” AI in these contexts a horrifying or uncanny threat is not the power of the AI, though that is what both stories and futurists often focus on: the AI’s capacity to exert networked control over systems and infrastructure that we regard as being under our authority. It is instead the shock of seeing the rules and assumptions already in place in global society put into algorithmic form. The “friendly” AI is not unnerving because it is alien, or because of its child-like misunderstandings and lack of an adult conscience. It is not Anthony Fremont, wishing people into the cornfield. It is unnerving because it is a mirror. If you made many of our present systems of political reason into an algorithm, they might act much as they do now, and so what we explain away either as an inexorable law of human life or as a regrettable accident would be revealed as exactly what it is: a thing that we do, that we do not have to do. The AI might be terrifying simply because it accelerates what we do, and does more of it to everyone. What comforts us is the compartmentalization, the incompetence, the slowness, not the action or the reasoning.
Take drone warfare in the “global war on terror”. Write it as an algorithm; a rough code sketch follows the list.
1. If terrorist identity = verified, kill. Provide weighting for “verified”.
2. If non-terrorist in proximity to terrorist = kill anyway, provided “verified” is strongly weighted.
3. If verification was identifiably inaccurate following kill = adjust weighting.
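If you want to see how little is lost in translation, here is a minimal sketch in Python. Every threshold, weighting, and function name below is my invention, chosen only to make the logic of the three rules explicit; this is a toy, not anyone’s actual targeting system.

```python
# A deliberately literal rendering of the three rules above. All
# numbers and names are hypothetical, for illustration only.

VERIFIED_THRESHOLD = 0.8    # rule 1: how strongly "verified" must be weighted
STRONG_THRESHOLD = 0.95     # rule 2: weighting at which proximity deaths are accepted
ADJUSTMENT = 0.05           # rule 3: how much a known error shifts the bar

def decide(verified_weight: float, bystanders_present: bool) -> str:
    """Rules 1 and 2: kill if 'verified' clears the threshold; kill
    despite bystanders only if the weighting is strong."""
    if verified_weight >= VERIFIED_THRESHOLD:
        if not bystanders_present or verified_weight >= STRONG_THRESHOLD:
            return "kill"
    return "hold"

def adjust_after_strike(threshold: float, verification_was_inaccurate: bool) -> float:
    """Rule 3: if verification proved inaccurate after the fact,
    raise the bar slightly and carry on."""
    if verification_was_inaccurate:
        return min(1.0, threshold + ADJUSTMENT)
    return threshold
```

The real-world weightings discussed below, “only in certain places” and “only to certain classes of the variable terrorist”, would enter as nothing more than one additional conditional at the top of decide().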
One reason that Americans live with the current implementation of that algorithm is that the people being killed are out of sight and are racially and culturally coded as legitimate victims. One of the weightings of our real-world version of the algorithm is “do this only in certain places (Syria, Yemen, Afghanistan) and do this only to certain classes of the variable ‘terrorist’”. Americans also live with it because the pace of killings is sporadic and largely unreported. A “friendly AI” might take the algorithm and seek to apply it more often and in all possible locations. Even without that ending in a classically dystopian way with genocide, you could imagine that many of the AI’s creators would find the outcome horrifying. But that imaginary AI might wonder why, considering that it is only implementing an accelerated and intensified version of the instructions that we ourselves created.
Imagine a “friendly AI” working with the algorithm for “creative destruction” (Schumpeter) or its updated version, “disruption” (Christensen); again, a code sketch follows the list.
1. Present industries and the jobs they support should be relentlessly disfavored in comparison to not-yet-fully realized future industries and jobs.
2. If some people employed in present jobs are left permanently unemployed or underemployed due to favoring not-yet-fully realized future industries and jobs, this validates that the algorithm is functioning correctly.
3. Preference should be given to locating not-yet-fully realized industries and jobs in a different place from the present industries and jobs being disrupted.
4. Not-yet-fully realized future industries and jobs will themselves be replaced by still more futureward industries and jobs.
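Here, too, the rules reduce to a few lines. What follows is a toy sketch in Python; every class, score, and name in it is my invention for illustration, not anyone’s actual model of disruption.

```python
# A toy rendering of the four rules above. All names and scores
# are hypothetical, chosen only to make the logic run.

from dataclasses import dataclass

@dataclass
class Industry:
    name: str
    realized: bool      # does it actually exist yet, or is it still notional?
    location: str
    jobs: int

def preference(candidate: Industry, incumbent: Industry) -> float:
    """Rules 1 and 3: the not-yet-realized is always favored over the
    present, and more so if it is located somewhere else."""
    score = 0.0
    if not candidate.realized:
        score += 1.0                          # rule 1: futureward always wins
    if candidate.location != incumbent.location:
        score += 0.5                          # rule 3: elsewhere is preferred
    return score

def disrupt(incumbent: Industry, candidate: Industry):
    """Rules 2 and 4: displaced jobs count as confirmation that the
    algorithm works, and the winner is immediately queued for its own
    replacement."""
    if preference(candidate, incumbent) <= preference(incumbent, candidate):
        return candidate, 0                   # never reached while rule 1 holds
    jobs_lost = incumbent.jobs                # rule 2: read as validation, not error
    candidate.realized = True                 # the candidate becomes the present...
    successor = Industry(                     # ...and rule 4 conjures its successor
        name=f"post-{candidate.name}",
        realized=False,
        location="elsewhere",
        jobs=0,
    )
    return successor, jobs_lost
```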
The “friendly AI” would certainly seek to accelerate and expand this algorithm, always favoring what might be over what actually is, possibly to the point that only notional or hypothetical industries would be acceptable, and any actual material implementation of an industry would make it something to be replaced instantly. The artificial stop conditions that the tech sector, among others, puts on disruption might be removed. Why buy any tech right now when there will be tech tomorrow that displaces it? Why, in fact, develop or make any tech at all, considering that we can imagine the tech that will displace the tech we are developing? Apple will obsolete the next iPad in a few years for the sake of disruption anyway, so the friendly AI might go ahead and pre-obsolete it all. Again, not what anybody really wants, but it is a reasonable interpretation of what we are already doing, or at least of what some people argue we are doing or ought to do.
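That acceleration is easy to picture in code: drop any stop condition from the toy above and iterate. This is still entirely hypothetical, reusing the invented Industry and disrupt from the previous sketch.

```python
# With no stop condition, nothing realized survives a single pass:
present = Industry("iPad", realized=True, location="Cupertino", jobs=10_000)
challenger = Industry("next-iPad", realized=False, location="elsewhere", jobs=0)

for _ in range(4):
    successor, jobs_lost = disrupt(present, challenger)
    print(f"{present.name} displaced ({jobs_lost} jobs lost); "
          f"{challenger.name} is already due for replacement by {successor.name}")
    present, challenger = challenger, successor
```

After the first pass, every “industry” in the loop is notional and supports zero jobs, which is the pre-obsoleted world the paragraph above describes.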