Everyone is prognosticating right now. I’m not immune. In those moments where genuine uncertainty enters the room in an undeniable way, we all desperately want to know what is happening, and what will happen next.
One of my private pleasures as a historian is reading old newspapers in sequence just to see pundits and prognosticators try to guess what is coming next. They almost invariably fail badly, often because they cannot imagine either just how terrible the near-future will become or how wonderful and strange some of the developments just around the corner are going to be.
This is a basic problem with professional futurism as well. However much flummery and fan-dancing it offers about its methods, it is usually mere extrapolation from the history of the previous twenty, thirty, fifty years.
Why do I like reading the wildly wrong views of past people who stand poised on the edge of some major and unguessed-at transformation? Partly because my favorite four words in the English language are, “I told you so.” But it’s also a warning to myself and anyone like me.
Nassim Nicholas Taleb’s The Black Swan is a smart book that centrally engages this basic problem. Taleb focuses on the events we do not expect because they are improbable, and on our tendency to make models and theories that domesticate the empirical noise from which improbable events and sudden transformations can emerge. He tries to describe an alternative practice of prognostication, or of managing exposure to “black swans”, these unforeseen, disjunctive moments. In part, he argues, we should “worry less about advertised and sensational risks, more about the more vicious hidden ones” and “worry less about matters people usually worry about because they are obvious worries, and more about matters that lie outside our consciousness and common discourse”. (p. 296)
I don’t think I get to some of his self-described intellectual practice through the same frameworks (in fact, I think Taleb is sometimes just as indebted to bad theory or bloated abstractions as some of the experts he targets) but I like the ambition. It’s less about trying to make the future a conventionally known object and more about exploring the possibility spaces that I can imagine resulting from the accidental or unplanned interactions of systems and agents. But it’s also being willing to believe that things can happen as a result of those interactions which are not part of my experience nor are mere replays of some known past scenario, to resist easy parallels and metaphors.
So from that angle, how do I think through the present financial crisis? It seems to me possible to imagine that what appears to be certain disaster could end as a kind of weird farce, wherein all the various players so wrapped up in financial instruments that they themselves don’t understand find, as they untangle their knots, that all their many bets and insurances and fictions cancel each other out. Fictional money could evaporate, but real assets would remain, and the whole affair would become only a lesson about speculation for future generations to recall, like the tulips of past days.
Or we could find that however real some of the assets down deep at the bottom might be–houses and properties, buildings and factories–these things will only have value again in a decade or two, while the entire financial system and consumer culture is built on their having the value they were believed to have right this very minute. In the worst case of that direction, we might find that there isn’t enough money in all the world to recapitalize the system right here, right now. In which case, I have no idea what could happen as a consequence, only that it might well be bad at scales and intensities that few of us alive have any benchmark for.
The important thing now seems to me to not domesticate these events, to not try to stuff them back into the box of already-understood models and analogies. Instead we have to try to imagine the world becoming strange to us, to think the unthinkable. If enough people had been able to imagine the terror and suffering that industrial weaponry and trench warfare might produce, perhaps World War I would have seemed a much worse gamble than it did to those in power at the time. If more people besides Vannevar Bush had been able to see the potential shape and impact of information and computing in the decades after 1945, perhaps a whole generation of thinking about mechanization, computers, and cultural transformation would have taken on a softer, more enticing tone from the outset–and fewer people would have lived in fear of an all-powerful mainframe running human affairs. (This would have deprived Captain Kirk of one set of easy victories, however.)
This is an elaborate way of saying that I don’t know what’s going to happen, and neither do you. But you should neither be too quick to turn sanguine, comforting yourself with thoughts of your own financial prudence or inoculation against the problems of others, nor should you start looking for the most likely place to pitch a tent in your local Hooverville. Whatever happens, both safety and danger may lie in unexpected places. Taleb has a pretty good bit of advice about this, too: seek wide exposure to potentially positive unexpected events, and clamp down hard on any exposure to the worst of potentially negative ones. The problem, of course, is that the very worst (and best) such transformations make it clear that there is nowhere to hide, no way to be private, no island: that we are always already social, institutional and system-bound, and cannot help being so.