Teleology and the Fermi Paradox

I sometimes joke to my students that “teleology” is one of those words, like “functionalism,” that humanist intellectuals now instinctively recoil from or hiss at, without even bothering any longer to explain to a bystander who is less in-the-know what the problem actually is.

But if you want a sense of how there is a problem with teleology that is a meaningful impediment to thoughtful exploration and explanation of a wide range of existing intellectual problems, take a look at io9’s entry today that reports on a recent study showing that self-replicating probes from extraterrestrial intelligences could theoretically reach every solar system in the galaxy within 10 million years of an initial launch from a point of origin.

I’ve suggested before that exobiology is one of the quintessential fields of research that could benefit from keeping an eclectic range of disciplinary specialists in the room for exploratory conversations, and not just from within the sciences. To make sure that you’re not making assumptions about what life is, where or how it might be found or recognized, and so on, you really need some intellectuals who have no vested interest in existing biological science and whose own practices could open up unexpected avenues and insights into the problem, whether that’s raising philosophical and definitional questions, challenging assumptions about whether we actually could even recognize life that’s not as we know it (or whether we should want to), or offering unexpected technical or artistic strategies for seeing patterns and phenomena.

As an extension of this point, look at the Fermi Paradox. Since it was first laid out in greater detail by Michael Hart in 1975, there’s been a lot of good speculative thinking about the problem, and some of it has headed in the direction I’m about to explore. But you can also see how, for much of that time, responses to the concept have remained limited by certain assumptions that are especially prevalent among scientists and technologists.

At least one of those limits is an assumption about the teleology of intelligence, an assumption that intelligent life will commonly or inevitably trend towards social and technological complexity in a pattern that strongly resembles some dominant modern and Western readings of human history. While evolutionary biology has long since moved away from the assumption that life trends towards intelligence, or that human beings are the culmination of the evolution of life on Earth, some parallel speculative thinking about the larger ends or directionality of intelligent life still comes pretty easily for many, and is also common to certain kinds of sociobiological thought.

This teleology assumes that agriculture and settlement follow intelligence and tool usage, that settlement leads to larger scales of complex political and social organization, that larger scales of complex political and social organization lead to technological advancement, and that this all culminates in something like modernity as we now live it. In the context of speculative responses to the Fermi Paradox (or other attempts to imagine extraterrestrial intelligence), this produces the common view that if life is very common and intelligent life somewhat common, then some intelligent life must lead to “technologically advanced civilizations” which more or less conform to our contemporary imagination of what “technological advancement” forward from our present circumstances would look like. When you add to this the observation that in some cases this pattern must have played out many millions of years ago, in solar systems whose existence predates our own, you have Fermi’s question: where is everybody?

But this is where you really have to unpack something like the second-to-last term in the Drake Equation, which was an attempt to structure contemplation of Fermi’s question. The second-to-last term is “the fraction of civilizations that develop a technology that releases detectable signs of their existence into space”. For the purposes of the Drake Equation, the fraction of civilizations that do not develop that technology is not an interesting line of thought in its own right, except inasmuch as speculation about that fraction leads you to set the value of that term low or high. All we want to know in this sense is, “how many signals are there out there to hear?”
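
To make the stakes of that single term concrete, here is a minimal sketch of the Drake Equation in Python with entirely hypothetical inputs. None of these numbers comes from the post or from any actual survey; the only point is how wildly the final estimate swings depending on whether you treat detectable technology as the near-inevitable destiny of intelligence or as a rare, contingent outcome.

```python
# Minimal sketch of the Drake Equation with hypothetical values.
# None of these numbers is authoritative; the point is only to show how much
# the estimate depends on f_c, "the fraction of civilizations that develop a
# technology that releases detectable signs of their existence into space".

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Return N, the estimated number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hypothetical inputs, generous right up through "fraction that develop intelligence".
common = dict(R_star=7.0, f_p=0.5, n_e=2.0, f_l=0.5, f_i=0.2, L=10_000)

# Teleological assumption: intelligence almost always leads to detectable technology.
print(drake(f_c=0.9, **common))    # ~6300 detectable civilizations

# Non-teleological assumption: detectable technology is a rare, contingent outcome.
print(drake(f_c=0.001, **common))  # ~7 detectable civilizations
```

Everything upstream of that term can be as generous as you like; the assumption you make about that one fraction still does most of the work.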

But if you back up and think about these questions without being driven by teleological assumptions, if you don’t just want to shortcut to the probability that there is something for SETI to hear–or to the question of why there aren’t self-replicating probes in our solar system already–you might begin to see just how much messier (but more interesting) the possibilities really are. Granted, if the number that the Drake Equation produces is very, very large right up until the last two terms (that is, up through “the fraction of planets with life that develop intelligence”), then somewhere out there almost any possibility will exist, including a species that thinks very substantially the way we do and has had a history similar to ours. But teleology (and its inherent narcissism) can inflate that probability very wildly in our imaginations and blind us to that inflation.

For example:

We’ve been notoriously poor in the two centuries since the Industrial Revolution really took hold at predicting the forward development of technological change. The common assumption at the end of the 19th Century was to extrapolate from the rapid development of transportation infrastructure and assume that “advancement” would always mean travel growing steadily faster, cheaper, and more ubiquitous. In the mid-20th Century it was common to assume that travel and residence in space would soon be common and would massively transform human societies. Virtually no one saw the personal computer or the Internet coming. And so on. The reality of 2013 should be enough to derail any assumptions about our own technological future, let alone an assumption that there will be common pathways for the technological development of other sentient life. To date, futurists have been spectacularly wrong again and again about technology in fundamental ways, often because of the reigning teleologies of the moment.

It isn’t just that we tend to foolishly extrapolate from our technological present to imagine the future. We also have very impoverished ways of imagining the causal relationship between other possible biologies of intelligent life and technosocial formations, even in speculative fiction. What technologies would an underwater intelligence develop? An intelligence that communicated complex social thoughts through touch or scent? An intelligence that commonly communicated with other members of its species through biological signals that carry over many miles rather than only at close range? And so on. How much of our technological history (histories, plural, because humanity has more than one) is premised on our particular biological history, the particular contingencies of our physical and cultural environments, and so on? Lots, I think. Even within human history, there is plenty of evidence that fundamental ideas like the wheel may not be at all inevitable. Why should we assume that there is any momentum towards the technological capabilities involved in sending self-replicating probes to other star systems, or any momentum towards signalling (accidentally or purposefully)?

Equally: why should we assume that any other species would want to do this, or would ever even think of the idea? Some scientists engaging the Fermi Paradox have suggested that signalling or sending probes might prove to be dangerous and that this is why no one seems to be out there. That is, they’ve assumed that a common sort of species-independent rationality would or could guide civilizational decision-making, and so either everyone else has the common sense to be quiet or everyone who wasn’t quiet is dead because of it. But more fundamentally, it seems hard for a lot of the people who engage in this sort of speculation to see something like sending self-replicating probes for what it might really amount to: a gigantic art project. It’s no more inevitable than Christo draping canyons in fabric or the pharaohs building pyramids. It’s as much about aesthetics and meaning as it is about technology or progress. There is no reason at all to assume that self-replicating probes are a natural or inevitable idea. We might want to at least consider the alternative: that it is a fucking strange idea that another post-industrial, post-scarcity culture of intelligences with a lot of biological similarity to us might never consider or might reject as stupid or pointless even if it occurred to them.

Anthropocentrism has died slowly by a thousand cuts rather than a single decisive strike, for all that our hagiographies of Copernicus and Galileo sometimes suggest otherwise. Modern Western people commonly accept heliocentrism, and can dutifully recite just how small we are in the universe. Until we began getting data about other solar systems, it was still fairly common to assume that the evolution of our own, with its distribution of small rocky planets and gas giants, was the “normal” solar system, which is increasingly obviously not the case. That too is not so hard to take on board. But contemporary history and anthropology provide us plenty of information to suspect that our anthropocentric (specifically modern and Eurocentric) understandings of how intelligence and technology are likely to interrelate are almost certainly equally inadequate to the reality out there.

The more speculative the conversation, the more it will benefit from a much more intellectually and methodologically diverse set of participants. Demonstrating that it’s possible to blanket the galaxy with self-replicating probes within ten million years is interesting, but if you want to know why that (apparently) didn’t happen yet, you’re going to need some philosophers, artists, historians, writers, information scientists and a bunch of other folks plugged into the discussion, and you’re going to need to work hard to avoid (or at least make transparent) any assumptions you have about the answers.


5 Responses to Teleology and the Fermi Paradox

  1. mike shupp says:

    Point(s) missed, I think. First of all, the notion that intelligent beings with an interest in the right sorts of technology might explore or even colonize the galaxy over the space of 10-100 million years was made back in the mid-60’s, when Carl Sagan and Philip Morrison and Ron Bracewell and other people started talking about SETI. This “brand new” idea is 50 years old, in other words — almost 80 years old if you want to consider some 1930’s science fiction as serious speculation. Second, the notion that intelligent alien life might arise which doesn’t physically resemble earth-based humans, and that such alien beings might differ considerably in senses and ways of viewing reality and philosophy and history and so on and so forth is also old — it was made by biologists and anthropologists back in the 1960’s as well (and of course, even earlier in science fiction). So that’s not new either.

    So why are we paying attention to a “study” regurgitating 50-year-old ideas? Is it because the people conducting the study are so ignorant? I doubt it actually. So the third point that strikes me is that THE PUBLIC IS VERY DUMB. Or, if you prefer, that the “audience” for exobiology keeps changing as the population ages, and that newcomers begin with very little knowledge. Most people think they’re being quite sophisticated by considering the possibility of sentient beings such as Vulcans and Romulans and Ferengi. Granting “humanity”, even in theory, to something looking like a bowling ball which thinks deep philosophical thoughts in isolation from its fellows over a lifespan of millennia would be a bit of a stretch. So the paper quoted in io9 is aimed at ordinary humans rather than working exobiologists.

    The fourth point is that after you’ve properly considered all the possible variations in alien species and the many different paths towards civilization or some sort of knowledge that such beings might have followed, you’re kind of at a dead end.

    Imagine Sagan and Morrison having a meeting with Lyndon Johnson in the late 1960’s to tell him that aliens might dwell elsewhere in the galaxy, and might even be exploring our solar system with their version of NASA. “What do you want me to do?” LBJ might have asked. “Will they be friendly or hostile? Will they side with us in Viet Nam? Can they talk to Congress about passing an education bill? Can they give us a cure for cancer? What will they want in return?” And if you are Carl Sagan, and think it’s just absolutely wonderful that we can almost prove mathematically that intelligent alien species are spread across the galaxy and that this almost-a-fact is so exciting it needs to be proclaimed to everyone, what would you tell LBJ?

    So yeah, all kinds of things may be possible in space. But that’s not useful knowledge. And I think in the end, we just have to shrug and tell ourselves that in a thousand years when we’ve got faster-than-light spaceships and a lot of money we’ll go looking for the really interesting aliens, but right now we have to settle for detecting messages from aliens who think something like us, and communicate their thoughts sort of as we do, with technology that we can understand. Yeah, it’s “anthropomorphic” and even our grad students can see we’re being embarrassingly simple-minded, but realistically what other choice do we have?

  2. Timothy Burke says:

    So I appreciate these points–as I said, I know other folks have been over this ground ever since the problem was first formulated in the 20th Century.

    But I think you are too quick to stake out a pragmatic ground and defend it as the only choice, both in terms of public policy and in terms of a research agenda that doesn’t have a particular policy objective.

    It’s true that pursuing SETI at any scale requires a kind of paring off of the possibilities that we simply can’t deal with within that framework, including the possibility that there are many intelligences in our galaxy but that almost none of them have technological infrastructures that signal to us in ways we can hear, or that almost none of them would ever want to. But it is possible, as Paul Davies’ recent book observes, that the overly quick paring down of SETI to the most convenient assumptions might miss both the classic side benefits of any kind of basic research (e.g., that trying to think more broadly and creatively about the problem might fetch up some other application that no one was aiming for in the first place) and might lead us to miss out on some feasible approaches to imagining and detecting extraterrestrial information transmission from technological infrastructures or biological intelligences very unlike our own.

    Second, I really contest the idea that speculative thinking has no value in its own right unless it can match to publicly sustainable, practical research programs. This is a classic, repeated problem in modern science itself: researchers who push well outside of the tools and knowledge base of the moment to engage in speculative thinking are scorned by colleagues for their extravagant and impractical visions until suddenly it turns out that they were really on to something. This is all the more important now–we have striking new ways to collaborate and collate speculative thinking at the same time that the logics governing financial support are moving away from basic science.

    Third, a somewhat separate point: I really dislike the general proposition that scientists or other scholars have to ‘trick the public’ by continuing to serve up simplified findings that they believe work well within existing public understandings in order to gain support for worthy public projects. After fifty years of that kind of logic repeatedly blowing up in the face of scientific and technological projects, it’s time to think differently about building sustained coalitions to support scientifically-driven public policy.

  3. Timothy Burke says:

    Actually, a fourth point also occurs to me: I think you are very seriously underestimating the extent to which many scientists and hard social scientists (those concerned with SETI and those who aren’t) make the kinds of teleological assumptions I’m complaining about in this essay. Having just spent a year discussing Jonathan Haidt’s The Righteous Mind, I’m impressed at the number of scholars I know (scientists and non-scientists) who were very skeptical about some of the assumptions embedded in Haidt’s work (and in similar work like Jared Diamond’s), but I also saw a reasonable number of folks, in our local discussion and in the national discussion of this kind of research, who aren’t even aware that they’re making those kinds of assumptions as strongly as they are.

    So this is another value of keeping a speculative wariness about anthropocentric assumptions re: extraterrestrial life alive–it may help make us more alert to some similarly limiting “centrisms” in our thinking about the contemporary world.

  4. WeGotThis says:

    Your definition of “teleology” is for the most part associated with Christian re-conceptions of the Aristotelian idea. Final cause is not directed toward a specific predetermined end, as it is in Christianity. Telos is constrained by self-maintenance and self-organization. There are two great new books on secular teleology: Victoria N. Alexander’s The Biologist’s Mistress and Terrence Deacon’s Incomplete Nature.

  5. ajay says:

    “We might want to at least consider the alternative: that it is a fucking strange idea that another post-industrial, post-scarcity culture of intelligences with a lot of biological similarity to us might never consider or might reject as stupid or pointless even if it occurred to them.”

    But this is actually a much less likely alternative proposal than you seem to think. You seem to be implying that “deciding to send out probes” is something like “having five fingers on your hand” – it’s a one-off decision that’s common across an entire species for the whole of its existence.
    And it is indeed silly and self-centred and so forth to say “humans have five fingers, therefore all intelligent species will have five fingers”.

    But actually “sending out probes” is in a list that includes “wearing hats” and “driving steam-powered cars” as well as “building pyramids”. It’s something that some members of a species might do at some point. And it’s a massive assumption that no members of a species will ever decide to do that at any point in their history. Look at our own history; look at the massive variety of cultures and activities that humans have had, and we’re only at the dawn of human history. Are we really all so similar that an idea that a French farmer in 1950 would reject as stupid and pointless would definitely also be rejected by a Japanese nobleman in 200 AD or a Sumerian merchant in 2500 BC or a New Yorker in 3500 AD?

    Would you really be prepared to bet that there’s a single alien species that will _never_ build a pyramid? That in the, I don’t know, five million years from the first Vogon striking sparks from a rock, to the climax of interplanetary Vogon civilisation and on to its inevitable decline and extinction from a disease contracted from a dirty telephone, no Vogon of the trillions of Vogons who have lived will ever make or even consider making a building that’s smaller at the top than it is at the bottom? A culture doesn’t reject ideas on a once-only basis.
