Matchmaker, Matchmaker, Make Me a Match: Artificial Societies vs. Virtual Worlds
Timothy Burke
Swarthmore College, Department of History
May 2005
For DiGRA 2005
The concept of "emergence" and associated ideas about autonomous agents, complex adaptive systems, artificial life and complexity theory are important underpinnings in two discrete academic projects: work on "artificial societies" on one hand and the study of "virtual worlds" on the other. The two research programs seemingly share a good deal in common but presently have almost no contact or overlap with each other. This is partly because the coalescing of both groups of researchers is relatively recent, partly because the two groups are coming out of radically different disciplinary histories and contexts, but also partly because the two groups have so far had different experiences of research and of the place of concepts like emergence within it. In this paper, I argue that both groups potentially have a great deal to learn from one another.
Artificial Societies
Scholarship on "artificial societies" derives primarily from traditions of computer-mediated modeling in economics and political science, but it has also been influenced by approaches to simulation in the natural sciences and by developments in computer science, particularly work on cellular automata and autonomous-agent and multiagent programming. Evolutionary economics and game theory, particularly the work of Robert Axelrod [1], have also been centrally important to the development of artificial societies scholarship.
The work of Robert Axtell and Joshua Epstein as described in Growing Artificial Societies provides the clearest summary description of this field of research, though other prominent publications such as Nigel Gilbert and Jim Doran's anthology Simulating Societies, Gilbert and Rosaria Conte's anthology Artificial Societies, and the Journal of Artificial Societies and Social Simulation give some sense of the breadth and depth of similar approaches to social simulation. [2] Important centers of activity include the Santa Fe Institute, the University of Michigan's Center for the Study of Complex Systems, the Brookings Institution and, most recently, the New Ties project, begun in September 2004. [3]
Epstein and Axtell observe that while simulations and models have long been essential to the "hard" social sciences, such model-making (computer-mediated or not) has for reasons both practical and epistemological always involved compartmentalizations or simplifications of social reality which then serve as the basis for the reapplication of the model to the real world. In Epstein and Axtell's view, this modeling process is a poor basis for satisfying the aspiration of the hard social sciences to achieve rigorous empirical reliability, resulting in the creation of models whose simplifications and single-variable focus render them unsuitable for understanding and intervening in real-world social and economic phenomena.
Conventional strategies, they argue, are "top-down": they start with a somewhat arbitrary or heuristic compartmentalization of real social phenomena, selected primarily to demonstrate an underlying hypothesis about the more complex reality, which a top-down simulation then tautologically confirms. Such model-making also frequently ignores or at least suppresses time-dynamical aspects of the phenomena, and drives for equilibria everywhere in order to avoid the difficulty of engaging with non-equilibrium processes.
Epstein and Axtell argue that emergence-based approaches to simulation, those that draw on insights from the study of complex adaptive systems and use computers and software to handle the computational intricacy of such simulations, are capable of becoming instruments of inquiry that may make the hard social sciences into a meaningfully experimental, empirical and scientific form of inquiry. They suggest that work on artificial societies, using agent-based approaches, may permit experimentalists to create social simulations which approach real-world complexity from the "bottom-up". Rather than the researcher tautologically cutting his model to fit his hypothesis, the researcher may be able to observe and hypothesize about forms of complex behavior which emerge organically from simple initial conditions within a simulation, to grow societies "in silico".
In concrete terms, most of the work in this burgeoning field has yet to approach anything close to this aspiration, as most researchers in the area would be the first to admit. Most, though not all, of the existing artificial societies research involves the creation of simulation environments fairly similar to Epstein and Axtell's "Sugarscape", involving the interaction of several classes or types of agents in relation to relatively simple competitive or cooperative behaviors. Even so, this work has already succeeded both in adding some fundamentally new tools to modeling in the social sciences and in raising some significant new questions about established concepts and approaches in a number of fields.
Work in artificial societies is especially impressive in the ways it adds a temporal element to even fairly conventional kinds of modeling. As Epstein and Axtell argue, agent-based artificial societies models are inherently historical. Artificial societies research has already proven to be extremely useful for modeling particular types of real-world complexity that seem to closely but manageably correspond to the central tenets of complexity studies and evolutionary economics, for example, the spread of epidemics or the range of bargaining behaviors employed by agents in economic transactions. Epstein and Axtell's Sugarscape experiments and some work by other scholars in this field have also demonstrated one of the central features of emergent systems as they are observed in other contexts, namely, the capacity to produce results which initially seem counter-intuitive or surprising to human observers. Agent-based social simulations do not always or even often produce the kinds of complex behaviors that more conventional kinds of modeling produce.
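To make the flavor of such models concrete, the following is a minimal Python sketch of a Sugarscape-like, bottom-up simulation: agents with randomly assigned vision and metabolism wander a grid of renewable sugar, harvest what they find, and die when they run out. The particular rules and parameters here are my own illustrative simplifications rather than Epstein and Axtell's published ruleset, but even a toy model of this kind can generate wealth inequalities and population dynamics that no single agent's rules specify.

    # A minimal, Sugarscape-like agent-based model (illustrative sketch only).
    # Rules and parameters are simplified assumptions, not Epstein and Axtell's published ruleset.
    import random

    SIZE, GROWBACK, STEPS = 30, 1, 100

    class Agent:
        def __init__(self, x, y):
            self.x, self.y = x, y
            self.vision = random.randint(1, 4)       # how far the agent sees along the lattice
            self.metabolism = random.randint(1, 3)   # sugar consumed per time step
            self.wealth = random.randint(5, 15)      # initial sugar endowment

    def run():
        # Environment: a torus of sugar piles with random capacities.
        capacity = [[random.randint(0, 4) for _ in range(SIZE)] for _ in range(SIZE)]
        sugar = [row[:] for row in capacity]
        agents = [Agent(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(100)]

        for step in range(STEPS):
            occupied = {(a.x, a.y) for a in agents}
            random.shuffle(agents)                   # unplanned, asynchronous activation order
            for a in agents:
                # Look along the four lattice directions; move to the richest unoccupied cell seen.
                best = (a.x, a.y)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    for dist in range(1, a.vision + 1):
                        nx, ny = (a.x + dx * dist) % SIZE, (a.y + dy * dist) % SIZE
                        if (nx, ny) not in occupied and sugar[nx][ny] > sugar[best[0]][best[1]]:
                            best = (nx, ny)
                occupied.discard((a.x, a.y))
                a.x, a.y = best
                occupied.add(best)
                # Harvest, then pay the metabolic cost.
                a.wealth += sugar[a.x][a.y]
                sugar[a.x][a.y] = 0
                a.wealth -= a.metabolism
            agents = [a for a in agents if a.wealth > 0]   # starvation removes agents
            # Sugar grows back toward capacity.
            for x in range(SIZE):
                for y in range(SIZE):
                    sugar[x][y] = min(capacity[x][y], sugar[x][y] + GROWBACK)
            if step % 10 == 0:
                wealths = sorted(a.wealth for a in agents)
                print(step, "population:", len(agents),
                      "median wealth:", wealths[len(wealths) // 2] if wealths else 0)

    if __name__ == "__main__":
        run()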
Virtual Worlds
Virtual worlds are computer-mediated interactive environments in which human actors control one or more characters or avatars, software agents meant to represent them, in persistent-state but synchronous software environments where changes to the environment or the agent are permanently recorded and recalled from session to session. Virtual worlds thus have a strongly temporal character, differing from other computer-mediated entertainment or simulation environments in which there is no persistent record of interactions from session to session.
As a specific form of new media, virtual worlds have their origins in academic and applied computer science, most centrally in the early development of MUDs (multi-user dungeons). The first MUD was created in 1978 by a British undergraduate, Roy Trubshaw, and elaborated upon by Richard Bartle and Nigel Roberts. [4] Bartle went on to become both a key early progenitor of virtual worlds as a phenomenon and one of the leading academic proponents of their scholarly study. In fact, for many MUD designers, their worlds were both an expressive media form and an instrument for conducting systematic research in social psychology and social science.
During the 1980s, MUDs were made available to their users largely free of charge, maintained within university or research-institute facilities, and experienced by relatively small numbers of people. Most were entirely "text-based", meaning that the environment was represented almost entirely by words (occasionally some MUDs used letters to create simple ASCII-style graphics like maps) and action within the world was enacted with the use of a text parser. Many of these early worlds and virtual communities were designed to test specific applications, such as LambdaMOO, which Xerox PARC researcher Pavel Curtis hoped could be a test bed for text-based synchronous conferencing systems [5], or were designed by academic researchers as thought-experiments of a sort, to somewhat playfully test particular ideas about social or psychological behaviors [6]. Over time, the number of MUDs grew, and increasingly large numbers were designed and maintained by hobbyists or as a new media form that served creative and cultural purposes. Many of them, in either their academic or aesthetic manifestation, drew significantly on a genre of non-computer-mediated games which slightly preceded the advent of MUDs, such as Dungeons and Dragons, which in turn drew on postwar fantasy literature, particularly The Lord of the Rings. As the number of MUDs grew, the variety of underlying structures in their persistent worlds also multiplied: virtual worlds built around killing monsters and accumulating resources, virtual worlds built around social relationships, virtual worlds built around collective storytelling.
In the mid-1990s, the first major pay-to-play virtual worlds began to appear. Some were text-based, but starting in 1996, commercially run graphical virtual worlds began to appear, with a number of them becoming substantially profitable. EverQuest at its peak had as many as 500,000 subscribers in the United States and Europe. Several games in East Asia, particularly Lineage, have garnered very large numbers of users, though the nature of the local market makes the numbers very difficult to compare to those of other massively multiplayer online games (MMOGs). Most recently, World of Warcraft has achieved unprecedented global success and popularity, far outstripping EverQuest at its peak.
The growth of commercially successful virtual worlds has spurred existing scholarly interest in the study of virtual worlds and diversified the range of disciplines involved in this research. Economists, psychologists, political scientists, ethnographers and legal scholars are now active in the field, not to mention scholars who define themselves as working within the nascent disciplines of game studies or ludology. Unlike the scholars working on artificial societies and simulations, these scholars are studying a social and cultural phenomenon that is exterior to their own efforts: commercial virtual worlds are largely not a research-driven attempt to model social reality. However, one of the interesting aspects of scholarship on virtual worlds is that it often draws key designers or practitioners into scholarly debate, and the tradition of direct scholarly participation in virtual world design also remains strong, even if the requirements for producing full-scale graphical virtual worlds are now well beyond the means of even the best-funded scholars (they cost many millions of dollars to design and maintain).
The chief interests of scholars working on virtual worlds largely divide into three major areas:
a) The internal economies of virtual worlds and their interface with real-world economic value. Most of the major commercial worlds have substantial internal economies, and the gameplay they offer customers is centrally driven by accumulative dynamics. Players invest labor to become more powerful in order to extract resources more effectively from monsters and the virtual world itself, which allows them to become more powerful and therefore extract resources more effectively still, and so on, a dynamic that some players appropriately call "the treadmill". Much of the wealth created by this virtual labor is internal to the player's avatar and cannot be abstracted from it, but in many virtual worlds, the player's power is also amplified through the acquisition of virtual items which can be traded between players. A substantial secondary market on eBay and other auction sites has grown in which players sell off both virtual items and the avatars themselves for considerable sums. The economist Edward Castronova has become the key scholarly figure in the study of both the internal economies of virtual worlds and their connections to real-world economies, and was the first to rigorously quantify the "exchange rate" between value in one of these worlds and value outside of it. [7]
b) The psychological questions raised by the relationship between players and their avatars: are avatars expressive of the psychology of their human controllers, and in what ways? Do relations between avatars or the action of gameplay have a psychological impact on real-world players? Do virtual worlds create a novel context for psychological expressiveness? What difference does the visual interface make in the psychological expressiveness or consequences that follow from participation in a virtual world? (Some virtual worlds use an isometric third-person perspective, while others use a first-person perspective, and still others allow players to switch between differing orientations.) What do formations and practices of identity in virtual worlds tell us about the history and practice of identity-forms in the real world? This field is especially dominated by questions about gender and sexual play in virtual worlds, but it also struggles with a major evidentiary problem: comprehensive data about the demographics and social identities of players are closely guarded by the owners of major virtual worlds, and fine-grained studies of individual psychology and behavior are also made difficult by concerns about anonymity and typicality.
c) Questions about the social, political and legal structures governing virtual worlds and their evolution over time. Here there are both empirical questions about the social structures within games (what they are, how they came to be, how they change within a given virtual world and between virtual worlds) and also, even more pressingly, questions about the relationship between real-world social and political practices and structures and the virtual world. Is a virtual world a way to compactly examine the particular existing character of contemporary societies (or some social subset of them), or is it a way to isolate and simplify some fairly universal dynamics governing human sociality and politics? Is it a model or a mirror?
Emergence As Aspiration and Constraint
"Emergence" is a difficult concept to grasp and employ. In some formulations, it comes close to being a truism, or so broad as to be virtually meaningless. In both artificial societies and virtual world research, however, the concept tends to be more specifically defined and used. The basic core of the idea, that emergence is defined by the formation of complex patterns or systems from simple initial conditions without any governing or controlling blueprint or design, is generally coupled with an interest specifically in cases of emergence that involve rule-driven agents that act independently and simultaneously within and upon an environment which is distinct from the agents.
For researchers working on artificial societies, the concept of emergence and related ideas are explicitly invoked and foundational to the distinction between artificial societies simulations and other kinds of social-science modeling. Emergence is taken both as a sign of the resemblance between artificial society simulations and the real world, and as a protection against tautological manipulation of the simulation. If a given simulation can produce complex behaviors or patterns from simple agent-based starting conditions, many artificial society researchers take that as reasonable confirmation that real-world complexities of a similar kind have followed a similar process of evolutionary development.
In virtual worlds research, however, the use of the idea of emergence is much more implicit, rarely invoked in detail. Richard Bartle, for example, describes the content of virtual worlds that arises "from the natural actions of players" as "emergent or self-generating", which he distinguishes from content explicitly or intentionally designed by either developers or players. [8] Nevertheless, many researchers and virtual world designers are conversant with the concept: at the Game Developers Conference in September 2004, developer Warren Spector built his keynote speech around the term, noting both its ability to incisively describe familiar patterns of gameplay and virtual world structure and its applied possibilities for solving long-standing problems of implementation and design. [9] In many cases, I would argue, developers and researchers interested in virtual worlds who make no explicit or deliberate use of the term or the body of research related to it nevertheless write in terms which recognizably invoke these concepts and ideas in some fashion.
Artificial society simulations encounter a number of issues in their bottom-up, emergence-driven approach. First, emergence-based artificial society simulations to date tend to lack new outcomes traceable to later events in a system's evolution. The end-state or later complexity of the simulation tends to derive directly from the initial condition. Most such simulations tend to settle into relatively stable self-organizing states which can only be stimulated to new development or change through user intervention. Yet if emergence applies to real human social or cultural evolution, then in this respect the simulations are very poor models indeed, as complex structures or patterns which arise from particular simple antecedents tend to generate in turn yet more patterns or novel structures, each as potentially unpredictable or surprising in its own way as the initial flowering of self-organizing patterns might have been. Moreover, there is at least some legitimate reason in the context of real human social dynamics and history to think that some of this "emergence from emergence" is contingent, that re-running the "tape of history" would not produce the same results, as Stephen Jay Gould memorably suggested in his book Wonderful Life. [10] Agent-based, "bottom-up" artificial society simulations have a limited ability to demonstrate similar contingency.
This leads to a second problem with emergence-based approaches in social simulation: results or end-states can be very hard to quantify and rigorously describe. Emergent systems are quintessentially process-driven and dynamical in their form: you have to see them in motion in order to fully understand them. Seeing any single static representation of such a system, whether simulation or real-life example, tells you relatively little. Various graphings or representations of the dynamic evolution of such a simulation can provide usefully compressed information about their histories, and the artificial society approach does permit researchers to study non-equilibrium dynamics in ways that other social science instruments find prohibitively difficult. [11] To some extent, dealing with the first issue I noted makes this problem far worse. It is not that it is technically impossible to simulate emergence at multiple scales or levels, but that the more levels of emergent processes an artificial society contains, the more difficult it is to make a meaningful connection between changes in the initial conditions of the simulation and its dynamic behavior over time, and the more difficult it becomes to offer any rigorous or quantitative statement about what happened in a particular iteration of the simulation.
The issue of multiple scales or levels also relates to the epistemological problem of methodological individualism in an agent-based or cellular automata approach to artificial society simulation. An agent-based approach represents social, cultural or economic rules within each agent, and the unplanned or uncoordinated interactions between such agents and between agents and their environments generate complexity and organization over time. The problem here is that inasmuch as these simulations aspire to be artificial societies, they need to move beyond methodological individualism at some point, to be able to represent the emergent consequences of agent interactions as persistent or permanent in their own right, even as higher-level agents in their own right. In most such simulations, agents interact with each other through alterations in the environment, in a process called stigmergy, a term originally coined by the biologist Pierre-Paul Grassé to describe processes by which social insects coordinate activities like nest-building without any internal model of the construction process or any external controlling or directing agents. Even models in which agents also interact with each other directly (say, simulations in which agents can kill each other or share "genetic" information of some kind in successive generations, or, as in the New Ties project, agents that are permitted to communicate information directly to each other, not merely through stigmergy) still face the issue of methodological individualism. Whatever patterns or structures appear within the simulation environment, they exist only as an expression of the rulesets within the agents, not as persistent structures which relate to the rest of the environment or the agents independently. Yet this is precisely what social institutions like law or material structures like buildings become at some point: agents in their own right, able to interact with and alter both environment and agents even without the continuing animating force of the agents that gave rise to them. Epstein and Axtell suggest that this issue might be dealt with by placing artificial agents within physically realistic environmental models, where the stigmergic alterations of the environment would have physical permanence and the effects that followed from that. [12]
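Stigmergy itself is easy to illustrate. The following minimal Python sketch (the parameters and the pheromone-trail metaphor are illustrative assumptions rather than any published model) has agents that never address one another directly: each reads the local concentration of a marker in a shared grid, moves toward stronger markers, and deposits more marker where it lands. Trails of collective activity appear, but if the agents are removed, the evaporating markers soon erase them, which is exactly the limitation described above: the emergent structure has no persistence independent of the continuing activity of the agents that produce it.

    # Minimal sketch of stigmergy: agents coordinate only by depositing and reading
    # markers ("pheromone") in a shared environment; no direct agent-to-agent messages.
    # Parameters and rules are illustrative assumptions, not drawn from any published model.
    import random

    SIZE, DEPOSIT, EVAPORATION = 20, 1.0, 0.05

    def neighbors(x, y):
        return [((x + dx) % SIZE, (y + dy) % SIZE)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

    def step(agents, pheromone):
        for i, (x, y) in enumerate(agents):
            options = neighbors(x, y)
            # Mostly follow the strongest local marker; occasionally explore at random.
            if random.random() < 0.8:
                x, y = max(options, key=lambda c: pheromone[c[0]][c[1]])
            else:
                x, y = random.choice(options)
            pheromone[x][y] += DEPOSIT        # write to the shared environment
            agents[i] = (x, y)
        # Markers decay unless continually reinforced by agent activity.
        for x in range(SIZE):
            for y in range(SIZE):
                pheromone[x][y] *= (1 - EVAPORATION)

    pheromone = [[0.0] * SIZE for _ in range(SIZE)]
    agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(30)]

    for t in range(200):
        step(agents, pheromone)
    print("total marker after 200 steps with agents:", round(sum(map(sum, pheromone)), 1))

    agents = []                               # remove the agents...
    for t in range(200):
        step(agents, pheromone)
    print("total marker after 200 more steps without agents:",
          round(sum(map(sum, pheromone)), 1))  # ...and the 'structure' evaporates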
Most of these issues involve problems of mimesis or representation: to what extent must an artificial society ultimately resemble the real world in order to tell us anything? What does an artificial society in which the agents are driven by rules dramatically unlike any we might conceptualize for humans tell us? On the other hand, what does a simulation which is closely designed to approximate the actual historical or lived constraints and character of particular human agents tell us that we do not already know? [13] Epstein and Axtell's strategy is to start with the simplest and most universal environments; other artificial society researchers set out to make fairly precise and detailed models of particular situations or problems [14], but in both cases, the question of resemblance or connection between the lived and simulated world is a continuing source of difficulty.
In fact, the deepest issue with artificial society simulation might lie in the ambitions most clearly articulated by Epstein and Axtell but echoed by other researchers in the field. They dream of a bottom-up social science which would no longer have to subtract real-world complexity in order to model or experiment with social processes. However, if you could create an artificial society which contained all the complexity of real-world sociality, what would you learn from it that you could not learn from simply observing the real world? You can't subdivide the results of a hugely complex social simulation very easily, and thus cannot really have a "god's-eye view" of such a simulation, any more than you can with real life. The more attractively complex artificial societies become, the less tractable they are as instruments. Yet stopping short at some point in the process of adding complexity leaves the artificial society researcher where he or she began in the first place, doing exactly what Epstein and Axtell accuse traditional social science modeling of doing: attempting to explore a hypothesis about the real world by reductively truncating its actual complexity. Traditional social science has plenty of heuristics available for justifying that kind of reductionism, but artificial society modeling by its aspirations may deny itself some of the same justifications.
Virtual worlds research and design carries almost none of this baggage because its ambitions are so radically different. For contemporary developers of virtual, persistent-state worlds or games, the first priority is commercial success. It is not the only priority: a number of contemporary MMOGs continue to invoke the old experimental ethos of MUD design. The game A Tale in the Desert, for example, is centrally built around playing with hypotheses of social and political behavior and institution-building; the virtual world Second Life is designed to have extensive material, economic and social plasticity. Even developers working on games with less of a boutique sensibility, such as Star Wars: Galaxies, may harbor ambitions to experimentally play with social dynamics. Some would in fact argue that this is the dividing line between MMOGs which aspire to be "virtual worlds" and those which strive simply to be games. The distinction is often made with some sharpness: the developers of World of Warcraft commented pointedly, "MMO is a game, not a social experiment… MMOs shouldn't be about a designer playing god and seeing what all his little ants do in his digital ant farm". [15]
Still, even developers who envision what they do as the making of games or "theme parks" have to deal with emergent forms of sociality and behavior among players, and as I noted earlier, many designers like Warren Spector look to the concept of emergence as a technical strategy for future design. It is understood that the more complex these virtual worlds grow, the less possible it is for even an extremely well-funded development team to directly author or create all aspects of the virtual environment, and even in relatively simple virtual worlds, the interactions of players with that environment must be automated. Events where a human "imagineer" directly manipulates the gameworld to provide an unscripted experience for players are extremely popular but pose essentially insurmountable practical problems. Strategies which turn on emergence are seen as an important answer to these issues, as a way to provide a responsive virtual world which changes dynamically in response to the actions of users without the direct or controlling intervention of human authors. The most sophisticated attempt to explore this design strategy to date may be Michael Mateas and Andrew Stern's Façade, an interactive conversational drama which uses emergent strategies to auto-generate novel, dynamic but dramatically coherent responses to player input. [16]
For virtual world researchers, the observation and analysis of emergent sociality within MMOGs and similar computer-mediated environments is the central substance of their work. Unlike artificial society researchers, virtual world scholars are not exploring hypotheses through a process of controlled simulation design. They are commenting on social dynamics which take place within relatively uncontrolled contexts. Yet like artificial society simulators, these researchers are dealing with a case of agent-based emergence that is tantalizingly comprehensible because of its relative simplicity in comparison with the real world. It is not merely that virtual worlds are defined and constrained by the code used to make them, but that real human beings within such environments are essentially turned into rule-based and relatively simplistic agents. Virtual worlds rest on the Turing Test in reverse, the truncation of the complexity of human individuals into manageably simple rule-constrained software-expressed agents.
Just as artificial society simulations in theory may permit a researcher to hypothesize a cause-and-effect relationship between some set of simple initial conditions and some complex pattern or system, virtual worlds allow scholars to argue that intricate patterns of social practices and institutionalized behavior evolving over time within those worlds derive from basic rules governing player actions within the game environment. Virtual worlds have a real initial condition, a moment where they are uninhabited by agents; they have histories which can be observed, recorded, traced. Concrete, specific changes are made to their rules and their environments whose propagating social effects can be observed and described.
There are now a great many specific examples of emergent dynamics known to researchers from MMOGs, MUDs and other virtual worlds. Andrew Leonard, for example, has described the cascade of social and environmental consequences that followed the introduction of a Barney the Purple Dinosaur bot into a text-based virtual world called Point MOOt in 1993. Players who initiated violent action against Barney caused the bot to replicate; each Barney bot wandered freely throughout the virtual environment. It would not have been that hard to guess that this particular design feature (violence leading to replication) would lead to massive increases in the number of Barneys. This in turn led designers to add a new wrinkle to their economic model, which paid players in development resources for adding new elements to the world but also allowed players to go on a form of welfare. Barney-hunting became one of the welfare-work assignments players could receive. As Leonard observes, this in turn drew more player attention to the Barneys, and players discovered other ways to make them replicate even further (such as typing the command "feed"). The result: population growth outstripped the ability (or interest level) of any Barney-hunter to abate it. [17]
This is a relatively simple instance in an environment designed as an experimental one. Commercial graphically-based MMOGs offer plenty of less controlled cases. Many of the best known ones involve patterns of economic accumulation and extraction. For example, in the game Asheron's Call, players discovered relatively early in the history of the game that certain spots within the virtual landscape allowed a player who possessed a ranged attack to shoot enemies with impunity, as none of the enemies could reach the player's location, a behavior that became known as perching. Many of the players who discovered perches attempted to keep their locations secret, but sufficient numbers of other players were able through simple observation to divine what was going on. In a fairly short period of time, competition for a number of perching spots grew very intense, while other players attempted to frustrate the accumulation of perchers by attacking their targets and preventing the percher from garnering the rewards of combat. Perching was one of the chief forces driving the rapid spread of hunting "macros" throughout the gameworld, whereby players automated the actions of their characters so that a character could repetitively extract resources 24 hours a day from a safe perching spot. This in turn had rippling economic effects throughout the rest of the world, driving inflation, making macroing a more and more constant feature of gameplay, and so on. Developers were forced to spend time identifying and eliminating perching spots within the gameworld terrain and eventually banning macroing itself, though that came at a point where most of the players who objected to macroing had long since left the game. This is a fairly representative example of how emergence works in virtual worlds: a small feature turns out to have unexpected and unintended consequences within the environment. Players use the feature in increasing numbers, changing the distribution of types of player-agents in the gameworld, the pace and nature of accumulative activities, the structure of the virtual economy, the qualitative experience of social life in the game environment, and the nature of the "social contract" between players and developers.
As in most cases of emergent phenomena, it might be hard to predict in advance that the small or obscure feature in question could lead to such large systemic results. Early in the game's history, I recall passing by one of the earliest popular "perches" in Asheron's Call, from which one could shoot creatures called olthoi with impunity. For at least the first month of gameplay, no one was ever there, and I took no note of the perch (an upraised rock promontory with a flat top in the middle of a high mountain valley). Then a few people started to be there, and then more, and then suddenly it was a major focus of player activity. But just as in other cases of emergence, once a complex system or pattern appears, it is relatively easy to understand the dynamic connection between some initial condition or rule and the complex consequences of that rule.
There are many other examples of similar dynamics. In the game City of Heroes, for example, creatures called warwolves initially had no ability to retaliate against a flying character who was attacking from a distance. In fairly short order, flying and hovering superheroes could be found clustering in parts of the virtual world populated with warwolves, defeating warwolves as fast as they spawned. The developers eventually gave warwolves a ranged attack, ending this strategy. Since City of Heroes has a very simple economy, the only long-term structural consequences were a brief oversupply of a particular kind of superhero and a slight deformation of the expected rate of experience acquisition.
On the other hand, the early evolution of Star Wars: Galaxies was marked by the rapid discovery by players of unintended design features, ranging from rapid rates of monetary accumulation from simple repeatable missions in the first two weeks after the game went live to the discovery that one type of powerful enemy, the baz nitch, could not fight back against players if the player went inside the creature's randomly spawning lair. The latter feature took players about a month to discover, but once they did, the economic consequences were so enormous (baz nitch kill missions netted huge sums) that this one feature alone propagated structural effects through the gameworld economy that were felt for months and months afterward; indeed, they were arguably still present within the gameworld up until the recent major redesign of spring 2005.
Emergence may look good to developers as a tool for the automated generation of complex content, but emergent effects of gameplay on the economies, social practices and institutional formations of virtual worlds are also a managerial nightmare with which no live management team has been wholly successful in coping. Such effects spring from environmental features and rules unpredictably, and can propagate through a gameworld with stunning speed and ferocity. By the time a development team is aware of the complex structural consequences, some of the effects may have a permanence that could only be undone by erasing the entirety of the gameworld's history from the initial condition of discovery forward. Moreover, because players often benefit from such discoveries, they frequently seek to conceal information from developers about such practices. Left alone long enough, such effects become a determining feature of player sociality and cultural life: even if the rule or feature which gave rise to new patterns is changed, players may transmit and disseminate new patterns of practice to each other as if the rule or feature remained within the game.
Far more importantly, in many cases players are highly motivated to conceal their discovery of or knowledge about emergent transformations of gameplay, and at least some of the propagation of such transformations takes place in social frameworks outside the virtual world itself, in the real-world sociality of players. This becomes one of the basic problems confronting scholars studying virtual worlds: they are not self-contained. Players bring to them a whole range of predicates: their real-world identities and ways of thinking, their prior experience of playing games and acting within online environments, the economies of access and time that govern their ability to play. Past experience trains players to expect and quickly identify certain kinds of emergent effects from recurrent or common design features. In effect, players strain continuously against the attempt to reduce them to software agents governed by tractable rules: their actual human agency subverts, bends, evades and expands upon those constraints. In most virtual worlds, the vast majority of players behave in economic terms like the classic utility-maximizers of neoclassical economics, but it is fundamentally difficult to say whether that is a consequence of the rules-based constraints on players, the ways they are defined as agents by the software, or their real-world human nature. Artificial society simulationists have a problem deciding at what point their ambition for resemblance to the real world ends and the heuristic constraints of manageable reductionism take over; virtual world researchers have a problem deciding when gameworld sociality and practice are the product of the game's own initial conditions and when they are the product of real-world complexity interweaving into and giving rise to gameworld complexity.
Connections and Possibilities
Anyone primarily interested in emergence and self-organization as phenomena should find the strong parallels and relationships between these two fields of inquiry of real interest. However, I think researchers in both fields would also benefit from active dialogue and comparison of methodological and theoretical strategies, particularly in terms of the way emergent effects manifest as both promise and problem in their respective domains. The technical affinities and the substantive overlap of interests and skills in the two communities are, I think, obvious. More compellingly, in certain cases, one field of study offers insights into or solutions to known problems in the other.
For example, because virtual world environments have a tangibility and physicality that artificial society simulation environments do not, they often achieve some of the effects that Epstein and Axtell predict may be seen in "hybrid models". Many MUDs and MMOGs have precisely the kind of multi-tiered levels of complexity that artificial society models to date mostly lack, where stigmergic effects no longer require constant reinforcement by the repetitious activity of agents and achieve some kind of autonomous status within the environment. In the early history of the MMOG Shadowbane, for example, once a large guild of players established itself in certain key resource-rich places within the gameworld, that guild not only tended naturally to dominate all others, but it also tended to become a self-reinforcing organization: players in that guild needed to invest minimally in active recruitment or social maintenance, as the inflow of players desiring to be members was constant simply because of where the guild city was located. Clusters of player-placed buildings in Star Wars: Galaxies have tended to become self-reinforcing patterns in a similar manner: by altering where and how players receive and dispose of resources and tools, they become a permanent expectation, a new geography that directs movement and activity. Even if one building disappears from such a location, it tends to be rapidly replaced by another.
Virtual worlds also tend to have an organicism to them, an unplanned character, that even the richest and most complex artificial society models still lack. The sheer complexity of the underlying code, the proliferation of often contradictory rules and properties, the commercial imperative behind most such worlds, and the intertwining of real-world players and agent-type constraints on action make the mirroring between virtual worlds and the real human world (and the role of emergent systems in both) often far more compelling, if also far less tractable in terms of "hard" social science instruments. Virtual worlds researchers are almost invariably driven towards forms of ethnographic or qualitative study because of the nature of their data, and as a result often glimpse dynamic phenomena that most artificial society work strains to capture. Agents in virtual worlds interact with one another, modify each other's rules and behavior, change their own interpretations or instantiations of their rules, and communicate and negotiate with other agents about goals, intentions and desires. On some servers in the MMOG World of Warcraft, players on the two opposing sides were intended not to be able to communicate, but players rapidly worked out ASCII-based ways to type text messages in-game. When the designers stepped in to prevent this, many players turned to software tools outside the game, such as Teamspeak, to facilitate communication. Others use sign language and emotes allowed by the game to negotiate or devise elaborate modes of etiquette about combat and coexistence. The emergence of complex systems and patterns in virtual worlds is much more akin to the continuous and contingent ways that emergence functions in human societies, but remains simple and constrained enough to be described, observed and tracked in ways that the richest ethnographic work in the real world strains to achieve.
Artificial societies research, on the other hand, studies phenomena which are repeatable, iterative and highly reconfigurable. It is possible to endlessly tweak and play with artificial society simulations and to run many instances of the same simulation. MMOGs and even relatively "experimental" MUDs, in contrast, tend to have a temporality rather similar to that of real-world societies: a single sequential unfolding of the virtual world from a single set of initial conditions. A Tale in the Desert and Meridian 59 are the only significant commercial MMOGs in recent years to have experienced full restarts, and only A Tale in the Desert has done so by design, with the conscious intent to iteratively explore the possibility spaces allowed by its basic foundations. This leaves virtual world researchers and developers on very thin ice when they attempt to make strong arguments about the particular causal foundations of any complex behavior pattern or social institution within a gameworld. I've argued that the economic behavior of players in Star Wars: Galaxies is substantially a consequence of the design of that virtual world; one of the game's designers has suggested strongly that their behavior is a consequence of desires and motivations that the players brought to the game with them, and has little to do with the design of the world itself. I think I'm right, but without repeatability, the debate largely comes down to a matter of faith and ethnographic insight. Artificial society research has a repeatability, testability and tractability that virtual worlds lack, and it is easy to see how virtual world studies and virtual world design could both benefit from more iterative and repeatable approaches. Test servers are used by most MMOG designers to explore the consequences of contemplated changes, but often hastily and unreflectively, and developers often pay a substantial managerial price for this at a later date.
To some extent, both fields suffer from what I consider to be an essential epistemological problem that comes with emergence and complexity. Emergence insists that there is a causal relationship between simple initial conditions and the self-organizing complex patterns or structures which appear from or out of those simple conditions. The relationship is not symmetrical: the initial conditions and the subsequent complexities are not the same thing at different scales, but are instead fundamentally different from one another. The appeal of the concept is that it is deeply anti-reductionist without insisting that existing complexity is impossible to relate causally or temporally to simpler precursors or initial causes. But because emergence is also asymmetrical, it is sometimes epistemologically impossible, in any artificial society simulation or virtual world that has a meaningful and interesting size, number of elements, density of behaviors and rules, and so on, to successfully hypothesize about the relationship between any single initial condition and any resulting condition or state of complexity. In virtual worlds, researchers, players and developers all frequently argue with one another about cause-and-effect relationships, and on some level, such arguments are intrinsically irresolvable. The same goes for artificial societies simulations which go beyond rudimentary, single-variable or single-rule environments and agents. As Epstein and Axtell's Sugarscape adds elements and variability, cause and effect become by their nature less and less confidently specified or understood.
This is not a problem for those who take an interpretative or hermeneutical perspective in either field. [18] It is a problem for those who want either artificial societies or virtual worlds to serve as instrumental, utilitarian platforms for the testing or construction of better policy or as guides to human action, or for that matter, even more simply, for those game developers who want to build a better mousetrap. It's possible, with more iterative, experimental approaches to programming and testing, to have a better or at least richer sense of the consequences of particular virtual world rulesets and designs, but the epistemological veil that emergence raises between cause and effect will eventually intercede no matter how much movement is made in that direction.
The best intellectual traditions of artificial societies research maintain that the main usefulness of this form of inquiry is ultimately not as an empirical instrument that accurately models the world as it is. Instead, as Gilbert and Conte argue, artificial society models allow researchers to ask "what are the sufficient conditions for a given result to be obtained?" [19] Artificial society simulations are a kind of hypothesis-generating device, an exploration of the possibility space of explanation. A productive simulation proves nothing about what "really happened", but it may allow us to conceptualize possible explanations of what happened that had never previously occurred to us, or demonstrate that a more conventionally argued hypothesis previously seen as authoritative makes unwarranted assumptions about necessary and sufficient causality. Epstein and Axtell have already demonstrated, for example, that even in quite simple agent-based simulations of trading activity, the assumptions and predictions of equilibrium economics are somewhat shaky. Thomas Schelling's segregation model helps suggest that it is at least possible that arguments about real-world cases of spatial segregation which insist on the explanatory necessity of top-down external constraints or coordinated planning of segregation are wrong, that a more emergent scenario is sufficient to generate spatial segregation. [20]
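Schelling's model is simple enough that a workable version fits in a few dozen lines. The sketch below is a minimal Python rendering with illustrative parameters of my own choosing (grid size, vacancy rate, a mild preference for about a third of one's neighbors being of the same type), not Schelling's original checkerboard procedure in every detail; even so, the average fraction of like neighbors typically climbs well above what any individual agent demands, which is precisely the point about sufficiency.

    # Minimal sketch of a Schelling-style segregation model. The grid size, vacancy
    # rate, and "one-third like me" tolerance are illustrative assumptions; even this
    # mild preference typically produces strongly sorted neighborhoods.
    import random

    SIZE, VACANCY, THRESHOLD, ROUNDS = 30, 0.1, 0.34, 60

    def like_fraction(grid, x, y):
        me, like, total = grid[x][y], 0, 0
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                n = grid[(x + dx) % SIZE][(y + dy) % SIZE]
                if n is not None:
                    total += 1
                    like += (n == me)
        return like / total if total else 1.0

    # Populate the grid with two agent types and some vacant cells.
    grid = [[None if random.random() < VACANCY else random.choice("AB")
             for _ in range(SIZE)] for _ in range(SIZE)]

    for r in range(ROUNDS):
        vacancies = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
        # Agents with too few like neighbors move to a randomly chosen vacant cell.
        movers = [(x, y) for x in range(SIZE) for y in range(SIZE)
                  if grid[x][y] is not None and like_fraction(grid, x, y) < THRESHOLD]
        random.shuffle(movers)
        for x, y in movers:
            if not vacancies:
                break
            nx, ny = vacancies.pop(random.randrange(len(vacancies)))
            grid[nx][ny], grid[x][y] = grid[x][y], None
            vacancies.append((x, y))
        occupied = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is not None]
        avg = sum(like_fraction(grid, x, y) for x, y in occupied) / len(occupied)
        if r % 10 == 0:
            print("round", r, "average like-neighbor fraction:", round(avg, 2))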
As a kind of gedanken experiment, the biologist Paul Grobstein and I once proposed using the simulation environment NetLogo as a tool for exploring the difference that human agency or consciousness makes in emergent processes, by asking human beings to substitute for NetLogo agents in a simulation. [21] We envisioned first allowing the human beings only one vector of choice that NetLogo agents (called "turtles") do not have: the choice of which direction to move in a given time step, with no information about the environment save a one-pixel radius around their agent and no ability to communicate with other agents. In the second iteration, we considered that the humans posing as "turtles" would still only be able to choose the direction of their next move, but would have a global view of the total environment. In the third, we envisioned that all "turtles" in the environment would be allowed to communicate in real time as well as have a view of the total environment. The thought here is to look a bit at how degrees of agency and information affect what happens in an emergent system.
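A rough sense of how such an experiment might be wired up, outside NetLogo, is sketched below. The code is hypothetical and illustrative: the function names, the grid, and the policy interface are my own assumptions, not the actual pilot design. The point of the structure is simply that the simulation loop stays identical across conditions while the decision procedure is swapped out, from a scripted rule, to a human with only a local view, to a human with a global view and a message channel.

    # Hypothetical sketch of the human-as-turtle experiment: the simulation loop is the
    # same in every condition; only the choose-direction policy changes. Names and
    # structure here are illustrative, not the actual NetLogo pilot described above.
    import random

    SIZE = 20
    DIRECTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

    def rule_based_policy(local_view, global_view=None, messages=None):
        # Condition 0: an ordinary scripted "turtle" wanders at random.
        return random.choice(list(DIRECTIONS))

    def human_local_policy(local_view, global_view=None, messages=None):
        # Condition 1: a human chooses a direction, seeing only nearby agent positions.
        return input(f"local view {local_view}, move (N/S/E/W): ").strip().upper()

    def human_global_policy(local_view, global_view=None, messages=None):
        # Conditions 2 and 3: the human also sees the whole grid and, optionally,
        # messages broadcast by other human "turtles".
        print("global view:", global_view, "messages:", messages)
        return input("move (N/S/E/W): ").strip().upper()

    def run(policy, turtles, steps=10):
        positions = {t: (random.randrange(SIZE), random.randrange(SIZE)) for t in turtles}
        messages = []
        for _ in range(steps):
            for t in turtles:
                x, y = positions[t]
                local = [p for p in positions.values()
                         if abs(p[0] - x) <= 1 and abs(p[1] - y) <= 1]
                choice = policy(local, global_view=sorted(positions.values()),
                                messages=messages)
                dx, dy = DIRECTIONS.get(choice, (0, 0))
                positions[t] = ((x + dx) % SIZE, (y + dy) % SIZE)
        return positions

    # The scripted baseline runs unattended; the human conditions would swap in a policy.
    print(run(rule_based_policy, turtles=list(range(5))))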
This is an elaborate strategy for investigating something that virtual worlds already provide: rules-constrained, agent-based environments where the plasticity and inventiveness of actual human intentions, beliefs and desires lie within each agent. New artificial society projects like New Ties aim to add a density of agent-to-agent interactions and communications which has long been advocated by researchers, but virtual worlds have long since achieved this goal, messily and accidentally. Time after time, human players have discovered hidden or accidental properties of the physical world in MUDs and MMOGs. They have invented novel social instruments and institutions. They have given themselves new goals and purposes and changed the meaning of play. Even though both virtual worlds researchers and consumers often adopt a world-weary, seen-it-all perspective on the object of their interest, the capacity of existing virtual worlds to surprise, to produce novel behavior and patterns, remains profound.
Both fields deal with some similar concepts, particularly emergence, in ways that I think are mutually illuminating. But the deepest source of their useful complementarity, and a prime reason I think researchers in both fields would benefit from collaboration and discussion, may be what I have just described. Artificial society simulations map out the possibility space of explanation and causality in uniquely productive ways. The evolution of virtual worlds is guided by agents whose constrained definition within the environment is simple enough to be concretely described but whose animating spirit is the fully unmanageable, mutable and endlessly creative consciousness of human beings themselves. The rigorous exploration of concrete explanation combined with the organicism and messiness of real-world sociality seems a fruitful and potentially potent combination.
———————
1. Robert Axelrod, The Evolution of Cooperation. Basic Books, 1984.
2. Robert Axtell and Joshua Epstein, Growing Artificial Societies. Cambridge, Mass.: MIT Press, 1996; Nigel Gilbert and Jim Doran, eds., Simulating Societies. London: UCL Press, 1994; Nigel Gilbert and Rosaria Conte, eds., Artificial Societies. London: UCL Press, 1995; Journal of Artificial Societies and Social Simulation.
3. See the New Ties project website.
4. Richard Bartle, Designing Virtual Worlds. New York: New Riders Publishing, 2003, pp. 4-7.
5. See Julian Dibbell, My Tiny Life: Crime and Passion in a Virtual World. New York: Henry Holt and Company, 1998.
6. See Bartle, Designing Virtual Worlds.
7. Edward Castronova, “Virtual Worlds: A First-Hand Account of Market and Society on the Cyberian Frontier” (December 2001). CESifo Working Paper Series No. 618
8. Bartle, Designing Virtual Worlds, p. 456.
9. Warren Spector, "The Emerging Emergence", Game Developers Conference, September 2004.
10. Stephen Jay Gould, Wonderful Life: The Burgess Shale and the Nature of History. New York: WW Norton, 1989.
11. See Epstein and Axtell, Growing Artificial Societies, p. 16.
12. Epstein and Axtell, Growing Artificial Societies, p. 164.
13. For some thoughts on these questions, see Michael Agar, "Agents in Living Colors: Towards Emic Agent-Based Models", Journal of Artificial Societies and Social Simulation, 8:1, 2005.
14. See for example Ravi Bhavnani, "Adaptive Agents, Political Institutions and Civic Traditions in Modern Italy", Journal of Artificial Societies and Social Simulation, 6:4, 2003.
15. Allen Rausch, "World of Warcraft Preview Part I", GameSpy, Jan 14, 2004.
16. See InteractiveStory.net.
17. Andrew Leonard, Bots: The Origin of New Species. Penguin Books, 1997, pp. 1-18.
18. For a compelling vision of this perspective that I think is potentially very applicable to virtual worlds research as well as social simulation, see Michael Drennan, "The Human Science of Simulation: A Robust Hermeneutics for Artificial Societies", Journal of Artificial Societies and Social Simulation, 8:1, 2005.
19. Gilbert and Conte, Artificial Societies, p. 3.
20. T.C. Schelling, "On the Ecology of Micromotives", The Public Interest, 25, 1971.
21. Paul Grobstein and Timothy Burke, "Emergence and Contingency/Purpose/Agency: Pilot Project Elaborations".