Irreparable Complexity, Game and World

I’m interested in the kind of complexity that arises through emergent processes, in which relatively simple rules governing the action of autonomous agents within a given environment can give rise to permanent structures or changes within the environment which then change the way that the agents express their rules. Unplanned systems, but often highly functional in their own way.
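To make the idea concrete, here is a minimal toy sketch (everything in it is invented for illustration): agents random-walk on a ring, but each deposits a "trail" wherever it lands and prefers cells where the trail is already strong, so a durable structure accumulates in the environment and then steers the very rule-following that produced it.

```python
import random

def simulate(n_agents=10, size=30, ticks=200, deposit=1.0, bias=0.8, seed=0):
    """Agents random-walk on a ring, laying down a 'trail' wherever they go.
    Because each agent prefers cells with a stronger trail, early accidents
    get reinforced: a durable structure emerges that no single rule describes."""
    rng = random.Random(seed)
    trail = [0.0] * size
    agents = [rng.randrange(size) for _ in range(n_agents)]
    for _ in range(ticks):
        for i, pos in enumerate(agents):
            moves = [(pos + d) % size for d in (-1, 0, 1)]
            if rng.random() < bias:
                # follow the strongest existing trail (ties go to the left move)
                pos = max(moves, key=lambda c: trail[c])
            else:
                pos = rng.choice(moves)  # occasional exploration
            agents[i] = pos
            trail[pos] += deposit  # the environment now shapes future choices
    return trail

trail = simulate()
```

After a few hundred ticks the trail mass is no longer uniform: a handful of cells dominate, even though no agent's rule says anything about building a concentrated path.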

However, there is also complexity by design, in which a system which is consciously intended to have certain restricted purposes or functions becomes more and more elaborate over time, and more and more of its mechanisms become obscure and hidden in their inputs and outputs. I think maybe there are some natural examples of this kind of movement towards baroque complexity. But baroque complexity dies out when it becomes actively dysfunctional within some kind of fitness landscape.

Human systems can achieve this kind of opacity by accident and by intent. Accidental drift towards a system where no one really understands how cause and effect work within the system happens in institutional life all the time. Stakeholders in individual parts or aspects of a system are inclined to expand the influence or size of their mechanism. New forces or powers outside an institution are often accommodated by being incorporated within it. Procedures or heuristics used by an institution in its everyday business sometimes take on a life of their own, especially when they are incorporated into technological infrastructure and automated in some respect. Histories of past practices accumulate and become binding traditions.

Baroque complexity happens by intent when human agents with some degree of authority over an institutional system want to block off direct access or control to some of its inner workings as a safeguard against easy tampering. It also happens when someone with an interest in a particular system believes that secrecy and confusion will instrumentally advance that interest. I think there are quite a few examples of authorities who set out to make it hard for an outsider to understand how a system or process works only to find that in making it hard for outsiders to understand, they’ve made it hard for everyone, that even people in control who thought that secrecy would conceal selectively have found that it conceals indiscriminately.


I’ve found that virtual worlds, massively multiplayer online games (MMOGs), have provided some great examples of this kind of Rube Goldberg complexity-by-design, and have also demonstrated why this phenomenon can be a source of so much trouble: you can end up with systems which are painfully indispensable and permanently dysfunctional, beyond the ability of any agent or interest to repair.

The underlying code of any contemporary large software application is approaching a threshold of complexity where no human agent could ever hope to understand all the possible interactions between the code, the hardware and the user. Even if a programmer can understand why a particular failure or negative event happened, they often cannot hope to understand how to reliably stop it from happening in all possible intersections of code, hardware and user without perturbing some other part of the codebase with unexpected consequences. Pull on one thread, and another may unravel.

This is especially true with virtual worlds, where the size and intricacy of the software is enormous and the practices of users are remarkably diverse and often rivalrous. Developers of a virtual world now start with established code libraries of some kind for managing the visual and interactive components of their product, but they also have to deal with and accommodate histories of user expectation and practice in previous virtual worlds.

Virtual world designers end up with baroque complexity both because their design imperatives drift naturally in that direction and in some cases because they’re trying to veil or protect some of the underlying mechanisms and code of a game from the users. Arguably in some cases, I think they may even be trying to protect themselves from knowing too much about how the world works precisely because they’re trying to keep the processes and procedures that players must follow somewhat opaque, because a lot of virtual world player behavior is about seeking opportunities for arbitrage.

This kind of complexity gets designers into trouble when there is some major aspect of their world whose dysfunctionality is driving players away, where there is some desire to fix or change the game’s systems. Baroque complexity taken too far is irreparable: you can literally get to a point where there is no adjustment of one subsystem that will not cause another subsystem to fail or produce unexpected negative consequences.

A lot of my previous analysis of the early history of the game Star Wars: Galaxies centered on this kind of problem. So much of the underlying design had a kind of Rube Goldberg feel to it, with systems and properties tethered to one another at varying levels of code and design, from how information was stored in the game’s databases to how crafting, the environment and the economy were functionally intermingled in ways that were not always how they were intended to be intermingled. I came to feel that there were many cases where the designers literally had no way out of certain problems, that fixing one aspect of the design would produce problems elsewhere, sometimes problems that could not be anticipated in advance of implementing the change. Characters advanced through developing skills within loosely structured classes, but the game design had almost no way to differentiate between the role or value of some of those classes. At launch, most classes had skills that had little value or that were simply not implemented. Fixing one skill generally broke another, or failed because other skills in other professions that were needed to properly support the fixed skill were not working correctly. The developers of Star Wars: Galaxies eventually came to the conclusion that they would just have to gut out most of the game’s design and start again. They did so in a disastrous manner, but I’m not sure they were wrong about the basic insight.

To some extent, I think the developers of the current virtual world Warhammer Online are in the same kind of pickle. In this case, one of the serious issues in the game’s design is that it is almost impossible for players to understand how to achieve victory for their faction. There are two major factions in the game which fight to control certain parts of the game environment at varying stages of the progression of the player-characters. In the endgame, both factions try to accomplish a series of difficult challenges that will allow them to attack and control the major city of their rival faction. At the moment, it is very hard to tell exactly how these systems work, and I think that is not because the players have yet to figure the system out, but because the interaction of many diverse elements in the game design is so messy that it is impossible to figure it out, possibly even for the designers.

The designers have a vested interest in keeping the system opaque. If players understand very clearly what they need to do, they may discover that the system is easy to exploit, or that one side has a structural advantage. But at some point, making a system appear opaque and making a system actually so difficult to understand that it is genuinely opaque even to its creators are actions which shade into one another.

Far more importantly, the system may simply come to seem mechanical and lacking in adaptability. Once players understand exactly what it is that they must do, how they must do it, and when they must do it, they are likely to find competition to be boring and repetitive. I think this is a major reason that baroque complexity is added by design to many human systems, games and otherwise: because they are systems which need to simulate adaptability, portability, flexibility, which need to mimic the organicism and mutability of life itself. In a way, that’s what successful art in all its forms actually accomplishes: the deliberate creation of mystery, of a work which supersedes the narrow intent of its maker. But a system which requires ongoing use, even the mechanics of an online game, needs a functionality that art does not.


In a limited way, I think the dilemma that some game developers have encountered echoes the vastly more consequential problems of the current global financial system. For both instrumental and accidental reasons, I think the financial system has acquired this same kind of baroque complexity, this same kind of disconnect between the top level that believes it has control over the system’s workings and numerous veiled or incomprehensible mechanisms that have been churning away busily well beyond that control. Like a virtual world whose design has functionally become impossible to easily control, the financial system may now be too complex to repair. Changing one feature may lead to undesirable and unpredictable consequences elsewhere in the system. Pulling on one thread may cause another part of the tapestry to unravel.

And like virtual worlds, there are stakeholders who have a continuing interest in the parts of the Rube Goldberg machine to which they have adapted themselves. In a virtual world that has gone badly wrong, where many players are fleeing its failure, there will always be a few players who have become adroit at using one or more of its broken subsystems. They will be the ones who complain most strenuously at any changes. The emptier the world, the louder their complaints will sound.

Players can leave all their virtual worlds for good: their ludic desires can find other expression, other opportunities. A developer who guts out everything inside of a broken virtual world to replace it with some simpler, cleaner design can hope to bring back all the lost customers, but we know very well that players who quit a virtual world almost never come back. So sometimes you stick with whatever remnant you’ve got left, no matter how dysfunctional the complexities of the design, and ride with them right out to the thinnest margins of profit before closing for good.

The difference between a game and the real world is that the capital which can move away from the broken complexities of the financial system can’t just stop circulating altogether. It needs to go somewhere, wants to go somewhere. The choice may be similar, however. Listen to the actors who’ve adapted to the dysfunctionality of the system, who’ve adapted to live on some cog of the broken machinery, and they won’t want a change. Neither will people who work within some fragment of the system that works pretty well, because they know that a fix to what’s broken has a decent chance to break what works. Gut out the whole system to try and start anew? That’s rarely possible in real life. (So far it’s never really worked with games, either.) Sometimes the best answer is to build a simple, elegant alternative to run alongside the old clanking complexity, to have the System 2.0, and hope that over time, there’s a migration from the old to the new.

This entry was posted in Games and Gaming, Miscellany, Politics.

8 Responses to Irreparable Complexity, Game and World

  1. moldbug says:

    Professor Burke,

    A thoroughly admirable post. As one who knows a thing or two about system software, I can tell you that your aesthetic understanding of irreducible complexity in large software systems, including but not limited to virtual worlds, is quite accurate. (I don’t think any technical expertise is required, for example, to read Foote and Yoder’s Big Ball of Mud essay.)

    It would certainly be interesting to hear how you apply this line of thinking to the massively-multiplayer virtual world known as “Washington.” You know, the one in which one “manipulates procedural outcomes.” I suspect that, as usual, our results are very different.

  2. jedharris says:

    Many fascinating observations here.

    Regarding the theme that complexity inevitably becomes intractable, this has historically been true, but I think has been rendered untrue in many cases by the development of refactoring skills. Linux, for example, has been able to avoid the “heat death” that tended to pull down previous operating systems by aggressive refactoring.

    Maybe the methods for doing this have been discovered / invented recently, or maybe it has always been possible. In the commercial environments where I worked most of my life, there was never the time or the resources to do it, something we (but not those who set priorities) recognized as a serious problem.

    Refactoring engages the sort of craft ethos that leads to spending “excessive” amounts of time on making wooden furniture that’s beautiful as well as functional, etc.

    Regarding the growth of baroque (or even rococo?) complexity to avoid the experience of game play as mechanical: I haven’t played these games, so I really can’t comment from the inside. But my guess is none of the games are “deep” in the sense that Go or one of the major Shakespeare plays is — where out of the same finite material one can keep unfolding new possibilities and then exploring them.

    I don’t see that this is inevitable, or even that some genius is required to create games that are deep in this way. Sim City and the Sims approach being deep, in different ways. I think Will Wright is a very smart guy, but not Shakespeare and probably not someone who could invent Go. I think he wanted to create games that offered these possibilities, and for some reason other game designers didn’t, for reasons that aren’t clear to me. His games were extremely successful, but didn’t spawn a genre.

    There is a problem here: We’ve never created games that are deep and that can still sustain a very large set of players without undergoing periodic crises, and sometimes crashing so hard they take centuries to regain their momentum (think of the middle ages).

    I do think we might be able to build and manage such games though, and if we can then refactoring is the key. The various fixes to the financial system (e.g. SEC, FDIC, etc.) are somewhere between patches and refactoring. The growth of financial reporting standards and norms that severely punish attempts to evade them are pretty much refactoring of business culture.

    However just as with software we’ll always face assertions that we “can’t afford” to constrain innovation, spend time on reporting, accept the risks of being transparent, etc. etc. I have no idea what it will take for us to collectively decide that we can’t afford to skip these forms of organizational hygiene.

    One possibility (and here I am being utopian) is that game culture could lead. If players can take a big enough role in (re)defining the game play, a craft-oriented community will form that balances pragmatism and sophisticated analysis of failures and possible fixes. Over time a workable culture and ethos of social engineering could crystallize.

  3. Cobb says:

    There is a fundamental problem with building efficiencies into systems. There are essentially, with regard to white collar productivity, two schools of thought, Deming and Hammer. Deming for constant quality improvement and Hammer for business process re-engineering.

    The Deming approach requires a certain understanding of the process in which all workers are engaged. From the standpoint of a software designer, it can all be conceptualized in terms of a workflow: Person A has tasks 1, 2, and 3; Person B, with function X, has tasks 4 and 5; task 4 requires task 2, and so on down the line. The willingness of software engineers to absorb the details of workflows in business is highly limited, and the ability of managers of such processes to communicate their complexity and possible efficiencies and chokepoints is also highly limited. Furthermore, the willingness of software engineers and business managers to deal with each other on a long-term basis is slight. Both see their time more profitably spent elsewhere. This is especially costly to the Deming scenario, and so what generally happens is a Hammer implementation.
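    The workflow sketched above can be made concrete as a small dependency graph; a topological sort then yields an order that respects the chokepoints. (The dependency of task 5 on task 4 below is my own invention, standing in for "on down the line".)

```python
from collections import deque

# Hypothetical workflow from the comment: Person A owns tasks 1-3,
# Person B owns tasks 4-5; task 4 depends on task 2, task 5 on task 4.
owner = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B"}
deps = {1: [], 2: [], 3: [], 4: [2], 5: [4]}  # task -> prerequisite tasks

def schedule(deps):
    """Kahn's algorithm: return the tasks in an order that respects
    every dependency, or raise if the workflow contains a cycle."""
    indegree = {t: len(d) for t, d in deps.items()}
    dependents = {t: [] for t in deps}
    for t, d in deps.items():
        for p in d:
            dependents[p].append(t)
    ready = deque(sorted(t for t, n in indegree.items() if n == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(deps):
        raise ValueError("cycle: the workflow cannot be scheduled")
    return order

order = schedule(deps)
```

    A cycle (the `ValueError` branch) is exactly the pathological case Cobb describes: a process whose pieces each seem locally sensible but which cannot be linearized at all.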

    This does damage to the process by obviating some complexity. So you actually have more complex and new systems shoved into a process which is actually dumbed down because of the communications limitations of managers and engineers. Then what ends up happening is that those employees in the process chain make up the difference by partially working with, and partially working around, the new system in ways they have no incentive to explain, and often in ways their managers are unaware of. The outputs of these systems, as well as their inputs, may be manipulated, because those processes to which information technology is applied are rarely end-to-end systems. So there is a complex adaptive dynamism at work, and in the end describing these systems is like describing the flow of molecules of water down a stream. They are chaotic, somewhat random, extraordinarily complex, yet at a macro level comprehensible and predictable.

    Once a system becomes predictable at a macro level it gets frozen into place as the practice of the business with all of the inefficiencies and mysteries locked in.

    And then the boss changes and all of the internal stresses on the system lose particular incentives.

    Systems engineers generally don’t have patience for such matters as the convention of using three employees to do a job that one person with an improved system could do. In the process of conceptualizing the system they are responsible for improving through information technology, they will be introduced to more information about the process than those people we call ‘functional people’, aka staff. This generally causes tension and a reluctance among staffers to be forthcoming about those skills which would be rendered obsolete.

    Furthermore, systems engineers have incentives to build generally applicable systems. Laziness is a good quality; that is to say, re-usable systems which might be applicable to multiple businesses and their processes are always seen as more valuable than one-off custom systems.

    Business managers generally don’t have patience or tolerance for systems which make the operations of their staff transparent. There are often common-sense improvements that can be made which expose them or their superiors to embarrassment.

    In general, business process improvement / re-engineering projects and deliverables benefit most from being transparent and self-explanatory. You want God powers immediately and you want control over as many aspects as possible. However once these aspects of the system are established you want them to respond to the changing nature of the business. There is a fundamental conflict between thoroughness, time to implement and adaptability. Pick two.

    Gaming environments, on the other hand, tend to be hermetic, even in sandboxes. There are certain aspects of the system which give it a specific feel, and these must be maintained in order to keep it interesting and within a genre. The value of a game is in having a play value that keeps someone progressing towards a God level, which must be entertainingly hidden. Emergent behavior in games is entirely desirable. In business systems it is discouraged.

  4. Timothy Burke says:


    One of the things I like about agent-based emergence is that you can end up with something I’ve come to think of as “inefficient efficiencies”. Meaning that if you were to give each agent an energy budget to spend on carrying out its ruleset, you’d find that there would be much more efficient ways to build the structure or system that they end up building. In an emergent system, there is a lot of repetition of action, a lot of excess or junk activity. But on the other hand, sometimes building a system which accomplishes the same task in the most efficient, top-down, designed manner is surprisingly difficult or time-consuming. Moreover, it’s inflexible. Now you have a very efficient system for doing one thing, whereas with the emergent approach, you can have different outcomes if you just change the ruleset of the agents slightly. Maybe not optimal outcomes, or outcomes that have anything to do with your needs, however.
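    The energy-budget comparison can be put in toy form (the setup and numbers here are entirely invented for illustration): a planned sweep of a ring-shaped environment costs the minimum number of moves, while random-walking agents cover the same ground with a great deal of junk movement.

```python
import random

def designed_cost(size):
    """A planned, top-down sweep visits each cell exactly once: minimal energy."""
    return size

def emergent_cost(size, n_agents=3, seed=0):
    """Random-walking agents eventually cover the whole ring, but they spend
    many 'junk' moves revisiting ground that is already covered."""
    rng = random.Random(seed)
    agents = [0] * n_agents          # everyone starts at cell 0
    visited = {0}
    moves = 0
    while len(visited) < size:       # until every cell has been touched
        for i, pos in enumerate(agents):
            pos = (pos + rng.choice((-1, 1))) % size
            agents[i] = pos
            visited.add(pos)
            moves += 1
    return moves
```

    The emergent coverage always costs more than the planned sweep; what the waste buys is flexibility, since a one-line change to the walkers’ rule yields a different pattern of coverage, while the planned sweep only ever does the one thing.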

    Does this resemble (for example) the way that Smith talks about markets and human agency? Yes, certainly. And I don’t think it’s an either/or sort of thing, that either resoundingly endorses laissez-faire minarchism or demonstrates its flaws. The key insight I get, though, is that efficiency per se is very much the wrong objective in any design, at any level, for any purpose. And yet it was the absolute obsession of high-modernist designers and political philosophers (both left and right).

  5. peter55 says:

    Cobb —

    I write as someone with software development experience. Your discussion of business processes in software development only applies to those s/w development projects where the required tasks can be readily defined in advance — i.e., to those projects which are either very simple or, if not simple, have been undertaken before. Since most contemporary software development projects involve activities which are novel, this task-decomposition phase of project planning simply cannot be undertaken accurately or with any great confidence. Even for the design of a new system of the same TYPE as previously completed systems (e.g., when Microsoft writes a new operating system), where the requirements are likely to be articulated accurately, it is usually not possible, even for experienced project planners and developers, to forecast in any detail what tasks will need to be done, in what order, how long these tasks will take to complete, and what other tasks they will impact.

    This failure has almost nothing to do with the motivations or incentives or alleged novelty-seeking personalities of the project planners or their development staffs. It has ALL to do with the fact that large-scale software development of novel applications is one of the most difficult activities known to man. Building large-scale software is NOT like building houses (since clients cannot specify their own requirements accurately, and developers cannot estimate the resources needed to fulfil such requirements accurately), and it is NOT like manufacturing widgets with mass-production processes (since projects are almost always one-off, novel, customized, and must usually cope with unique legacy environments). No good is done anybody by persisting with these inappropriate analogies.

    For more on the challenges and complexities of large-scale software development, I suggest reading the various reports completed in recent years by the British Computer Society, the Royal Academy of Engineering and the UK Department of Trade and Industry (by Cliff and Bullock) on the inherent complexity of large-scale software systems. The last report is here:

  6. Love it. And this isn’t (also) posted to Terra Nova because…? 😉

  7. hestal says:

    I have noticed that there are people who do a good job of designing and programming systems and those who don’t. And the toolkit for developing games is not very good, so it makes the task more difficult than it needs to be. And the “need to mimic the organicism and mutability of life itself,” only adds to the difficulty.

    But there are systems which have to actually operate in the real world and which must react to the “mutability of life itself,” and they have done so with great success. Some of these systems have been in operation for more than forty years and are accessed by millions of people each and every day. The changes in the details of the system have increased in complexity and volume and are still increasing. But the toolkits used to develop these systems were superior to those game designers are forced to use, and facing real life instead of imitating it, in my opinion, increases the intensity of the designer and programmer and unleashes levels of creativity that are absent from computer games. Designing a large-scale, complex system to support a national institution that affects the daily lives of millions of people in very important ways is a real kick and far exceeds the enjoyment one gets from developing games. Perhaps then the situation is evolutionary. Perhaps the designers and programmers who can do the hard and rewarding work of serving the real world seek out that challenge, leaving those who can’t to find work in design and programming of games.

    But then I am probably wrong.

  8. Fats Durston says:

    Have you been following the recent (and forthcoming) changes in the CoX economy lately, by chance?
