
Article Information

  • Title: Simple words about the new science of complexity; a talk with George Cowan
  • Author: Dan Tyler
  • Journal: Whole Earth: access to tools, ideas, and practices
  • Print ISSN: 1097-5268
  • Year: 1989
  • Issue: Summer 1989
  • Publisher: Point Foundation

Simple words about the new science of complexity; a talk with George Cowan

Dan Tyler


There has to be wider recognition of the fact that science has reached the point where it can no longer simplify problems so that they don't resemble the real problem. As systems become complex they develop properties of their own. We have yet to successfully apply the reductionist technique in explaining complex systems in terms of their simpler subsystems. That's a profound and ongoing debate in science between the reductionist view, which says you can always find a shorter way to describe properties in the system in terms of its different parts, and the people who say you're going to run out of that capability the moment you challenge it against the real world, against life systems and social systems and so forth.

Nobody has yet been able to do a really good job of taking an even modestly complex system and describing it in terms of its subsystems and showing how all its properties arise from the interaction of all its parts.

Ultimately, isn't that what the Santa Fe Institute is trying to do?

Well, it would like to define the general elements of the science of complexity, which would permit you to do that, at least to some extent. But it can't assert that one side is correct because the debate is ongoing and evenly matched with people who insist that some part of the properties of a complex system are in effect controlled by something external to the boundaries of the system. It's a religious issue because when you assert that a complex system will fail to be totally autonomous, that there will be some property derived from something external to the system, people will say you're talking about God or some aspect of nature that imposes order on complexity. Or does all the order arise internally? That's a very profound debate, one which I would like to avoid.

I was just about to ask you not to avoid it.

Well, as a scientist my only choice is to say that we'll press for reductionism as far as it will go. But I would be agnostic to the extent that I will not be too surprised if I find that I've got to adopt Johnny von Neumann's pragmatic description of complexity. He avoids saying where the properties come from, and says that a truly complex system is best characterized by describing the system's properties. That's a shorter message than describing the subsystems, but you don't find any magic simplicity. Herb Simon calls it pragmatic holism.

Doesn't that undo the supposition that you can find laws that link all complex systems together?

I would like to believe that we'll find a general set of elements that indicate why complex systems behave the way they do, although we may never be able to use them in a predictively useful fashion. In fact, complex systems generally have many possible states, all of which look stable on some time scale. But they're all always metastable, far from equilibrium. If they are in true equilibrium, they're no longer complex, they're stable and probably have elegantly simple properties, and in fact they're dead. Complex systems usually imply a living system, something that's dynamic.

If you treat a complex system as many economists do, as a set of equilibria, it's no longer dynamic and it's of less interest. The Santa Fe Institute is attempting to bring an uncommon paradigm to economics. Economists tend to use the physical-science paradigm and look for equilibria on some surface, possibly at some lowest, most stable state. But I think it's obvious that economics operates out of equilibrium. You shouldn't look for stable states, you should look for transitions and for the laws that govern them.

Is this the first time economics has been approached from this direction?

No. Economic texts usually pay attention to nonlinear dynamics, particularly in econometrics. You'll find a chapter in the book [The Economy As An Evolving Complex System, 1988, Santa Fe Institute] that says that this may be a more realistic way of looking at economics, but then it gives up rather quickly because it gets into mathematics that most people don't deal with, except in a rather elementary way. And the consequences of nonlinear dynamics in economics are so awesome that nobody can pursue them very far.
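The "awesome consequences" Cowan alludes to can be seen in the simplest textbook example of nonlinear dynamics, the logistic map. This sketch is illustrative only and not from the interview: one parameter decides whether the same rule settles into a neat equilibrium or becomes chaotic, which is why equilibrium-based analysis can break down so badly.

```python
# Logistic map x -> r * x * (1 - x): a one-line nonlinear rule whose
# behavior changes qualitatively with the parameter r.
def logistic_trajectory(r, x0=0.5, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.5: the system relaxes to a stable equilibrium at 1 - 1/r = 0.6,
# the kind of fixed point equilibrium economics traditionally looks for.
settled = logistic_trajectory(2.5)[-1]

# r = 4.0: the same rule is chaotic; two nearly identical starting
# points diverge, so long-range prediction is hopeless in practice.
a = logistic_trajectory(4.0, x0=0.2)[-1]
b = logistic_trajectory(4.0, x0=0.2000001)[-1]
```

Nothing about the chaotic regime is random; it is fully deterministic, yet a measurement error of one part in ten million in the initial state destroys the forecast, which matches Cowan's point that the laws may exist without being predictively useful.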

The basic premise in neoclassical economics is that you will achieve an equilibrium. But change the time scale and that's not necessarily the case. You achieve punctuation points, transitions and hesitations. If you want to talk about it from minute to minute - and that's important in the stock market - it may have long plateaus. Of course, by sensing those plateaus you might make a lot of money. But on the other hand, changing the time scale, if you're a long-term trader, it may look totally devoid of any plateaus. So, in fact, the measure of complexity must have an element of time built into it. You always have hierarchy in the reductionist scheme. I suppose in any scheme you have to talk about complexity arising from simple parts that aggregate into more complex parts, which in turn aggregate into even more complex parts, and so forth. The time scale changes at every level.

That tendency to aggregate applies to any complex system, from subatomic phenomena on up?

In physical science, you start with quarks, which may be made up of simpler parts. With quarks you build protons and neutrons and so forth. And when you go from the nuclear interactions we're most familiar with, you go from let's say 10⁻¹⁰ seconds to atoms and orbital electrons interacting with each other, to a time scale of maybe 10⁻¹⁰ or 10⁻¹¹ seconds. Then when those atoms interact with one another and become complex proteins, you start to talk about biological reactions and change the time scale again, usually by several orders of magnitude. In mental processes, a typical relaxation time is 10⁻¹ seconds.

When you aggregate all of these things into cells and organisms the time scale changes again. Now you have circulation and other mechanisms. Even neural impulses from the brain to the toe - these things can change again by orders of magnitude. You have action at a distance, so to speak, both mechanical and electrical, conveyed in ways that take time. And then you have social structures that aggregate living species - for people, the aggregations, of course, are families that aggregate into communities, communities aggregate into nations, and so forth. In the economic structure you have offices and branches and corporations and working aggregations of corporations and global economies. The whole question of evolution deals with still another time scale.

The relationship of the time scales is fundamental to how a complex system works?

I suppose that one way to talk about complexity is to talk about characteristic relaxation times. At every level of complexity things interact with one another on about the same time scale. But they also see things happening below them, which essentially bias the system. They look like noise; they average out because they're operating so fast that they look like a DC signal. Above them, they see things that are operating so much more slowly than they do that they serve as parameters. So on every level in complexity you're living on a common time scale horizontally, looking at something that operates much faster below you and at something that operates much slower above you. A lot of that has to do with the size of the system and the length of time it takes to convey information, the bit rate, or the entropy if you know how to measure it.

Something important that's happening now is that modern technology is screwing it up by speeding up the bit rate. Information is being transmitted from large units to smaller units, and vice versa, much faster than ever before - TV and 24-hour global satellite communication and visual information. We're saturating social organizations that are geared to processing information at a much slower rate. We haven't invented the structures that can manage that very well.

How has that increased bit rate affected science?

It's permitted us to begin developing the science of complexity. You need large computers to do numerical simulations of high-dimensional problems. They don't have elegant solutions, and the larger the computer, the larger the ability to process information and acquire the data bases you need to do a numerical simulation that resembles reality. We still don't really know how to do it, but it's going to happen. Twenty years from now people will have really sophisticated means for handling large data bases and for letting the parts of highly complex systems interact with one another in ways that may resemble what actually happens. I suspect if we find general elements of a science of complexity, they'll emphasize feedback loops set up among the various parts, both amplifying and damping. Metastability can occur when these tend to balance out.

One of the most obvious loops in economics is memory. People remember history and anticipate the future. If they feel that the future is going to be like the past, that's a negative feedback loop. They tend to retain the past history. If they feel for any reason the future is going to be different from the past, they behave in such a way as to affect the present. And that's positive feedback. They pop out of whatever basin or plateau they may be on. They may panic, they may become excessively greedy. If they become greedy you may have a boom, if they become panicky you have a crash.
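Cowan's two loops can be sketched in a toy price model. This is a minimal illustration of the idea, not anything from the interview: when traders push the price back toward a remembered anchor, the loop is damping (negative feedback) and the price sits on a plateau; when they extrapolate the most recent move, the loop is amplifying (positive feedback) and the price pops out of its basin into a boom.

```python
# Toy price dynamics contrasting negative feedback (memory of an
# anchor price) with positive feedback (trend chasing).
def simulate(feedback, steps=100, p0=100.0, anchor=100.0):
    """feedback < 0: each step corrects a fraction of the gap to the
    remembered anchor (damping). feedback > 0: each step amplifies the
    most recent price change (trend following)."""
    prices = [p0, p0 + 1.0]  # a small initial disturbance
    for _ in range(steps):
        if feedback < 0:
            # negative feedback: pull back toward the anchor
            nxt = prices[-1] + feedback * (prices[-1] - anchor)
        else:
            # positive feedback: chase and amplify the last change
            change = prices[-1] - prices[-2]
            nxt = prices[-1] + (1 + feedback) * change
        prices.append(nxt)
    return prices

stable = simulate(-0.5)  # disturbance dies out: a plateau near 100
boom = simulate(0.1)     # each move amplified: a runaway "boom"
```

The same structure with panicky sign conventions produces a crash instead of a boom; the point is only that which loop dominates, damping or amplifying, decides whether the system stays metastable or pops out of its basin.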

How do you decide which loops to examine when you study a complex system?

When we talk about high levels of complexity we arbitrarily draw the boundaries. There's no natural boundary. Where you draw the boundaries of the system depends on how simple a view you want to take. You can keep enlarging the boundaries all the time. After drawing the boundaries, we usually say, well, we can't handle that, so let's constrain the system some more and see whether we can understand a simpler part of it. But when you reduce the boundaries, at least you know who your neighbors are. And if you start with a simple system you don't even know that. If you examine a larger system than the one you're eventually going to study, at least you know the neighborhood and you can hook up to it in an average way. But you shouldn't remain ignorant of the larger neighborhood, and you particularly shouldn't make a virtue of your ignorance of it, which is what a lot of people do.

Constraining the big picture to look at connections between particular neighbors seems like it would be tricky.

Well, that's why people don't generally do it well. We're talking about a whole new science and you don't achieve it by contemplating your navel overnight. What you try to do is define its general content, and then a lot of good people start to study it. If over the next fifty or hundred years it develops into something useful, it will be because of one hell of a lot of work. If anything I've said implies, "Eureka, we're going to understand complexity," that isn't so. What we're trying to do is establish complexity as a new science worth studying.

You've talked a lot about economics. Is that the most fruitful or promising topic for the study of complexity?

Well, I think it's possibly overly ambitious because if we stuck to the things that we know the most about, like fluid dynamics or cellular automata with fluctuations and errors or interactions with a stochastic external environment that pumps energy into them, they would represent the shortest extrapolations from what we think we know to what we don't know. And we would stop with simple protein dynamics, which is a reasonably well-thought-out, accepted field of research now, but not one people necessarily know much about.

And I suppose we should stop there. But the temptation to see in these life processes analogies with economic and social processes is very great. In fact once you start studying that kind of complexity you begin to see resemblances to larger organizations. And so very good economists are coming here, and we've begun to speculate together about economic processes.

There's a world out there which responds to the notion of new ideas about economics more quickly than to any of the other notions we're kicking around. So there are people who are prepared to support this new effort, and the Institute has moved more rapidly in that direction than caution might have indicated. It's a region in which we're all profoundly ignorant but one in which the payoffs could be big.

Also I have to admit I started out to be an economist so I have certain biases. As an undergraduate I paid more attention to those courses and less attention to physical science. I just found them in some ways more interesting, and I'm still kind of a closet economist.

What's the relationship between complexity and the age-old attempts in physics to explain existence simply?

It's very interesting. If quantum fluctuations began the universe, it seems that rather than going to real simplicity, you re-enter a realm of complexity. I don't know whether you read the article by a Russian named Andrei Linde in Phys. Rev. last September called "Inflationary Cosmology." He says that there are many moments of beginning. The process can create a very large number of different universes. If, in fact, the expanding universe in that first moment can bifurcate many times and move into some part of an arbitrary, possibly infinite phase-space, you're back to the same philosophical question - what is the deep truth, or is there any? So that's why I say I'm agnostic. It's not clear to me you can ever get back to a deep simplicity.

That reminds me of an ancient Hindu idea that the universe itself pulses, that it awakens and goes to sleep, and with every awakening there are new energies, forces, and natural laws, completely different from those of the previous awake state.

That's a theme that constantly recurs in philosophy. The major rationale of grand unification is that indeed you will grab the brass ring of simplicity. If that escapes people ... in the end I don't know whether we'll find fundamental truth or have to admit that reality is a series of endless loops, that there's no bottom level in these hierarchic levels of complexity.

Meanwhile the physical scientist holds to the faith, that simple components will be found at the bottom. And I don't think that's so bad. I mean, everybody needs a working hypothesis.

Several decades ago, the perennially neglected philosophy of holism was polished up and dubbed cybernetics. Insights from cybernetics eventually spawned this magazine. Yet, as a legitimate science, cybernetics was quietly abandoned when its speculations withered of dry-rot. They were untestable.

The problem was: systems large enough to supply answers were indispensable enough to avoid meddling with. The risks of large-scale messing around with ecology, or the economy, or the climate, or a person's mind, reduced empirical knowledge of big indivisible systems to anecdotal observations. When a whole system failed it was outright disaster rather than the stuff of advances. And there the study languished, until computers.

Computer science gave cybernetics a new life and name: complexity theory. Computer circuitry vanished into ever-smaller, self-contained, tweakable systems. As chips shrank in miniaturization, software code ballooned to cybernetic size - acquiring a will of its own. A computer's inner architecture soon could mirror the full-sized complications of, say, genetic algorithms, or financial markets. The rejuvenation of studying whole systems lay in the power of the computer to adjust its complexity to imitate other complexities. Software programs could be told how to model small primitive ecologies, crude neural networks, and elemental flows of water and air. Not enough to believe in, but enough to actually experiment with. You could have all the failures you wanted.

Like cybernetics, most of the talk about complexity has been just that, talk. About new paradigms. About holistic solutions. The one place that has actually put money where its mind is has been the Santa Fe Institute in New Mexico. For the past several years it has sponsored the most electrifying scientific programs in the country. Participating scientists come away with more than their minds changed; they change careers. The Institute asks wonderfully big questions, spends adequately big money, invites serious working scientists with wildly divergent interests (Nobel Laureates galore!), mixes them together, and listens. And then publishes in earnest.

The Santa Fe Institute views its task as understanding complex adaptive systems. Adaptive and complex as in: The First Artificial Life Conference; Evolutionary Paths of the Global Economy; and Complex Systems Summer School, some samples of recent workshops. I don't think they've held a lame gathering yet. The Bulletin of the Santa Fe Institute (free from The Santa Fe Institute, 1120 Canyon Road, Santa Fe, NM 87501) is a consistently prime source of new notions in the field of non-linear anything.

In a recent issue of the Bulletin, George Cowan, President of the Institute and former member of the White House Science Council, was interviewed by Dan Tyler, a writer in the International Technology Division at Los Alamos National Laboratory, on the genesis and guiding concepts of the Institute. The following excerpt from their conversation tackles one version of complexity - the complex problem of putting it simply. -Kevin Kelly

COPYRIGHT 1989 Point Foundation
COPYRIGHT 2004 Gale Group
