Original transcript and video available at humanenergy.io.
Okay, so Francis, I often reflect upon how much has happened in the history of ideas since the second half of the twentieth century which couldn’t have happened before. I think that’s true for evolutionary biology (my specialty) and also for complexity (which is your specialty), which, among other things, required the advent of widespread computing before we had the tools to even think about it or model it. And because you go way back on this topic, I’d love for you to begin with just your own personal story as to how you became interested in complex systems, against the background of really historic developments in ideas. In some ways, I think, the Enlightenment is still in progress. And so please just tell us your story against the background of these big ideas.
Well, when I was in high school, actually, I already started thinking about these things, and I had even formulated what I called the generalized principle of natural selection, which is that evolution through variation and selection is not limited to biological species, but that any system—physical, psychological, cultural—undergoes variation and selection. So I had this very broad range of interests, but I didn’t know very well what to study. And I settled on studying physics basically for the reason that it was kind of the most difficult discipline. So I thought: if I can do this at the university, I can tackle the rest on my own. And that is more or less what happened. I studied theoretical physics, which is pretty hard, but then afterwards, when I was interested in psychology or in biology, I could do it on my own.
But at the same time when I started studying physics, I did it with a kind of critical attitude, in the sense that I was from the beginning skeptical about the traditional reductionist, deterministic, mechanistic worldview—what I call the Newtonian worldview. At that moment in physics there wasn’t yet anything like complexity science, so I started looking outside of physics. And what I discovered that was closest to the things I knew was systems theory and cybernetics. In systems theory and cybernetics, there was an attempt to make a more mathematical theory using some methods from physics, but without the assumption that everything is reducible to particles moving in space, as the traditional physical worldview holds. So it looked at systems—and these systems could be biological systems, social systems, cultural systems—that were connected via inputs and outputs (let’s say the systems theory point of view), and then cybernetics added to that the idea that these systems could be goal-directed, that they were kind of like agencies; that they weren’t just passively receiving some input and transforming it into an output, that they were trying to reach certain goals.
So that was a framework that, to me, sounded very natural, in which I recognized myself. And I also came into contact with a couple of colleagues working in that area, who at that time had been in cybernetics longer than I had. In particular I came in contact with two guys, Valentin Turchin and Cliff Joslyn, who had started something—it was in its very beginnings—that they called the Principia Cybernetica Project. So cybernetics and systems theory have all these great ideas. In principle, these great ideas allow us to unify science—at least that was the idea of general systems theory: it should be a theory that applies to any kind of system. But in practice the domain is anything but unified. It’s kind of a mishmash of all kinds of different approaches, different kinds of theories.
So we wanted to—it was of course very ambitious—it was a little bit like the Principia Mathematica of Russell and Whitehead, where they tried to create a foundation for mathematics. We wanted to create a foundation for cybernetics. We called this the Principia Cybernetica Project, and let’s say we made quite good progress, in the sense that we made a website where we formulated the basic concepts, the basic principles, and organized them a little bit like a semantic network. And it was one of the first serious websites in the world—this was 1993, actually. So it had a lot of influence. A lot of people learned about these things from our website. But that’s not yet what is conventionally known as complexity theory.
So systems theory and cybernetics, in principle, are about complex systems. But in practice, without computers, the models you could make mathematically had to be pretty simple. So it was in the 1980s that, with the Santa Fe Institute, people started to model complex systems with computers. And that made me also see that the traditional cybernetics and systems paradigm lacked something. The traditional cybernetics and systems paradigm is kind of: you have a well-defined system, and you have a control mechanism on top of that system. And on top of that control mechanism you can have another control mechanism, so you build hierarchies of systems within systems. All very fine. But the assumption is that the system has a clear boundary; that there is this one system, and that system has particular subsystems.
But what we now saw in this new paradigm—which is best called complex adaptive systems, and I think the person most responsible for that is John Holland, who was also associated with the Santa Fe Institute—is that systems consist of agents, and the agents themselves are pretty simple. But if you put lots of agents together, they exhibit all kinds of very complex behavior. There is self-organization, there are positive feedbacks, there are negative feedbacks, there are evolutionary processes going on. And that led to domains like artificial life, where you try to simulate a whole ecosystem of evolving organisms. So it was a very interesting approach to complexity that, I would say, complemented the one of cybernetics and general systems theory.
Let me come in here. This is all great. I just want to drill down on a certain point before you continue, which is the fact that formal analytical models are both enabling and constraining. On the one hand, of course, they’re so much more precise than verbal modeling. So they’re a huge advance over verbal modeling, but they have to make so many simplifying assumptions that they end up being a de facto denial of complexity. When I talk about this, I point to the difference between the two-body problem and the three-body problem in physics—you know, you hit this complexity wall beyond which formal analytical models can’t go. So this embrace of formal mathematical models—for example, in the whole field of economics, where if you can’t model it formally, then it just doesn’t have much of an impact—that’s the sense in which computer simulation models were able to go beyond the complexity wall. And yet they were definitely frowned upon. James Gleick (in Chaos, I think) made this point: that there is such prestige and hubris associated with formal proofs that computer simulation models were initially looked down upon as a kind of second-class citizen. And then finally, now, we’ve realized that there’s no alternative. So that’s my interpretation. I just wonder if you agree or would like to elaborate upon the account I just gave.
Well, I agree, but I want to elaborate—which is that the computer simulations themselves are also reductionistic. So one of the reasons (as I said) why I started to do physics, but with a critical stance, was because I kind of intuitively knew that these reductionistic, deterministic models weren’t right. They couldn’t describe something like, let’s say, living organisms or societies. And even in physics itself—I mean, a large part of the history of physics in the twentieth century consists of showing that all these deterministic assumptions we thought we had at the end of the nineteenth century don’t work. Quantum mechanics with the uncertainty principle. Relativity theory saying that even space and time are not absolute. Chaos theory, of course, showing that with the slightest non-linearity in your system, all kinds of things become, in practice, unpredictable—like the three-body problem you mentioned. What physics basically has been doing, in a way, was to prove that it couldn’t solve a lot of problems. And that went together with other impossibility proofs, like the famous Gödel theorem, or the halting problem in computer science. So the mathematical models themselves kind of showed that they were incomplete. Computer simulation, then, was possibly an alternative. But for the computer simulations you also have to make a model, which means you have to make a simplification.
Now, the great innovation for me of this domain of complex adaptive systems—I prefer that term to complexity science, because in complexity science different people look at different aspects, and some people will speak about chaos theory as complexity science, while chaos theory is not really complexity science in my view. So complex adaptive systems—the greatest insight for me was that we can model things by having these agents. Agents are kind of little bits of programs that are programmed with what is called a condition-action rule: “in this condition, the agent does that; in this condition, it does something else.” And now you just throw a whole bunch of agents together. Each agent perceives some condition and performs an action, and that action changes the conditions not only for itself but for all the other agents. So you now get all these kinds of direct and indirect interactions between the agents. And that leads you to all kinds of very interesting results.
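To make this concrete, here is a minimal Python sketch of condition-action agents sharing an environment. All the names and rules (a food counter, an “eater,” a “grower”) are invented for illustration; this is not any particular published model.

```python
# A minimal sketch of condition-action agents in a shared environment.
# The food counter, "eater" and "grower" are invented for illustration.

def make_agent(rules):
    """rules: list of (condition, action) pairs, checked in order."""
    def step(env):
        for condition, action in rules:
            if condition(env):
                action(env)   # the action mutates the shared environment
                return
    return step

env = {"food": 5}             # the shared environment: one resource counter

# One agent eats food while any is left; the other replenishes it when low.
eater = make_agent([(lambda e: e["food"] > 0,
                     lambda e: e.update(food=e["food"] - 1))])
grower = make_agent([(lambda e: e["food"] < 3,
                      lambda e: e.update(food=e["food"] + 2))])

for _ in range(10):           # interleave the agents: each one's action
    eater(env)                # changes the conditions the other perceives
    grower(env)

print(env["food"])            # → 3
```

Neither agent knows about the other; their interaction arises entirely through the state of the shared environment, which is the point being made above.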
I already gave the example of ecosystems. In economics, of course, the agents are people who buy and sell things: depending on what one agent sells, another one may buy; prices may increase; and you may have all these kinds of non-linear effects on the stock exchange. That insight is a very interesting paradigm that illuminates a lot. But of course it depends on: what do you choose as your elements? What are your agents? And what do you program your agents to do? And what people typically do is: they have some kind of simplistic view of what the system is. They program their agents. They see that maybe the simulation doesn’t do what they want, so they play a little with the rules until they get something that looks nice. And then they say, “Well, look: we have proven that this and this will happen.” But actually they have just proven that this simulation of these agents with these rules does that thing.
Let me take my turn, Francis. I’m really enjoying this conversation, and so here is my turn. I’d like to spend more time on the concept of complex adaptive systems and actually distinguish a number of varieties. Let’s begin with the Game of Life; Conway’s famous Game of Life. In this case, they’re actually not agents, they’re just positions on a grid which can exist in an “on” or an “off” state. And then we put in simple rules for that on and off state, basically based on neighboring cells and their on and off states. So you put in those simple rules, and then out comes this amazing diversity of outcomes. So there’s your point about agents with simple rules producing this amazing diversity of outcomes. But there’s nothing adaptive about those rules. I mean, those aren’t evolutionary rules. These agents aren’t organisms in any sense of the word. So we can make the point about agents with simple rules of behavior leading to all of these amazing emergent properties. But we should not use the word “adaptive” yet, because there’s nothing about those agents that’s adaptive. That’s just an arbitrary set of rules that were applied by the modeler.
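For reference, the rules just described fit in a few lines of Python. This is a standard sparse-set implementation of Conway’s Game of Life; the blinker example is our own illustrative choice.

```python
from collections import Counter

# Conway's Game of Life on a sparse grid: cells are positions, and each
# cell's next state depends only on its live-neighbour count (a live cell
# with 2 or 3 live neighbours survives; a dead cell with exactly 3 is born).

def step(live):
    """live: set of (x, y) coordinates of 'on' cells; returns the next set."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker) == {(1, 0), (1, 1), (1, 2)})   # → True
print(step(step(blinker)) == blinker)              # → True
```

Nothing here adapts: the update rule is fixed once and for all, which is exactly the distinction being drawn in the conversation.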
It depends. I mean, the Game of Life is, of course, the most deterministic kind of system. A lot of simulations have been using so-called cellular automata, of which the Game of Life is one variety. And I am personally very skeptical about whether cellular automata can teach us much about the real world, because they are so simple and deterministic that, really, the only thing they prove is that, even with simple deterministic rules, you can get very complicated things. That was an important insight. But once you’re there, you need to go further.
But once you start to speak about agents, you can have different types of agents. Of course you can have adaptive agents—that means agents that, for example, undergo variation and selection. That’s the classical work of John Holland. He has rules that are formalized as strings of ones and zeroes, and then he randomly changes some of these ones and zeroes, and he has a selection criterion, and then the agents with the right rules may survive and the other ones get eliminated. That’s one way to make them adaptive. But actually, you can have adaptive processes even when your agents themselves are not adaptive. And that—for me, personally—is the most inspiring idea: you put a number of agents in an environment, the agents can change the state of the environment, they can do things. For example, suppose you have an environment with different sources of food in there, and an agent can either eat a piece of food or transform it into a different piece of food or produce a piece of food. The adaptivity lies in the fact that each agent follows its own rules deterministically, but it doesn’t know what the other agents are doing. And the other agents have their own deterministic rules. But because they are in the same environment, what the one does affects the others, but you don’t know a priori how that will happen. And I think the novelty comes from the fact that you have independent agents, each of which is deterministic, but their interaction is not deterministic. Their interaction has to adapt to what all the other agents are doing. So it’s a kind of model of coevolution. Each agent does whatever it’s programmed to do, but to do that, it has to take into account what the other agents do. And that changes all the time. So you actually get a very adaptive system. That is why I think the term complex adaptive system is not badly chosen.
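The variation-selection scheme Holland pioneered can be sketched roughly as follows. The fixed target string, the mutation rate, and the truncation selection below are illustrative assumptions, not Holland’s actual classifier systems.

```python
import random

# A toy sketch of variation and selection over rules coded as bit strings:
# mutation flips random bits, and selection keeps the agents whose strings
# best match a fixed criterion. TARGET and all numbers are invented.

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]          # the selection criterion

def fitness(rule):
    """How many bits of the rule match the criterion."""
    return sum(a == b for a, b in zip(rule, TARGET))

def mutate(rule, rate=0.1):
    """Randomly flip some of the ones and zeroes."""
    return [bit ^ 1 if random.random() < rate else bit for bit in rule]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]             # the fitter half survives
    population = survivors + [mutate(r) for r in survivors]  # variation

print(fitness(population[0]))               # converges towards 8
```

The agents here are adaptive in Holland’s sense: their rules themselves change under variation and selection, in contrast with the fixed-rule agents discussed next.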
There are two meanings: one is that the agents within the system are following adaptive strategies, and the other is that the system as a whole is adaptive in some sense. And so I’d like to get your opinion on that distinction. So if we’re going to use the word “adaptive”—basically, adaptation can exist at two levels: the agents within the system or the system as a whole. And in any practical sense, we’re trying to create systems that are adaptive as whole systems. That’s any kind of policy objective: to create whole systems that are adaptive—economic systems, environmental systems, social systems. So how do we get from systems composed of agents following their own adaptive strategies to systems that are adaptive as whole systems?
Well, as I said, you don’t even need to have adaptive agents to have an adaptive system, because the adaptive system is the whole of all the agents evolving and mutually adapting. And one mechanism that helps this adaptation of the whole—and this is, I think, some of the work I’m most proud of; I didn’t invent the concept, but I have written several papers about it—is stigmergy. What is stigmergy? Stigmergy means that an agent does something that leaves a trace in the environment. That environment is shared with the other agents. These other agents can, if you wish, “read” the traces left by previous agents, and then they will build on that, they will react to that.
In another conversation we spoke about Wikipedia working according to this principle. The different agents here are people who each write a little bit of text in Wikipedia. These traces remain in Wikipedia. They are public, and they incite other people to add something to them. But the same applies at all levels, even with very simple, stupid agents. The classic example is the ants that leave pheromone traces. And these pheromone traces then tell other ants what to do. Or the even simpler case—the one that gave stigmergy its name—is the termite hill. Termites, when they build a cathedral-like termite hill, just start by dropping a little bit of mud here or there. The mud contains pheromones which attract other termites, and thus termites tend to add their mud where there is already the most mud. So you get a positive feedback: the higher a pile of mud is, the more termites it attracts. And it’s, again, this principle of stigmergy: the trace left by the activity of one agent, one termite, incites other agents to add to it. Adding in this case is pretty obvious; it’s just doing more of the same. But in the case of Wikipedia the adding means changing, correcting, whatever it is. The trace in a shared environment now provides a kind of template that is constantly being changed by the other agents. And like that we have a self-organizing system that can be very adaptive and very coordinated even though the agents themselves are stupid.
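A toy simulation can show this positive feedback. The sites, the deposit rule, and all the numbers below are invented for illustration: each “termite” adds to a pile with probability proportional to the pile’s current size, so piles that get ahead early tend to attract a disproportionate share of later deposits.

```python
import random

# A toy stigmergy simulation: each "termite" deposits mud at one of five
# sites, chosen with probability proportional to how much mud is already
# there. The trace left by earlier agents recruits later ones.

random.seed(1)
piles = [1, 1, 1, 1, 1]                  # initial deposits at five sites

for _ in range(200):                     # 200 termite visits
    r = random.uniform(0, sum(piles))    # pick a site weighted by pile size
    for i, height in enumerate(piles):
        r -= height
        if r <= 0:
            piles[i] += 1                # deposit: the trace grows
            break

print(sum(piles), sorted(piles))         # 205 units of mud, unevenly piled
```

No termite communicates with any other directly; all the coordination passes through the shared environment, which is the defining feature of stigmergy.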
Yeah. And if we were to understand the mechanisms of development of a single organism in enough detail, we’d find something just like that. And this brings up the concept of the organism, which is really a central concept and metaphor. Because with an organism we have something which is manifestly adaptive as a whole system, and in which everything that makes it up is, of course, contributing to that adaptive whole, but in a way that is very distributed. As you’ve written, even the brain is very distributed. There’s no homunculus within the brain. It’s all distributed. And what we’re working towards—especially with the Human Energy Project—is a concept of the noösphere, and ultimately the whole Earth as some kind of superorganism. But before we can talk about superorganisms, we really should get our idea straight about organisms. That’s our master concept. So let’s talk about organisms. And you just said this whole system can be adaptive without the agents being adaptive. Is that true for an organism? Can you make that statement for an organism? In what sense are the elements that make up an organism (the cells or the organs) not adaptive in their contribution to the functioning of the whole organism?
Well, of course, organisms as we know them in biology are the result of many different evolutionary transitions, where each time you started with a system that had some degree of adaptivity. So I’m not going to claim that the multicellular organism is only adaptive at the multicellular level. The cells themselves obviously are also adaptive. But let’s make a caricature or simplification: let’s assume that each cell is purely programmed by its DNA. So it has a DNA program that tells it: if these molecules enter the cell, then you produce these other molecules to deal with them. In principle, that should be sufficient for the whole of these cells to coordinate. And in my view the critical term here is coordination. Each agent is capable of adaptive action. The agent itself does not need to be adaptive in the sense that it changes its rules, but the agent is adaptive in the sense that if something changes in the environment, the agent will perform some action that is appropriate to the environment.
So the problem is: if you have lots of agents that each perform their own actions, how do you coordinate them? And I distinguish two forms of coordination. The one that I already described is the stigmergic one. The stigmergic one is just: you have this common medium, you drop signals in that medium, everybody can read them, and whoever wants to react to what you have put in the medium can do so. The equivalent in the multicellular organism would be hormones. Like, a cell has some kind of problem; it is programmed, in this state of stress, to release a certain type of hormone; that hormone goes into the bloodstream; other cells that are specialized in reading that kind of hormone will now react by maybe releasing some other chemicals, or maybe they will change some of their activities—maybe your heart rate will go up or you will start sweating—all because of this one hormone that was deposited in the bloodstream. That’s one way of coordinating the activities. The other way—and that’s the more sophisticated way—is the one we find in the brain. In the brain, one neuron (if it has, let’s say, some kind of problem), instead of depositing a neurotransmitter that everybody can read, will send a signal to the particular other neurons it’s connected with. Then you get the network, and in the network, of course, you want the right signals to go to the right agents. And then you get this difficult problem that people in neural network theory are trying to solve: how do you get that network to self-organize so that it becomes effective, in the sense that all the neurons together, collectively, are solving these difficult problems that single neurons cannot solve? We know a number of algorithms that do that, of which the most basic one is reinforcing whichever connection worked, meaning that it did something that you wanted.
Let’s say these are roughly the two prototypes of coordination: either stigmergy (you just leave a message in a medium that everybody can read) or neural networks (you send a particular message to one or more particular agents, and if those are the right agents, the connection is reinforced). The next time, they are more likely to get this message.
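The second prototype—reinforcing whichever connection worked—can be sketched as follows. The receivers, their success rates, and the reward rule are invented for illustration; this is a caricature, not an actual neural-network learning algorithm.

```python
import random

# A rough sketch of reinforcement of connections: a sender chooses among
# three receivers in proportion to connection weight, and strengthens a
# connection whenever the message "worked". All numbers are invented.

random.seed(2)
weights = {"A": 1.0, "B": 1.0, "C": 1.0}   # connection strengths
success = {"A": 0.2, "B": 0.9, "C": 0.4}   # how often each receiver helps

def choose(weights):
    """Pick a receiver with probability proportional to its weight."""
    r = random.uniform(0, sum(weights.values()))
    for name, w in weights.items():
        r -= w
        if r <= 0:
            return name

for _ in range(500):
    receiver = choose(weights)
    if random.random() < success[receiver]:  # did the message work?
        weights[receiver] += 0.5             # reinforce that connection

# Connections that worked have grown, so future messages are more likely
# to be routed to the receivers that actually handled them well.
print(max(weights, key=weights.get))
```

Unlike the stigmergic case, the message here is targeted: what self-organizes is the routing, i.e. which particular agent gets the signal.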
Let’s bring this to the noösphere; the concept of the noösphere. Maybe you could just define the noösphere for us, and also kind of update Teilhard. There was his vision, and then there’s where we are today. And it doesn’t have to be exactly the same. Nobody is clairvoyant. And so what was Teilhard’s vision of the noösphere, and then how would you describe it in modern terms, noting both the similarities and the differences?
Maybe I can give a little bit more comment about my own history and how I got to these concepts. In Belgium, let’s say, the ideas of Teilhard have never been far away. It’s not that it was a mainstream philosophy, but I heard about him. I never really studied him, but it was in the background. By the way, the books of Teilhard were published after his death by a Belgian, Max Wildiers. So there was a certain tradition of taking Teilhard seriously. But then, as I said, I came into contact with this cybernetician, Valentin Turchin, a Russian who emigrated to the US. And he had written a book called The Phenomenon of Science, obviously inspired by Teilhard’s The Phenomenon of Man—which is not the best translation, since the original French (Le Phénomène humain) would rather be “The Human Phenomenon.”
Turchin was inspired by this, and particularly by the idea of the superorganism, which he called the super-being. The idea was that you have these different cybernetic levels of higher-level control, higher-level intelligence. As you go through the different transitions, you come to the human level. The human level he described as the ability to think, meaning that you can reflect about things that you are not immediately confronted with, that you are not immediately experiencing. Animals can only react to the things they experience. But then he felt a need to ask: that thinking—is there some kind of control level beyond it? Another control level? Well, it had to be something like a noösphere. He didn’t call it that; we had different names for it. We called it the super-being or the superorganism.
And that was about the time when the world wide web was appearing. So in our little group—Principia Cybernetica: me, Cliff Joslyn, Valentin Turchin—we were already anticipating that we would use the Internet to communicate. And we were looking in particular at some kind of network-like, hypertext-like system, which didn’t yet exist at the time, in 1991. But then I discovered almost by chance that Tim Berners-Lee had this great idea that just implemented what we wanted. Of course, we had even been planning to maybe make some prototypes ourselves. So suddenly there is this world wide web. And very nice: this world wide web has a structure that is kind of brain-like. Because what do you have in the brain? You have neurons connected by synapses. Or, if you look at it at a higher level, you have concepts that are connected by associations. That was also part of the inspiration of Tim Berners-Lee: hyperlinks are associations between texts. So the web is a little bit like a nervous system.
So what I did was immediately connect that to Valentin Turchin’s idea of the super-being or the superorganism, and I came to the idea: well, actually, this world wide web may turn into what I eventually called the global brain. So there you get this very nice idea of some kind of supra-human structure that interconnects all the people, that has a brain-like structure, but that actually is not just a brain; there’s also an anatomy, a physiology in there. So the superorganism idea then became more concrete. And—as several people have done—one of the obvious inspirations was the Living Systems Theory of James Grier Miller, who had this notion that all living systems (by which he meant not only biological systems, but also social systems) have these different kinds of subsystems or critical functions—critical functions like digestion, storage, memory, transport, distribution, et cetera—and these have obvious analogs in society. So the superorganism idea became pretty clear to me. The global brain idea was then the information-processing part of the superorganism. But at that moment, I wasn’t using the concept of the noösphere because, yeah, the global brain seemed to be enough, and it seemed to be actually a more powerful metaphor, maybe, than the noösphere, because a noösphere is just a sphere of thought; it doesn’t say how it functions.
But then, later—that was the work I had been doing with Shima [Beigi]—I came to the conclusion that, within this global brain, there are kind of two levels. There is, you might say, the anatomical level: all the different computers that are connected via links, with information sent from the one to the other, very brain-like. But then there is also the more [???] term of ideas that circulate. And the way I interpreted the noösphere—but I know that’s only part of what Teilhard meant—it’s more this space in which ideas circulate. And that gets me more to the stigmergic paradigm than to the neural network paradigm. In the neural network paradigm, one thing is sent from A to B, and from B to C, and it’s the right thing that needs to be sent from the right agent to the right agent. It’s like: I send an email to David to forward it, maybe, to Terry. I don’t want that mail to be read by just anybody. It’s targeted. But if we now look at things like Wikipedia or social media, they are no longer targeted. You post something publicly and people can see it, and they can react to it or ignore it or publish it further. There you have much more of this stigmergic type of organization, where the particular linking structure doesn’t matter that much. And that creates a very different dynamics—a dynamics that is better in some respects, worse in others.
Let me try to play that back and expand upon it, Francis. What you said, I think, is that if we look at, for example, non-human organisms and superorganisms, we see two different kinds of organization at play: stigmergy and neural networks. They both contribute to the organism functioning as a whole. Now, if we look at the human case, we should see the same thing. We should see something that’s brain-like and something that’s stigmergic. And they both have the effect of causing the whole system to function as a whole. Did I understand that correctly?
Yeah. They both help the system to coordinate activities. They’re both a kind of communication medium through which activities can become more synergetic.
Yeah. And so now I would like to make a new point, which is that although the Internet (and the Internet age, you might say) is the current chapter of this, these ideas remain just as important as we go back in history and look at things such as roads, technology, institutions, bureaucracies. If you look at the cultural evolution of societies at progressively larger scales, then you’ll find the equivalents. You’ll find both stigmergy and nervous-system-like processes, systems of regulation, and so on. And so the concept of a superorganism or a noösphere exists at intermediate scales. And actually we might say—I do, certainly—that it does not yet exist at the global scale. We want it to. It might be on its way. But I would think that in most respects, except in some very special cases (such as the International Space Station or global efforts at solving the pandemic problem, which are very feeble), cooperation and coordination do not exist at the global scale, but they do exist at various intermediate scales. If you look at the social systems that work the best—the best-functioning nations, the best-functioning corporations, the best-functioning religions—there you will see some good examples of brain-like processes and stigmergic processes at intermediate scales. For the most part not yet at the global scale. That’s what we need to create. What are your thoughts on that?
Yeah, I agree. I mean, what the Internet has done is made these things, in a sense, much more visible, because they happen so fast and because we have some idea of the algorithms and the linking structures. It’s easier to see. But actually, as you said, any well-organized social system—whether it’s a government or an army or a firm—has these internal channels of communication that are brain-like, and quite a number of authors have been making that analogy. For example, Stafford Beer, one of the founders of management cybernetics, speaks about the brain of the firm, or the firm as a brain. Herbert Spencer, who was an evolutionary thinker and a father of sociology, was looking at society as a superorganism—though he noted that at that moment there wasn’t yet the equivalent of a brain there, because he couldn’t quite imagine something like the world wide web. So lots of people have been making that analogy, and that analogy is indeed correct.
At the global level, on the other hand, I think maybe you are too pessimistic, in the sense that, when we hear about the things that go on at the global level, there is what I call a bad news bias. That is, what is reported in the media is all the things that go wrong: the wars, the terrorist attacks, the hurricanes, the pandemics. And each time there is a tendency to blame. Like: this hurricane wasn’t dealt with well because money had been saved, maybe on dikes and protections. What people don’t see is all the problems that do get solved—locally or internationally. And in terms of global coordination, I think the best example is science. It’s not just the pandemic at this moment: the whole of science (for at least half a century now) is fully global, fully international. There isn’t something like a Chinese science, and a Russian science, and an American science. There is just science. So I think there is a lot of coordination happening. But when the coordination functions the way you want it to, nobody notices it. So I would be inclined to say we tend to focus on all the things that don’t go well, like attempts to tackle global warming or the problem of the Taliban in Afghanistan. But all the other things…?
The United Nations has these human development reports, which each year give objective measures of how things progress. And practically every measure has progressed: people live longer, people have become richer, people have gotten better educated. A lot of that is because of international aid in the poorest countries, or simply because of the economic system, which is also a coordination mechanism. I mean, we all know the shortcomings of the market mechanism, but the market mechanism, the invisible hand, is one of these self-organizing coordination mechanisms that is highly distributed and highly globalized and can do quite a number of impressive things.
Yeah, well, here's my take on that, Francis, which enables me to be both optimistic and pessimistic at the same time. And it comes right back to this distinction between a complex system that's adaptive as a system, as opposed to a complex system composed of agents following their respective adaptive strategies. Let me state it in two steps. One step, I think, is so obvious that everyone I tell just nods their head because it's so obviously true, namely: every systems engineer knows that you do not optimize a complex system by separately optimizing its parts. No. You have to have the functioning of the whole complex system in mind, and then you have to, for the most part, experiment in order to get the parts of the system to coordinate. So the systems engineering rule: you cannot optimize a complex system by separately optimizing the parts, because that ignores the interactions. Is that something that you can easily agree with? I would hope so.
Well, I agree with the idea that you cannot optimize the whole by optimizing the parts. And actually, one of the things that I remember from our Principia Cybernetica Project was: we made a dictionary of cybernetics and systems (which was not so much our own work; we collected different definitions), and one of the entries is called the principle of sub-optimization. It's not very well known, but it is exactly that: optimizing a subsystem in general does not optimize the whole system. But I want to take issue with what you seem to imply, which is that the engineer needs to optimize the whole. That would mean that the engineer has some kind of utility function for the whole and starts from that. The problem is that the whole is usually much too complex to optimize in this way. So you need mechanisms of self-organization. And for me, the most effective mechanism of self-organization is still local. The definition of self-organization that is most commonly used is: global order from local interactions.
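The principle of sub-optimization mentioned here can be illustrated with a toy numerical sketch (the objective function, its coupling term, and all the numbers are invented for illustration): when two interacting parts are each optimized in isolation, the whole system scores worse than under a joint search.

```python
# Toy illustration of the principle of sub-optimization. The objective is
# hypothetical: two subsystems each pick a setting (x, y), and because of
# the interaction term (-2 * x * y), optimizing each part in isolation
# does not optimize the whole.

def whole_system_value(x, y):
    # Value of the whole system: each part contributes, but they interact.
    return -(x - 3) ** 2 - (y - 3) ** 2 - 2 * x * y

def part_value(z):
    # What a subsystem sees when it ignores the interaction term.
    return -(z - 3) ** 2

# Each part optimized separately picks z = 3 (the peak of part_value).
separate = whole_system_value(3, 3)

# A crude joint search over both settings at once.
grid = [i * 0.1 for i in range(-50, 51)]
joint = max(whole_system_value(x, y) for x in grid for y in grid)

# The joint optimum (-9, reached along the line x + y = 3) beats the
# separately optimized configuration (-18).
assert joint > separate
```

The point of the sketch is only the inequality at the end: the "obvious" local optimum is not the system optimum once interactions matter.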
So the global—all that means is that you get some kind of coordination at the global level. It's not necessarily an optimization, because you can't really optimize anything that is non-trivial. But it is the creation of a regime that functions pretty well at a global level from local interaction—and local interaction is this idea of the different agents mutually adapting to each other. That's also one of the lessons of complex adaptive systems: no agent knows the whole system, but each agent knows its local environment. It knows that if it performs certain actions in that local environment it will get in trouble with other agents, and therefore it experiments until it finds a way that doesn't create trouble in its local environment. But the same applies to all the other agents in that environment. They all try to find a way of behaving that doesn't get them in trouble with their local environment. And if you take the local environments of all these agents together, you have the whole environment, because you go from agent to agent: what is my neighbor is, for you, a second neighbor, and for another agent a third neighbor. In the end, if all the neighbors adapt to each other, the whole system is adapted. Now, I don't think that this mechanism will always avoid the global problems, but it's the mechanism that is most commonly used in nature: this local adaptation that then propagates to the global level.
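This picture of neighbors mutually adapting until coordination propagates to the global level can be sketched in a few lines (the ring layout, the numerical "behavior", and the averaging rule are illustrative assumptions, not a model from the conversation): each agent adjusts only toward its two immediate neighbors, yet the whole population converges to a shared value.

```python
# Global order from local interactions: agents on a ring each hold a
# numerical "behavior" and repeatedly move toward the average of their
# two immediate neighbors. No agent ever sees the whole system.
import random

random.seed(1)
N = 40
behaviors = [random.uniform(0.0, 1.0) for _ in range(N)]

def spread(xs):
    # Global disagreement: distance between the extremes.
    return max(xs) - min(xs)

initial_spread = spread(behaviors)

for _ in range(2000):
    # Each update is purely local: agent i looks only at i-1 and i+1.
    behaviors = [
        (behaviors[i - 1] + behaviors[i] + behaviors[(i + 1) % N]) / 3.0
        for i in range(N)
    ]

# Disagreement shrinks toward zero: global coordination from local moves.
print(initial_spread, spread(behaviors))
```

The design choice worth noting is that the update rule never references any global quantity; the coordination is an emergent side effect of every agent adapting to its own neighborhood.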
Yeah. So again, I agree with you, Francis. We're still on the same page: the idea that we can't just optimize the complex system in some top-down, controlling, engineering sense. And the way I would put the way to do it would be: constant experimentation. Because the whole system is so complex, basically, you have to try out experimenting with the component interactions. But then you always have to be assessing it at the whole-system level. So there's some process in which your selection of the lower-level processes has to be based on their effects on the whole system. There has to be some system-level criterion for what you select, or for what gets selected at the lower levels. And that's, in fact, what takes place in a practical sense with any kind of complex systems engineering. You're always modeling the system, or you're actually experimenting with the system. But what you end up adopting or failing to adopt is based on the consequences at the whole-system level. How can it be otherwise? So, system-level adaptation requires system-level selection. And of course that involves variation and selection at the level of the components of the system. Would you agree with that characterization?
I am not following you completely there. I'm not sure that it is always possible or desirable to tie local adaptations to their effect on the global level, because in most systems in nature it's very difficult to know what the effect at the global level is. So I do think there is a lot of power in these local adaptations. I mean, the idea of the invisible hand is a very beautiful idea. We know that it has its shortcomings (and I can tell you, if you want, precisely where these shortcomings come from), but the idea of the invisible hand is: somewhere there is a lack of a certain thing, supply is not sufficient for the demand, and then those who can supply the thing that is lacking will produce more of it, because they know that if there is more demand than supply, they will get a higher price. So they are motivated to produce more of the thing that's lacking. They are not optimizing at the global level. They are just seeing where there is a demand, and they will try to provide the supply. So I think most processes work like that. There is a local demand for something, and you produce something that meets that local demand, and all these local adaptations mutually reinforce each other. So there will be cases—like in the case of global warming—where the system doesn't work that well, but in many cases I think it works at this really local level.
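The supply-and-demand mechanism described here can be sketched as a classic price-adjustment loop (the linear demand and supply curves and the adjustment rate are made-up numbers): no agent optimizes globally, yet the price signal alone steers the market to the point where supply meets demand.

```python
# A toy "invisible hand": producers respond only to the local price
# signal, and the market converges to equilibrium on its own.

def demand(price):
    # Hypothetical demand curve: buyers want less as the price rises.
    return max(0.0, 100.0 - 10.0 * price)

def supply(price):
    # Hypothetical supply curve: producers offer more as the price rises.
    return 20.0 * price

price = 1.0
for _ in range(200):
    shortage = demand(price) - supply(price)
    # When demand exceeds supply the price is bid up, motivating more
    # production; when supply exceeds demand the price falls.
    price += 0.01 * shortage

# Analytic equilibrium: 100 - 10p = 20p, so p = 100/30, about 3.33.
print(price, demand(price), supply(price))
```

Nothing in the loop knows the equilibrium price; it is discovered by repeated local responses to the shortage, which is exactly the mutual reinforcement of local adaptations described above.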
Yeah. So Francis, I think we're actually zeroing in on an important difference of opinion between us, which is a good thing, because I think, now having identified it, we can maybe come to a resolution on it—if not in this conversation, then over the longer term. But first let me speak as a biologist, which is my home territory, and then state the equivalent for human systems. I think that, in nature, the concept of an organism or superorganism, of adaptation at that level, is actually quite limited. We know that organisms are organisms. We know that beehives are organisms. We know that some ecosystems are organisms, such as microbiomes. But very quickly you enter a region—in terms of such things as ecological communities or even social systems—that I would not qualify as organisms or superorganisms. For example, you get primate societies that are just plain despotic in human terms. The bad guys won and they just bully everyone, and that's the way it is. Okay? Or an ecosystem that's just a basin of attraction: a configuration of species that's stable. It does not work well at the ecosystem level. It's just stable. So a lot of nature is like that. It does not deserve the term "organism." Nothing adaptive is taking place except the war of all against all, in some sense. Suffering is everywhere. The Buddha was right: life is suffering, and suffering is caused by craving and desire. So to be able to see the disorder in nature, in addition to the order of nature, is something that's very, very important.
Now, when we move to human life, I think we see much the same thing. We definitely see organization, and that organization does extend above the individual level for sure. And so we do have societal superorganisms, but we pass a point, a threshold, beyond which what we see does not deserve the term "superorganism" or "adaptation" at that level. We see agents conflicting and competing with each other. We see despotism. We see basically a world of suffering caused by cravings and desires colliding with each other. And there's no invisible hand to save the day there. No invisible hand. The invisible hand is profoundly untrue: the notion that agents pursuing their separate adaptive strategies somehow miraculously function well as a whole system. My claim is that system-level selection is required, just as with systems-level engineering. And when you get that, then the invisible hand does apply: when a system has been selected as a system, it is indeed the case that the elements within that system do not have to have the welfare of the whole system in mind, and the whole thing miraculously comes together as if led by an invisible hand. But that invisible hand was a process of system-level selection. That's my position. And so we're not going to resolve it during this meeting, but I'm really happy to have clarified it. And if we disagree, then I look forward to continuing the conversation.
Actually, by you stating it like that, I thought maybe of a solution. I wanted to introduce the research we are doing now, funded by the Templeton Foundation, which is called The Origins of Goal-Directedness, because I just had an idea of how the approach in this project may bridge the gap between what you say—whole-system selection—and local selection. The starting point of the project is that you have a number of reactions—and reactions are, you might say, simple agents; very simple agents. A reaction is like an agent that senses something: if this is the case, then do this action; if that is the case, then do that action. If you model that like chemical reactions (but they don't need to be chemical; it can be anything of the form A plus B gives C plus D), it turns out that if you throw enough of these reactions together, they tend to form a so-called chemical organization, which is a self-maintaining whole. "Self-maintaining" means that anything that is consumed is produced again. And that leads us to the important concept of autopoiesis.
So for me, of all the definitions of “life” or “organism” that I’ve read, I think the autopoiesis one is still the best one. An autopoietic system is a system that produces itself, and in this way closes itself off (to some degree) from the environment. It creates its own boundary by having its internal logic of reactions producing reactions producing reactions that close in on themselves. So it’s kind of a generalized cycle. The A produces B, that in turn produces A. It’s a cycle: it’s the thing producing itself. So that is one way to define an organism.
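The "self-maintaining" idea just described can be sketched with a deliberately simplified check (real chemical organization theory uses a quantitative, stoichiometric, flux-based definition; this qualitative version is only an illustration): a reaction set counts as self-maintaining here if every species it consumes is also produced somewhere within the set.

```python
# A much-simplified sketch of self-maintenance in a reaction network.
# Reactions are (inputs, outputs) pairs of species names.

def is_self_maintaining(reactions):
    consumed = {s for inputs, _ in reactions for s in inputs}
    produced = {s for _, outputs in reactions for s in outputs}
    # Everything that gets used up must also get made again.
    return consumed <= produced

# The closed cycle from the discussion: A produces B, B in turn produces A.
cycle = [(["A"], ["B"]), (["B"], ["A"])]

# An open chain: A is consumed, but nothing in the set regenerates it.
chain = [(["A"], ["B"]), (["B"], ["C"])]

print(is_self_maintaining(cycle))  # True
print(is_self_maintaining(chain))  # False
```

The cycle closes in on itself, which is the qualitative heart of the autopoiesis definition; the chain leaks, so it eventually runs down.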
But now, the closure—and now we come to the tricky part—the closure is what distinguishes the inside of the organism from the outside, meaning everything happening in the environment. The closure is not a thermodynamic closure. Obviously, to have a living organism, some food must get in and some waste must get out. And then the question is: where are you going to draw the boundary? Now, in chemical organization theory, which is a mathematical formalism, you can define that mathematically. You can say: this is an organization, this is the boundary, these are the things that are outside it, these are the things that are inside it. But organizations can be contained in other organizations. So you can go to a bigger organization, or to a smaller organization. And you can go, let's say, first from a smaller organization to a bigger one that encompasses it, and then from the bigger one again to another smaller one, which is not the same as the one before. So, in principle, you can have an almost continuous evolution, step by step, where you go from one organization that contains certain components to another one that contains almost the same components, but not quite, to another one which contains almost the same components, but not quite. And if you do this for several steps, you have a process of evolution that may lead you to something that's very different from what you started out with.
But notice that this is a process of evolution in which nothing gets killed. The traditional view of natural selection in biology is: you generate a number of organisms, you kill off all the ones that are not adapted, and you keep some. The view in chemical organization theory—or, I might say, more generally in self-organization—is: you create an organization, and you perturb it in such a way that it can't go on in the same way. But instead of killing it (which means erasing it completely and starting anew), you turn it into a different one that is almost the same, but not quite. If that new one is better adapted, fine. But it may turn out that this new one is still not very stable, and a new perturbation comes. Again, it has to change. It's not killed—
Ah, but you killed it. That’s how you killed it. You killed the old one.
You didn’t really kill it.
There is death there.
No, not really. It’s more like the difference between societies. I mean, if, in Belgium, a new government is elected to a different party, you could say: well, the system has changed. But nobody will say that Belgium has died and now it’s a new Belgium. In organizations you constantly have these—
Well, so Francis, we might be splitting hairs.
What I mean—it’s selective, but it’s not killing off the whole thing. You select by removing the regime that didn’t work, replacing it by a different regime. But a different regime is not a blank slate. You don’t start from nothing.
Well, but that’s not true in nature.
You start from something that has most of the same ingredients, but slightly adjusted.
Yeah, so Francis, we might be splitting hairs at this point. And I want to do a check with Alan as to whether there are important points. I think this has been a great conversation. But just to take the business world: I'm really impressed; I have a lot of fun reading the business and management literature. I don't know if you read that literature, Francis. But one of the things I've discovered is that most innovation in the business world takes place not by companies changing as companies, but by them going under and being replaced by new startup companies. And so there's real death there. I mean, legal corporations just cease to exist, and they're replaced by other corporations that were born in some sense, and they're different. And of course they incorporate elements of other cultures and so on. But if that's not a process of death, I don't know what would be. It's legal, social entities no longer existing. So I think that there's a sense in which cultural evolution of course requires the replacement of worse practices with better practices. And those worse practices, we hope, cease to exist. So I'm not sure that there's a fundamental distinction to be made, with the basic process of selection—cultural or genetic—being a process of differential survival and reproduction.
So Francis, we’ve been talking at a very high theoretical and scientific plane, but we’re part of a project called The Human Energy Project, which is very much based on Teilhard, and what we call a third story, basically—that there’s something about this which can be understood by many and really captivating to many. So there’s a narrative end of this, which is very much centered on Teilhard per se. You described Teilhard as someone that you encountered early on and was kind of part of the air over there in Belgium, and then you re-encountered and so on. But he wasn’t really center stage, especially not in a narrative sense along with the concept of the noösphere. So let’s come back (or let’s finish up, actually) with the Human Energy Project, and the idea that Teilhard and the noösphere can become part of a third story that could be widely understood and valued by everyone. So let’s finish up that way.
Well, the third story is also something that I have been working on, without calling it that, for ages. The research center where I work is called the Center Leo Apostel. And Leo Apostel was a famous Belgian philosopher who had aims similar to Teilhard's. I think he probably was even inspired by him to some degree, though his background was different. And he wanted what he called an integrated worldview. The idea is that science has become fragmented into disciplines and sub-disciplines, and that the traditional worldviews we inherited no longer give answers to our big questions. So what we need is a worldview that gives meaning to life, that tells us what our position is here in the universe. Leo Apostel defined the concept of a worldview in terms of six components. I'm not going to list them here, but in essence they are about creating a meaningful life.
And creating a meaningful life—why do we need a third story for that? Because the second story, the story of science, is not meaningful: it tells you that the universe just follows some laws that happen to be there, and we don't know why. It's all like a clockwork: it's just running, you don't have anything to do with it, and it doesn't go anywhere in particular. Or, in the newer versions of the scientific worldview, it's all very random and unpredictable and chaotic. So there is no sense of direction. What the third story should do is give us this sense of direction. But we don't want to go back to the first story, the traditional religious story, where God has some kind of purpose for the universe and we are just fulfilling God's design. We don't want that either. We want a worldview where there is this sense of direction, but no predetermination. And if you look at the more recent scientific worldviews—the worldview of evolutionary theory, of complex adaptive systems, of self-organization—you see that processes have directions, but they are not deterministic. We cannot predict what evolution will produce, but we can predict that some things will not happen, that some things will end badly, and that some things may work out well.
So for me, the third story is this idea that there is a directionality to evolution. Teilhard formulated that in a pretty simple way: it is his law of complexity-consciousness. It says, first, that during evolution complexity increases—something which I think most people would intuitively agree with, although some evolutionary biologists are skeptical about it. But the second part is even more interesting: it is that consciousness will increase. Then, of course, you need to define: what is this consciousness? Just recently I read a paper by a biologist called Michael Levin, who formulates an idea that's actually very similar to my own in that respect. It is that, if you go up in evolution to more and more sophisticated organisms, and you look at their horizon in space and time—that is to say, the things that they can remember, imagine, or perceive to be things that have happened or could be happening—then, as you go from a bacterium to a multicellular organism, to a simple animal, to a human, that horizon expands. We become more and more aware of things that are happening, or may be happening, far removed from our immediate spatial-temporal neighborhood. And that could be a simple interpretation of this law of the growth of consciousness: the field of consciousness—that means the things you can be conscious of—tends to increase with evolution. And there's a good reason why that should be: evolution is based on natural selection. Natural selection means fitness, means the ability to deal with all kinds of problems and challenges. And the wider your horizon is, the better you can see any possible challenges coming, the better you will be ready to deal with the difficult ones and to protect yourself from them, and the better you will also be ready to deal with the positive ones: to exploit the opportunities.
So I think this view of the superorganism is telling us that we should expand our horizon of consciousness from the individual human to the level of humanity as a whole. That will increase complexity, because a superorganism is more complex than a simple organism. But it will also increase our consciousness. For example, now we are conscious of what's happening in Afghanistan. A hundred years ago, that would never have happened. Now, we should of course be careful not to simply say that the wider the spatial range, the more conscious we are. We can be very much aware of things happening in Afghanistan that are not at all relevant to the future of humanity. So there is also the quality of the consciousness, which is something I have been doing research on as well, and which is a little bit more difficult to explain in a few words. But I think we can unmistakably say that evolution goes together with this increase of complexity, in that we go from organism to superorganism, and with increasing consciousness, in the sense that we become aware of more and more things that are not immediately in our personal neighborhood.
Yeah, that’s great, Francis. That was very eloquent. And let me add my own perspective on that from the standpoint of evolution. Going all the way back, basically, to pre-Darwinian notions of evolution such as Herbert Spencer, who you already mentioned, were of course progressive. And this is part of the Enlightenment movement. We get people like Auguste Comte with his religion of man, and Herbert Spencer, and these were all more or less secular worldviews that were very value-laden, were functioning as worldviews, were functioning in the same capacity as religion. And then came Darwin and Wallace, and this amazingly simple concept of variation, selection, and replication as the mechanism of evolution. And with the rediscovery of Mendel’s work, that led to a period of evolutionary biology and theory which we can look back upon as being amazingly constricted. Constricted. That all the purpose was drained away from evolutionary theory. Now it was just the organisms vary and only the immediate environment does the selecting. And so the whole study of evolution went away from Teilhard’s vision. Teilhard’s vision was one in which he saw it as a metamorphosis of the Christian religion; evolution is a metamorphosis of the Christian religion.
And then, what we can say now, which I think is so exciting and amazing: by going back to basics, as I put it, and seeing Darwinism as any process that combines the ingredients of variation, selection, and replication (which I think is how you began this conversation), we bring cultural evolution back in, with ample room for a conscious component of cultural evolution. All of a sudden, we're back to where evolution can provide a narrative which is similar in spirit to Spencer and very similar to Teilhard. And it can be a science-based worldview that can inform us, along with complexity. So evolution and complexity: the new foundation for our third story. I'm sure that portions of our conversation are going to be over the heads of a lot of the people listening to this, but the third story is actually something that can be deeply intuitive, commonsensical, something anyone can understand. And so I'm so happy to be a part of that to the best of my ability.
I wanted to maybe add an argument for this progressive evolution. It's an argument that I developed in a paper more than twenty years ago which became quite well-cited. By way of anecdote: my collaborator Shima discovered that paper when she was doing her PhD, and that's what brought her to come and work with me. The paper is called The Growth of Complexity During Evolution, and that was a controversial idea at the time, because you had this approach among evolutionists, Stephen Jay Gould among them, that evolution is just adaptation to local circumstances. Like: there's an ice age, and in order to survive you need thick fur, so you evolve into something like a mammoth. Then the ice age ends and it becomes warmer, so the mammoth needs to evolve into something with thin skin. But then the climate turns cold again, so the thick fur comes back. It's just random adaptation to whatever vagaries there are in the environment.
So the argument I made is this: yes, it's good to be adapted to your environment. But since the environment is variable anyway, the more adaptability you have, the better off you are. So there is a tendency to evolve into systems that can adapt to a wide range of things. If they can adapt only to a particular range of things, they are likely to be eliminated by the next change in the environment. This increase in the range of things we can deal with is something that can even be expressed in the form of a cybernetic law, called the Law of Requisite Variety. The cyberneticist Ashby said, in essence: if you have to deal with a variety of perturbations, then the more variety of actions you can perform, the better you will be able to deal with all these problems.
So there is, you might say, a selective pressure for living systems to increase the variety of things they can do. And increasing the things they can do means, on the one hand, acquiring the physiological and anatomical features to do these things—which means increasing complexity—but it also means a kind of cognitive increase: an increase in the number of things they can sense and in the amount of knowledge they have about how to deal with these circumstances. So the selective pressure is to increase the range of challenges you can deal with, and that is a progressive evolution; there is no doubt about that. Suppose you have two systems: one that can deal with situations A, B, and C, and another that can deal with situations A, B, C, and D. As long as the situation is A, B, or C, they're both equally good. But the moment the situation changes into D, the first one gets killed off and the second one remains. So there is a selective pressure to be able to deal with a wider range of things. And that implies all this progressive evolution towards more intelligence, more consciousness, more flexibility, more complexity, et cetera. So the third story is really that evolution is progressive. We learn to deal with a wider range of situations. We expand our consciousness to a wider range of challenges and opportunities.
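The A, B, C versus A, B, C, D argument can be sketched directly (the challenge sequence is invented for illustration): both repertoires survive equally well until the first D arrives, at which point only the broader repertoire remains.

```python
# A toy model of selection for requisite variety: an agent survives a
# challenge only if the challenge is in its behavioral repertoire.

specialist = {"A", "B", "C"}
generalist = {"A", "B", "C", "D"}
population = {"specialist": specialist, "generalist": generalist}

challenges = ["A", "C", "B", "B", "A", "D", "C"]  # D arrives eventually

for challenge in challenges:
    # Selection step: any agent that cannot handle the challenge is
    # eliminated from the population.
    population = {
        name: repertoire
        for name, repertoire in population.items()
        if challenge in repertoire
    }

print(sorted(population))  # only the generalist survives the D challenge
```

Until the D challenge, the filtering step removes nobody, which is the point made above: the extra variety pays nothing in a stable environment and everything in a variable one.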
Yeah. That’s so great. And the story that’s emerging for human evolution is that it did take place in a period of extreme climatic instability—very prescient for our current times. But that evolution of flexibility, Francis, does cut both ways. Because if it turns out that the environment does become stable, and it’s always A, B, C, but not D, then plasticity that includes D is irrelevant and pretty soon is not going to extend in the D direction. So the degree of plasticity is going to respond to the degree of environmental variability. But that said, I think the story that’s emerging is that, actually, during this period of climatic instability, all mammals, their brain size increased over evolutionary time. So they all became more plastic. But humans, of course, became plastic in a new way: their capacity for cultural evolution—which, by the way, is inherently cooperative. So human cooperation and human adaptability, I think, are joined at the hip.
But Francis, I wanted to finish up with one final point. I think so much integration is taking place—and needs to take place—between evolutionary theory and complexity theory. Those two bodies of thought are only now growing together. And a major difference that I encounter again and again has to do with whether the emergence of the noösphere, the global superorganism, is something that's just going to happen bottom-up, or whether there needs to be some more deliberative process of selection—which is not the standard engineering kind; we both agree upon that. On one view, if we want the Earth to function as a whole system, we really have to be very deliberative about what we select, with the whole system in mind; on the other view, that's not needed so much, and it's something that will just happen. And I'd love to know your thoughts on that as, perhaps, our final topic.
Well, even though I mentioned the invisible hand, I'm not really a believer in laissez-faire economics, and the same applies to governing society as a whole. You need to leave space for all these bottom-up processes of self-organization, but in many cases it's necessary, or at least more efficient, to also have some top-down processes. The point is that with these top-down processes, as you said before, you have to be able to experiment. You should not assume: "We know best. That's the way we are going to organize society. That's the way it's going to work." No, from the top down you also need to experiment and say, "Let's try this policy." For example, we now have the problem of what to do with people who don't want to get vaccinated against Covid. At the global level, it's best that everybody get vaccinated, but there are some quite big groups that, for various reasons, don't want to be. And then you can experiment. You could, for example, say: we will make it obligatory to have a vaccination to work in certain places; or we will create privileges for people who are vaccinated, so that they can do more things; or we can try to make it easier for people to get vaccinated. All of these are strategies that, from a top-down point of view, may work. But you don't really know whether they will work until you apply them. And I think that is one of the approaches we need to think about: yes, the top-down approach is useful, it's important, but you need to think and to experiment with it, because there is no obvious way to do it.
That’s great, Francis. That embodies Prosocial’s philosophy, which we call bottom-up meets enlightened top-down. So that’s the way that we put it. Okay. Well, I think we’re done. I think this has been a great conversation. I’ve learned a lot from it, Francis, and it’s been really one of the nice things about this project with Ben, is that my opportunity to get to know you and your work much better.
Yeah. I had already known about your work, but not in as much detail as I should have. So I'm also happy to get an opportunity to get to know your work better. And I hope that, within the project, we may be able to come closer. I think we were already coming closer. The whole issue is selection at the global level versus selection at the local level, and how much of each. I think we can come to some solution.