The Global Brain as an Emergent Structure from the Worldwide Computing Network

September 4, 1994

Published 1995 in The Information Society, Volume 11, № 1.


Abstract

We propose that the existence of a globally and tightly connected network of computer workstations such as the Internet can lead to the emergence of a globally self-organized structure which we refer to as the Global Brain.

Associated with that structure would be the capability for higher levels of information processing which can be harnessed to build new kinds of models that are more rapid and appropriate in their responses to ever-changing, non-equilibrium situations. To gain insight into possible characteristics of a Global Brain, we examine some functional properties of biological brains and discuss how their functional analog could be manifested in a Global Brain.

We then explore the implications of a Global Brain for simulation modelling, and present an eight-step process for constructing models which utilize the Global Brain. We review some of the tools that are currently available to make the information and simulation resources of the global Internet system accessible to researchers wishing to participate in the development of this type of model. We discuss some potential applications to regional crisis management that might result from this approach. We conclude with a consideration of some of the implications of a Global Brain.

1 Introduction

We know from synergetics and the nascent field of complexity science that a highly interconnected set of “units” or “nodes” can give rise to emergent structures and behaviors that the individual units alone do not exhibit—that is, the behavior of the “whole” can be different from the sum of the behaviors of the “parts”. Thus we consider here the possibility for the emergence of a “Global Brain” as a consequence of rapidly growing worldwide computer networks such as the Internet. By Global Brain we mean a self-organized information processing system where the collective activity of the many individuals and models working on the system yields higher levels of information processing than we ourselves as individuals can attain, just as biological brains manifest globally-distributed, high-level cognitive abilities from the complex, highly interconnected web of lower-level information-processing cellular units.

Russell proposed in The Global Brain that a Global Brain might emerge from a worldwide network of humans who were highly connected through communications1. He based his argument on the observation that, throughout evolution, qualitative transitions to a new level of organization have occurred in several instances where a system attains approximately ten billion (10^10) units that are tightly but flexibly coupled. Examples include the number of atoms in a simple cell (such as E. coli)2, and the number of cells in the cortex of the human brain (the area responsible for cognitive functions such as perception and thinking). Since the world population (5.7 billion, 1994) is within an order of magnitude of 10^10 and growing, the threshold for a new level of organization, by his arguments, could be reached soon3. Thus Russell saw the network of interconnected humans forming a Global Brain; we expand the concept to include computers—not only as communication links between humans, as Russell used them, but as active information-processors alongside humans.

Thus our concept of the Global Brain is a network of interconnected humans and computers from which higher levels of information processing can emerge.

What would a Global Brain “look like”? What could it do? Would we be able to detect its emergence? To address these questions, we turn in Section 2 to an examination of biological brains, in the hopes that they may provide clues as to the Global Brain’s characteristics. Admittedly the analogy between the Global Brain and biological brains is not perfect, but it is the only basis available at present, and we feel the possibility for gaining new insights outweighs the limitations of the analogy. Then in Section 3 we discuss the implications of the Global Brain for simulation modelling, and present an eight-step process for creating models which utilize the capabilities of the Global Brain. In Section 4 we apply this modelling technique to a real-world example—the Balkan crisis. We close in Section 5 with a consideration of some of the implications of the Global Brain.

2 Characteristics of Biological Brains Which May Provide Clues to the Global Brain

We examine biological brains at two levels: the cellular level, and the level of cell populations.

At the cellular level we recall that there are approximately 10^10 neurons in the cortex of the human brain, that part of the brain from which the higher levels of information processing emerge, such as learning, memory, thinking, and problem-solving. The brain’s nerve cells form a network that communicates by a combination of electrical and biochemical processes. These neurons vary in morphology and neurochemistry, but in general share the same three structural components:

  1. highly-branching dendrites, which receive input from 10^4 to 10^5 other neurons and carry it to the cell body;

  2. the cell body, which contains the trigger zone at which the input signals are integrated and output pulses are initiated; and

  3. the axon, which carries the output to up to 10^3 other neurons.

Communication between neurons takes place at specialized regions known as synapses. Some neurons have short dendrites and axons so that their input and output are local, whereas other neurons have processes that extend into other regions of the brain, so that their interactions are more distributed. The rapid (exponential) dropoff in effective coupling strength with distance enables localized processing for some kinds of information, whereas tight coupling to more distant areas of the brain enables highly directed global processing of other kinds of information.

The coupling is so tight that between any two arbitrary neurons there are not more than 3–5 synapses. Thus the cortex of the human brain represents a network of 10^10 nerve cells interconnected by means of up to 10^15 synaptic connections, with the coupling being a complex mixture of local (nearest neighbor) and global coupling.
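As a rough consistency check on these figures (our arithmetic, using the zeroth-order uniformity assumption that reappears below): if each of the N ≈ 10^10 neurons connects to k ≈ 10^4 others, the typical number of synaptic steps separating two arbitrary neurons is

\[
d \approx \frac{\log N}{\log k} = \frac{\log 10^{10}}{\log 10^{4}} = 2.5,
\]

consistent with the quoted 3–5 synapses once non-uniform, partly local connectivity is taken into account.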

At the level of cell populations, we can speak about several different kinds of cell groupings. First, there is the cell assembly, which is a group of cells whose synapses have been modified by learning so that those cells, or a subset of those cells, will respond jointly to a particular stimulus; cell assemblies are believed to be the basic unit of learning. Secondly, neurons may be grouped according to the type of connections they manifest—whether excitatory or inhibitory, for example, or feedforward or feedback. Thirdly, the term “cell population” can refer to a cluster of neurons whose summed activity is recorded on a single electrode. All three types of cell populations can exhibit collective behaviors or properties which are quite different from the behaviors manifested by individual cells.

For example, two characteristics of the electrical activity of brain cell populations as measured by the EEG during perception are:

  1. broadband, 1/f-type, aperiodic activity,

  2. interspersed with periods of brief (on the order of 100 ms), synchronized oscillations (Figure 1).

Figure 1: Sample EEG from the rabbit olfactory bulb, indicating the broadband activity interspersed with brief periods of oscillation (A), and the coherence across the array during these oscillatory periods (B). Broadband activity and synchronous oscillations are also found in other animals, including humans, and in other sensory areas such as the visual and somatosensory cortices. However, the low-frequency respiratory wave evident in (A) is not present in other sensory systems, and the occurrence of synchronous oscillations is not as regular as in the rabbit olfactory bulb data. A) Unfiltered rabbit olfactory bulb EEG, about 1 second long and approximately ±800 microvolts, showing the high-frequency (65–80 Hz) bursts superimposed on the lower-frequency respiratory wave. (Courtesy of Walter J. Freeman). B) Brief EEG segments from 60 electrodes (4 × 7 mm array) on the rabbit olfactory bulb, taken during one of the oscillatory burst periods, and illustrating the widespread coherence of the oscillation. The data were filtered between 10 and 1000 Hz, the x-axis range is 100 msec, and the y-axis range is ±400 microvolts with negative upward.

The broadband aperiodic activity is considered an indication that the brain activity is chaotic and that the brain utilizes this chaos in its information processing; the brief, synchronized oscillations are considered critical for “binding together” the information from a wide variety of cells into a coherent perception of a stimulus. For example, from experiments in the cat visual system where the subject is presented with a bar-shaped stimulus, it is known that some neurons in the visual system respond to the orientation of the bars whereas other neurons respond to the direction of movement of the bars; yet according to the current “binding hypothesis”, it takes the synchronous activity of all the participating neural groups to create the “perception” of a complex object such as a moving train. Oscillations coherent between different brain areas, e.g. the visual and somatosensory domains, can also be observed, for example, in experiments of associative conditioning: our brain can learn rapidly that red light is associated with an electrical stimulus—those events are bound together by coherent oscillations. Since these characteristics of cell populations in the brain (broadband 1/f-type activity, and synchronous oscillations) are associated with developmental maturity and perceptual information-processing, they may be relevant for detecting the emergence of a Global Brain, as well as for utilizing its information-processing power4.
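Both signatures are measurable, which suggests a concrete way one might screen network activity traces for them. The following sketch is our illustration only: the synthetic two-channel traces, the 70 Hz burst, and all parameters are invented stand-ins for real traffic measurements.

```python
# Sketch: screening activity traces for the two EEG-like signatures
# discussed above -- a broadband 1/f-type spectrum and brief synchronous
# oscillations. The synthetic traces and all parameters are illustrative.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 256
t = np.arange(0, 10, 1 / fs)                      # 10 s of "activity"

def background():
    # Brownian noise: a simple stand-in for broadband 1/f-type activity
    return np.cumsum(rng.normal(size=t.size)) * 0.05

# two "channels" sharing one brief (100 ms) 70 Hz oscillatory burst
burst = np.where((t > 4.0) & (t < 4.1), np.sin(2 * np.pi * 70 * t), 0.0)
ch1, ch2 = background() + burst, background() + burst

# 1) broadband test: slope of log-power vs log-frequency (about -2 here)
f, pxx = signal.welch(ch1, fs=fs, nperseg=512)
slope = np.polyfit(np.log(f[1:]), np.log(pxx[1:]), 1)[0]
print(f"spectral slope: {slope:.1f}")

# 2) synchrony test: cross-channel coherence peaks in the burst band
f2, cxy = signal.coherence(ch1, ch2, fs=fs, nperseg=256)
print(f"coherence near 70 Hz: {cxy[np.argmin(np.abs(f2 - 70))]:.2f}")
```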

In summary, then:

  1. there are many different types of neurons in the brain;

  2. the brain has many different regions, characterized by distinct cell organization and interconnectivity;

  3. these variations in cell type, organization, and connectivity are considered critical to the brain’s capacity for complex information processing.

We now make the analogy between the human brain and the Global Brain. Since the two systems differ greatly physically, we formulate the analogy in terms of functions that are performed, rather than in terms of the physical units themselves.

  • The central information processing unit in the Global Brain is the user and his/her workstation, analogous to the neuronal cell body with its trigger zone.

  • The input-output links in the Global Brain are the Internet “lines”, which include not only the cables, but also the satellite/microwave links, etc.; they function as the pathways for input and output to the user at the workstation on the Internet, just as the axons and dendrites function as the input and output pathways for the neuronal cell body with its trigger zone.

  • The basic unit of communication, or exchange, in the Global Brain is a message (packet) sent between two information-processing units on the Internet, analogous to a message (spike-train-initiated chemical or electrical signal) sent across a synapse between two neurons in the brain.

Thus we take the connectionist viewpoint. In its most basic, abstract sense, we can say that we are looking at a system of information processing units, or nodes, linked to each other with connections which transmit input to and output from the processing units. Invoking concepts from complexity science, we can view both the cognitive abilities of the biological brain and the problem-solving capabilities of the Global Brain as other levels of capability which emerge from this interconnected system once the system is “sufficiently complex”—for example, once there are enough information processing nodes and the connection/node ratio is sufficient, once the connections are “the right kind”, and once the level of activity in the network is sufficient.

Given this analogy, which is admittedly not perfect in all details but which nonetheless contains some strong resonances, what can we predict about the characteristics of a Global Brain, and its emergence? Based on our understanding of biological brains we would predict that its structure and activity would include the following elements.

  1. Local as well as global connections: Local connections could be those within a department or institution, for example, and global connections could be the worldwide access offered by the Internet.

  2. Cell assemblies: The Internet equivalent to cell assemblies would be groups of users formed in response to events or topics occurring within the network or impinging on the network from without. For example, in response to input from regional-crisis-reporting nodes on the network, certain groups of users (often personally related to the crisis region) would become active when some new event or critical development was reported; the group or network of users who became active at this point would represent the cell-assembly equivalent.

  3. Different regions specialized for different functions: “Regions” here denotes location in physical, geographical space as well as the super-cyberspace represented by this network.

  4. Use of multiple modalities for information transfer and processing: Just as the brain uses electrical and biochemical modalities to effect information transfer and processing, so can multiple modalities be utilized on the network. For example, auditory and visual signals can be integrated on the network through the link-up of TV, video, and computer, or all five sensory modalities could be integrated with the network through virtual reality link-ups to the Internet system.

  5. Brief synchronicity: In analogy to processes in the brain, it is not sufficient merely to retrieve vast amounts of information—the information must be integrated to form a “perception” or “insight”, which is not possible by retrieval alone. This process of binding together different features to form one consistent perception of a complex object is believed to occur in the human brain through a short event of coherent oscillation among a large number of spatially distributed neurons, as described above. We therefore predict that brief periods of synchronous activity will appear in the network as the Global Brain develops.

  6. Associative memory: Key to our brain’s processing abilities is associative memory, whereby a large mass of distributed information can be retrieved by a small and relatively unspecific trigger. For example, a simple odor can trigger the memory of complex previous events. Translated into a physiological context this means that the stimulation of a small number of neurons can cause the excitation of large areas in the cortex that have been associated with the given stimulus through learning. This property is facilitated by the large connectivity of neurons in the human brain. If each neuron is connected to 10^3–10^4 other neurons it takes (under the zeroth-order assumption of uniformity) about 2–4 connections (synapses) to reach any arbitrary other neuron in the brain. There exists empirical evidence that supports this figure (see Braitenberg & Schuez, Anatomy of the cortex: statistics and geometry for details). In the Global Brain context we can think of a question for which we know that somewhere in this world there exists one person who knows the answer (a typical situation in a fairy-tale). If we only assume that everyone knows about ten to a hundred people then we could search for that person by choosing among our acquaintances the one who might know someone who might know someone who… might know that person. If we call that person and ask her/him for the phone number of the person that (s)he knows who might help us, etc., it could take us as little as five to ten phone calls to find that unique person (see also Milgram: The small world problem). A computational sketch of this search appears after this list. (This assumes that there are no disconnected network islands like, for example, in academic disciplines: the magic person who knows the answer to our question might be in the office down the hall, but we do not know about it since we might not talk to engineers if we are a physicist, or to psychologists if we are an economist.)

    One prerequisite of associative memory in a computer network is global, real-time5 access to information which is distributed not only geographically but also across disciplines. This access—and thus this associative-memory-type capability—becomes possible through non-local coupling of computer and information resources in global networks, such as the Internet. Network access can allow the process of collecting data and performing experiments to be integrated with modeling.

    A second prerequisite for associative memory on the global network is a highly-interconnected structure of knowledge links. This feature is currently being ported to the Internet via the World Wide Web (WWW). For example, an expert is to a certain degree characterized by his/her professional connections.6 Or in other words, if we want to become an expert in a field, we can start by reading the most recent technical article. We will find statements that assume prior knowledge of the publications in the references. If we read the books or papers in those references, the same situation may recur for several levels of recursion. It can be expected, however, that we would not have to penetrate more than five levels of references before becoming a reasonable expert in any field, at least locally as it relates to a specific problem. In the context of the WWW, research institutions as well as individual researchers are now starting to create hyper-media documents that describe their work (publications, data, simulation results); almost more importantly, these documents typically provide access to the personal links to other nodes that the person or institution finds important. This process creates a linked structure which is analogous to the associative network of biological brains or the telephone network of experts mentioned above.

  7. Integrated processing: We can consider the effective connectivity in the Internet by using the Unix command “ping” to measure the round-trip time of a packet of data7. Within a local area network we have a latency of only a few milliseconds; within the US the current latency typically is below 100 ms (e.g., Urbana–Berkeley: 70 ms). To a typical site in Germany (e.g., mailserv.zdv.uni-tuebingen.de) we have a latency of 300 ms, and to Japan (e.g., keio.ac.jp) the latency currently is 500 ms, but to Australia (e.g., anu.edu.au) only about 250 ms. Currently the connection between Japan and Germany is via the US and the latencies are additive8. These global values are improving rapidly, and we can anticipate the time when the global latency approaches the theoretical limit set by the finite speed of light (about 150 ms; a rough bound is worked out after this list). If we approach latencies comparable to those in the brain (and correspondingly high data transfer rates) then we have a truly shared situation, where the location of the data or the execution of the program does not matter anymore. Instead of sending a floppy disk with data, a program, and instructions about how to install it, we can leave both program and data in place, access both remotely, and generate a synchronous problem-solving approach, based not on parallel computation at the machine level but on the integrated activities of humans and computers across the global network.

  8. Dreaming: A frequently asked question in discussions about the global network is related to “information overload”: How can we process all these terabytes of information that are accessible through the Internet? We can look at the natural brain, which is exposed to terabytes of data every day, through sight and sound, for example. Roughly we could assume one megabyte for each image that we see, and if we assume a temporal resolution of thirty images per second then we would perceive about one terabyte of visual information if we spend ten hours with open eyes. We know that the brain has developed many efficient ways to pre-process this information before it gets stored at a much lower rate. In addition we have evidence that one function of dreaming may be to further reduce the information that we want to keep stored in that part of our memory that is easily accessible to us, i.e., memories that we can recall voluntarily. According to current theories (see, for example, van Hemmen et al., Hebbian unlearning of spatio-temporal patterns) neural nets learn many spurious patterns that can be unlearned during a procedure that can be interpreted as dreaming. The memory is thereby restructured so that the relative sizes of the basins of attraction of the stored patterns are modified, increasing the chance of remembering the more essential patterns while most of the others are forgotten9. (A minimal sketch of such unlearning appears after this list.)

    One can draw many analogies to the functional role of dreams in the context of the Global Brain. We suggest considering computer simulations of models about the world as fulfilling some of these functions; simulations could identify models that produce outcomes that are obviously wrong and reject them. The implication would be that the focus is concentrated on a smaller set of more reasonable models. None of these models would have to be discarded, but the time and cost of retrieval would be greatly modified. Maybe this is a procedure that has already happened in the history of science, where valid ideas in books or papers in some inaccessible archive have been forgotten by the world for centuries until someone discovers them and their value in a new global context. In the context of current global information flows and hyper-media documents this role of global dreaming might become increasingly relevant.

    Another function of dreams—to compose new realities by putting together familiar elements into novel constructs—can also be realized in computer simulations and in virtual reality environments. As in dreams, this can be done without any constraints by physical or other laws. Modern simulation interfaces will also allow several individuals to participate in the same simulation across the Internet. One complex example of how this might look in a geographically-oriented simulation is the multi-player version of SimCity that runs under X-windows on Unix workstations.
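Before moving on, here are brief illustrations of points 6, 7, and 8 above. First, the acquaintance-chain search of point 6, as a minimal sketch (our construction: the population size, the acquaintance count, and the uniformly random “knows” relation are invented for illustration):

```python
# Sketch: the "five to ten phone calls" estimate from point 6, tested on
# a random acquaintance graph. Population size, acquaintance count, and
# the uniformly random "knows" relation are illustrative assumptions.
import random
from collections import deque

def acquaintance_graph(n_people, n_known):
    # each person "knows" n_known others, chosen uniformly at random
    return [random.sample(range(n_people), n_known) for _ in range(n_people)]

def calls_needed(graph, source, target):
    # breadth-first referral: how many hops until we reach the target?
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        person, hops = frontier.popleft()
        if person == target:
            return hops
        for friend in graph[person]:
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, hops + 1))
    return None  # stuck on a disconnected "island", as the text warns

random.seed(1)
g = acquaintance_graph(10_000, 30)
chains = [calls_needed(g, 0, random.randrange(10_000)) for _ in range(20)]
print(chains)  # typically ~3 hops: log(10**4) / log(30) is about 2.7
```

The hop count grows only logarithmically with the population, which is why even a global population stays within the “five to ten phone calls” of the text.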
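Second, the light-speed limit quoted in point 7 follows from simple arithmetic (our back-of-the-envelope bound): the longest terrestrial round trip spans one full Earth circumference, so in vacuum

\[
t_{\min} \approx \frac{4 \times 10^{7}\,\mathrm{m}}{3 \times 10^{8}\,\mathrm{m/s}} \approx 133\ \mathrm{ms},
\]

and somewhat longer in optical fiber, where signals propagate at roughly two thirds of the speed of light; hence a floor on the order of 150 ms for antipodal round trips.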
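Third, the unlearning procedure cited in point 8 can be sketched in a few lines (our toy rendering of Hopfield-style “dreaming”; the network size, pattern count, and unlearning rate are illustrative assumptions): patterns are stored by a Hebbian rule, the net then relaxes from random states into whatever attractors it finds, and each one is weakly unlearned, which preferentially erodes spurious memories while the stored patterns remain retrievable.

```python
# Sketch of Hebbian "unlearning" (dreaming) in a Hopfield-style net.
# Network size, pattern count, and the unlearning rate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, n_patterns, eps = 100, 10, 0.01
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian storage: outer-product rule, no self-coupling
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def relax(state, sweeps=20):
    # asynchronous updates until the state settles into an attractor
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# "dreaming": relax from random states and weakly unlearn whatever
# attractor is found -- spurious mixture states are eroded preferentially
for _ in range(200):
    s = relax(rng.choice([-1, 1], size=n))
    W -= eps * np.outer(s, s) / n
    np.fill_diagonal(W, 0.0)

# a stored pattern should still be recalled from a noisy cue (~25% flipped)
cue = patterns[0] * rng.choice([1, 1, 1, -1], size=n)
print(np.mean(relax(cue.copy()) == patterns[0]))  # close to 1.0
```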

In addition to the above-listed attributes, we would predict from our understanding of biological brains that the capabilities of a Global Brain would include the ability to learn, to adapt, and to habituate. We would expect it to solve problems in a decentralized manner and in a changing environment. We would expect it to be robust in the face of many perturbations. It would be capable of developing automated forms of responses for:

  1. routine tasks that are vital to the life of the system, in analogy to the autonomic system in the brain which controls breathing, blood pressure, etc.; and for

  2. some kinds of external threats, in analogy to the blood-brain barrier which prevents toxins from passing into the brain from the blood, or to the microglia cells which scavenge damaged tissue in the brain.

We would expect the Global Brain to develop an alerting mechanism, analogous to the P300 alert signal seen in the brain’s evoked potential. The Global Brain would develop a response to the external environment that goes beyond simple reflex capabilities to active shaping of the environment and choice of responses. Thus, a Global Brain derived from a complex information and communications network composed of people and computers would be able to sense and respond to the world outside that network as well as within that network, with abilities that would be analogous to our brain’s abilities but which would surpass the abilities of our own brains.

Under what conditions might we expect such an entity to emerge? In accordance with concepts from complexity science, we predict that the worldwide information and communications network needs to surpass a certain threshold for a Global Brain to emerge, and that the passing of this threshold will produce a sudden state change, or transition, from the non-global-brain state to the global-brain state10. We predict that this threshold is determined by:

  1. the number of nodes in the system;

  2. the connection density (# connections/node);

  3. the type of connections involved;

  4. the level of activity in the system.

The criteria for emergence of a Global Brain depend upon what kind of brain one is discussing (a sea slug versus human brain, for example, or a newborn versus adult brain). We therefore expect that the Global Brain will exhibit a similar evolution from diffuse nerve nets to small ganglia to a more centralized, brain-like organization, with a concomitant evolution towards higher and higher levels of information-processing abilities11. Figure 2 depicts changes in biological brain connectivity during evolution; Figures 3 and 4 depict current computer network connectivity worldwide and within the US, respectively.

Figure 2: The organization of the brain during evolution (A–F), (see text) compared with the organization of current large-scale computer networks (see Figures 3 and 4). (Scales of the figures vary.)
Figure 3: Total News flow on USENET. Line width is proportional to directional effective flow volume (DECWRL netmap-2.1 by Brian Reid at Thu Apr 2 01:41:31 1992, Stereographic Projection, Image resolution 300/in., stroke limit 5 pixels).
Figure 4: The organization of the NSFNET T1 backbone in the United States, which is also part of the Internet. (Colors indicate inbound traffic measured in billions of bytes in September 1991, ranging from 0 bytes (purple) to 100 million bytes (white). Reprinted by permission of the authors, Donna Cox and Robert Patterson at the NCSA, University of Illinois at Urbana-Champaign, 1993); the data were collected by Merit Network, Inc.

Brain evolution proceeded from simple diffuse nerve nets (Figs. 2A, B), to collections of nerves in the form of nerve strands and one or more ganglia (Fig. 2C), to nerve cords, multiple ganglia, and a small brain (Fig. 2D), and finally, to the human system (Figs. 2E, F), with its greatly enlarged brain, a spinal cord, and complex peripheral nervous system. Current large-scale computer networks, from a worldwide view (Fig. 3), are concentrated in the USA and Europe, analogous in appearance to ganglia or even a two-lobed brain. An alternative view would be that the system is in a much earlier phase of evolution where the network is rapidly expanding throughout the entire world to form a diffuse net covering the surface, with a few ganglia that coordinate some kinds of activity but do not (yet?) exercise coordinated information processing and decision-making for the entire world “organism” as a brain does. The organization of these large-scale networks at the level of the USA (Fig. 4) more closely resembles this ganglionic stage.

In more detail, Figure 2A shows the nervous system of Hydra, a small marine organism in the same phylum (Coelenterata) as jellyfish and corals. Each line element represents a single nerve cell (by Hyman, 1940, as shown in Bonner’s The Evolution of Complexity by Means of Natural Selection). Figure 2B depicts the nervous system of Actinia, a small marine organism also in the phylum Coelenterata. Each line element represents a single nerve cell (after Wolff, 1904, as shown in Kuhlenbeck’s Invertebrates and Origin of Vertebrates). Figure 2C illustrates the nervous system of Planocera, a small marine organism in the phylum Platyhelminthes. Each light line element represents a single nerve cell, whereas the darker lines represent collections of neurons into strands and, near top center, a simple ganglion (after Lang, as shown in Kuhlenbeck). Figure 2D shows the generalized body plan of an insect, which is in the phylum Arthropoda. Each line represents a bundle of neurons which forms a connecting tract between pairs of ganglia; there is also a small brain. Not shown is the extensive network of individual cells extending from this network of connectives (after Dobzhansky et al., 1977, as shown in Shepard, 1983).

Figure 2E depicts the human nervous system. Only the major nerve tracts are shown, since the density of smaller-sized tracts and individual neurons throughout the body is far too great to be depicted on this scale. Figure 2F presents more detailed views of the human nervous system—the central nervous system (left), and the autonomic nervous system (right). Again, smaller-sized nerve bundles and individual neurons are not depicted in this figure, so that the network of connectivity—especially in the brain—is greatly underestimated by this figure alone.

Switching from an evolutionary perspective to a developmental perspective, we expect that the activity of the Global Brain will change its characteristics (its power spectrum, for example) as it develops (Figure 6), again with a concomitant increase in information-processing abilities.

Figure 5: Brain activity, as measured by the EEG (A), compared with computer network activity, as measured by percent utilization of a link between two nodes (B). In both cases the time series reflect the activity of a population of information processors—a population of nerve cells in the case of the EEG, and a population of users connected to the two NSFNET nodes in the case of the utilization curve. Activity of the EEG becomes more broadband in frequency as a person develops from infancy to childhood and adulthood (C), in response to such factors as increasing nerve conduction velocities with increasing myelination and nerve radius, for example. This increase in the richness of the frequency content is associated with increased cognitive ability in humans. Thus we would predict that as the Global Brain develops, time series of its activity would become more broadband. A) Human cortical EEG (voltage as a function of time) from three electrodes on the somatosensory cortex, recorded for 2.5 seconds during a somatosensory discrimination task. The data were filtered at 0.03 Hz and 100 Hz during acquisition, and digitized at 256 Hz. The y-axis range is -195 to +280 microvolts. B) Computer network activity between two nodes of the NSFNET for 1 week. The y-axis ranges from 0 to 80 percent utilization of the link capacity.
Figure 6: Human EEG as a function of development, from infancy to childhood, illustrating the increasing breadth of frequencies with age.

3 The Global Brain and Simulation Modelling

The emergence of a Global Brain opens vast frontiers for simulation modelling.

In the preceding section we examined biological brains to explore what the characteristics of a Global Brain might be—what do these insights tell us about how we might change the way we build or use models?

The Global Brain offers unique input, interaction, and problem-solving capabilities for models. The models we envision are object-oriented, so that nodes can be swapped in and out as the questions and conditions for the model change (a minimal sketch follows below). These models utilize online data acquisition—from weather stations, user group polls, or electronic news services, for example—to provide the latest information to the model. Already today a number of computational servers—for example, for database access—are available within the World Wide Web and NCSA-Mosaic environment12. These models incorporate qualitative as well as quantitative information, to take advantage of intuitive, non-quantitative insights for identifying and solving problems. These models are not restricted to the computing power at one’s location, because they can utilize resources across the network. Thus the Global Brain offers the opportunity to construct models which respond more rapidly and more appropriately to highly nonlinear, nonequilibrium situations.
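As a minimal sketch of the swappable-node idea (our illustration: the Node interface and the climate nodes are invented, not from the original), each component conforms to a small interface so that a static table, a live feed, or a different sub-model can be exchanged without touching the rest of the model:

```python
# Sketch: object-oriented model nodes that can be swapped in and out.
# The Node protocol and both climate nodes are illustrative assumptions.
from typing import Protocol

class Node(Protocol):
    def step(self, state: dict) -> dict: ...

class StaticClimate:
    """Stand-in node: climate read from a fixed table."""
    def step(self, state: dict) -> dict:
        return {**state, "temp_c": 15.0}

class LiveClimate:
    """Swap-in replacement: same interface, fed by an online source."""
    def __init__(self, feed):
        self.feed = feed                      # e.g. a weather-station reader
    def step(self, state: dict) -> dict:
        return {**state, "temp_c": self.feed()}

def run(nodes: list[Node], state: dict, steps: int) -> dict:
    for _ in range(steps):
        for node in nodes:                    # each node transforms the state
            state = node.step(state)
    return state

# swapping the data source changes no other code:
print(run([StaticClimate()], {}, 3))
print(run([LiveClimate(lambda: 21.5)], {}, 3))
```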

The result is models which:

  1. continually “sense” their environment to provide up-to-date information;

  2. utilize distributed and parallel processing to enhance processing speed and capability; and

  3. develop problem-solving capabilities beyond those of individual humans or existing traditional models, through the same kind of emergent behavior that has been observed for many other complex adaptive systems.

We present now a practical eight-step process for creating these kinds of models which utilize the Global Brain. We then describe each step in detail, with specific examples of how the resources of the Global Brain can be applied.

  1. Define general goals for the solution of the problem.

  2. Acquire qualitative information on the current status of the problem.

  3. Create a conceptual model of the integrated system.

  4. Define sub-areas within the general conceptual model where a quantitative approach appears to be promising.

  5. Obtain current quantitative data and identify models that deal with the solution of any of the sub-problems.

  6. Link data and simulation models to an interdependent, distributed network.

  7. Perform the simulation and sensitivity analysis.

  8. Compare the results with the updated information from (ii) and (v), and evaluate them with respect to the target specified in (i).

Steps (i)–(v) and (viii) are standard steps in model-building; what distinguishes this approach from others is how these steps are implemented, and the inclusion of steps (vi) and (vii)—the utilization of an interdependent, distributed network and the application of sensitivity analysis. Each of these steps is discussed more fully below.

  1. Define general goals for the solution of the problem.

    The definition of goals goes hand-in-hand with the identification of problems.

    There is often a hierarchy of goals and problems, from the more general or extended to the more specific or local. For the formulation of a complex adaptive model such as the genetic-algorithm-based model used by Forrest and Mayer-Kress, one assigns a “relevance” or “priority” weighting factor to each of the goals; these weighting factors reflect the modeller’s policy objectives and thus the goals for the model. An evaluation function is formulated to integrate each of the individual goals into one global target; this evaluation function corresponds to a fitness function which the model then tries to optimize using the genetic algorithm technique inspired by biological evolution13. (A toy version of such a weighted evaluation function is sketched after this list.)

  2. Acquire qualitative information on the current status of the problem.

    Vast amounts of quantitative data are now available, such as that from the satellite-based Earth Observing Systems (EOS). (EOS, for example, will transmit daily an amount of data which would be the equivalent of about 100,000 complete works of Shakespeare.) But we also have to use knowledge that is less quantitative and more based on wisdom and insights that cannot be easily described in terabytes. Future modeling approaches will have to tap into the global wisdom of informal, anecdotal, and descriptive knowledge as well as into the results of extensive quantitative analysis and supercomputer computations.

    Several sources for such qualitative information are already available on the Internet, through news groups, mailing lists, and Internet Relay Chat (IRC) which offer the network user the opportunity for direct personal interaction.

    In News Groups, messages from individual users are posted to an electronic bulletin board and can be retrieved by users through special news software that connects a personal computer or workstation with a news server (i.e., a computer that stores the news items). News Groups are organized by specific topics. A key criterion for the usefulness of net-news, besides transmission speed, is the quality of the user interface. The amount of available information is often overwhelming, and the fraction of items which are pertinent can be very small. Therefore the speed with which one can identify a message’s relevance is crucial, and tools which allow the user to filter out messages according to author and/or topic are highly recommended.

    Mailing lists are a very efficient and flexible way to create the spontaneous, informal exchange of information and ideas within a group of people. While net news groups require a certain amount of administration (there are regulated procedures for setting up new groups, their content is archived, etc.), mailing lists use only electronic mail for the organization of the information exchange. In that way temporary mailing lists can be set up by individual groups of users—for example, as an alternative to conference telephone calls. They have the advantage that an arbitrarily large number of readers can be reached within minutes, so that quasi-interactive discussions are possible. Unlike conference calls, mailing lists automatically provide documentation of the discussion. While small mailing lists can be set up very quickly by just collecting names of users into one mail alias, this is certainly infeasible for large international or global mailing lists. In those cases, automated mailing-list servers handle most of the administrative work.

    Subscription and cancellation of membership is done by sending a “subscribe” or “unsubscribe” note to a mailing list server. Some of these large mailing lists provide not only discussions among participants but also act as news services, distributing news from such global groups as UPI, RFE, and ClariNet, and sometimes offering detailed news from local sources (e.g. VREME, a Belgrade-based weekly).

    Internet Relay Chat (IRC) is a tool that allows continuous conversation in a virtual lounge to chat about selected topics. It is very efficient for rapid answers to simple questions, since there is a good chance that someone out there who knows the answer is listening in on the conversation. Also, in this informal environment, people get to know each other and can create a community which is loosely connected all over the world. Physical location becomes less relevant—the fellow user from Australia or Finland might turn out to be the better expert in some areas than the colleague from MIT or Berkeley, for example. One problem is the continuous demand for attention from the chat-channel, which has led to the development of “cyber-friends”, programs that take over and chat on the net while the real person is busy doing something else.

    A serious problem with all of these informal information exchanges is the danger of misinformation, by intention or by mistake. Procedures need to be developed to protect users against planted fake news and data, and users need to develop a critical approach to information from the net. This problem, though, is a general problem of public information.

    Network-based systems have the advantages that the author can be questioned publicly and immediately, and the public (as represented by the users) can be solicited for confirmation or alternative opinions.

  3. Create a conceptual model of the integrated system.

    By a conceptual model we mean a mapping of the observed system onto an abstract structure that extracts the “essential” properties of the original system. A conceptual model can be a diagram of the relevant parts or elements in the observed system, or a flowchart of activities or events. In academic or policy circles there is often a gap between informal forms of analysis such as conceptual models and formal, quantitative models. Large-scale computing networks will help to bridge this gap by providing environments in which both forms of analysis are integrated. Recent progress in object-oriented graphical user interfaces (GUIs) such as Diagram might assist in conceptually understanding and modelling the complexity of an evolving interacting system.

  4. Define sub-areas within the general conceptual model where a quantitative approach appears to be promising.

    Many modelling attempts suffer from the problem that the jump from a very unspecific conceptual analysis to a highly complicated formal model happens too abruptly and is sometimes very difficult to justify in its details. This problem is especially common in disciplines with a tradition that emphasizes quantitative models—for example, in econometrics where routines for mapping concepts onto categories of models have been developed which make it tempting for the user to focus on mathematical details and the implications of the outcome, without spending much effort in questioning the validity of assumptions and approximations. Therefore, we suggest the approach of using conceptual models as far as possible and going to quantitative simulations only as much as necessary. The development of intuitive human-computer interfaces, visualization, and audification of complex structures and processes can be helpful in developing better models and in rapidly detecting problems of the model that would be hidden in a purely quantitative description.

  5. Obtain current quantitative data and identify models that deal with the solution of any of the sub-problems.

    A variety of sources for information on quantitative data and models is already available on the Internet through electronic information servers, electronic library catalogs, anonymous ftp-sites, and network search tools such as Wide Area Information Servers, Gopher, and World Wide Web. Electronic information servers are organized by topics; some examples are “Lexis” for law-related information and the National Environmental Data Referral Service (NEDRES). Electronic library catalogs offer the opportunity to search library holdings from one’s own desk. Increasingly, libraries can provide the publication via electronic media—either by fax or by scanning in the text and mailing it electronically—thus further reducing the time it takes to locate and acquire desired information. Networking between the libraries to optimize service by reducing work duplication poses a considerable challenge.

    Anonymous ftp-sites are storage media on some computer systems which the public can access to obtain quantitative information as well as software, sound, images, etc. The number of these publicly accessible sites is so large that manual searches are basically hopeless. Therefore a number of network search tools have been developed which allow the user to search through the Internet for items that match a search topic. To our knowledge the first such tool was Archie, a network tool that essentially searches through listings of all ftp-archives. More sophisticated hyper-text and hyper-media search tools have recently been developed. The most common ones are Wide Area Information Servers (WAIS), Gopher, and World Wide Web (WWW). Information servers can be indexed for each of these network tools, so that specific information about the structure and content of a specific information source will be reported to the information servers. All of these information servers are linked together, so that information which is accessed through one server is also available on the other servers. Very recently, meta-search tools have been developed, where the search is not done on the ftp-site level but on the level of indexed sites. One example is VERONICA (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives), which searches for information by keyword, across most of the Gopher-server menus in the entire Gopher web. Other network tools include Prospero, Knowbots, and Netfind, which are described in Schwartz et al., A Comparison of Internet Resource Discovery Approaches.

    There are other specialized network and information systems besides the Internet.

    For example, there is TIES (Telecom Information Exchange Service) of the United Nations, and a recently installed electronic mail system for the member countries of the Conference on Security and Cooperation in Europe. These systems have evolved largely in parallel and only recently have been integrated into a globally interconnected network of networks. This parallel and decentralized evolution of sub-nets is reminiscent of the evolution of nerve networks in biological brains.

  6. Link data and simulation models to an interdependent, distributed network.

    For a successful global modelling approach it is not sufficient to have a good interface with a local computer and database. One of the main challenges will be the interconnection of all the distributed computational and information units, including the establishment of efficient communication among the researchers who work jointly on a distributed project. Multimedia electronic mail offers a way to go beyond the limitations of conference calls and fax machines, for project collaborators separated geographically. The Sequoia 2000 project of the University of California uses distributed data management tools to make about 100 terabytes of global change data available to researchers on the Internet. Several computational tools are available where this linking can be done in a very intuitive, transparent way—for example, in the form of active diagrams, which are diagrams where the graphic representation and the functional links are efficiently integrated.

    In addition to information servers there are simulation servers. Since modern computer workstations are increasingly used as mail, information access, and local storage tools, computation tends to be done remotely on large computers that are accessed through some network. Software becomes more modularized and object-oriented, so that many codes (either in compiled or in source form) are shared through anonymous ftp sites on the Internet. Relatively generic programs, such as algorithms of a mathematical library, are mainly shared as source codes. Other software is more specific and depends on the type of computer. More recent software has been categorized according to the window-system that is used for the front end; X-windows and NeXTStep are examples.

    As long as these programs run locally and are not dependent on a special environment (special data or code libraries, for example), they can be easily transferred and installed.

    In more complicated cases, the installation of software on a new computer can be considerable work. For those cases, an alternative is to use software that can run a computer “kernel” on a remote machine, with a “front-end” on a local workstation. Mathematical software is now available that allows the organization of the “program” into a “notebook” that contains not only the software but also the corresponding documentation as well as the results of the computation in multimedia format. Multiple notebooks can be launched from a NeXT workstation, for example, where each notebook is linked to a kernel on a computer server anywhere on the network. From the remote kernel one can then access remote data and libraries without the problem of a port to the local machine. Future implementations of distributed models will have to provide similar integrated capabilities.

    There are already simulation servers on the network with traditional command line interfaces; the user connects to the simulation server and then follows an interactive command menu. The desired parameters are inserted, a batch-job is submitted, and the results of the simulation are sent out via electronic mail. One elaborate system for geophysical problems can be accessed together with the corresponding databases. These more traditional simulation servers can be linked to integrated front ends, which make the connections automatically and send the appropriate commands to the server. The concept of having computer servers completely transparent to the user on the network is discussed in a meta-computer context. In that scenario specific programs can be run from any personal computer or workstation connected to the network, and they can be executed on any computer or supercomputer without the intervention of the user who runs the program. For the user the appearance would be that of a single, powerful computer environment.

  7. Perform the simulation and sensitivity analysis.

    In Mayer-Kress’ Chaos and Crises in International Systems, sample simulations utilizing the kind of complex, adaptive modelling we propose here are described. The simulations and the sensitivity analysis draw heavily upon concepts from the theory of nonlinear dynamics and chaos. The modern notion of chaos describes irregular and highly complex structures in time and in space that follow deterministic laws and equations, in contrast to the structureless chaos of traditional equilibrium thermodynamics. We summarize the organization of chaos and order in generic self-organizing systems that are removed from thermal equilibrium and put under some form of stress (an example would be a fluid in a pot on a stove, where the level of stress is given by the rate at which the fluid is heated).

    Close to equilibrium there exists no spatial structure—the dynamics of the individual subsystem is random and without spatial or temporal coherence.

    Beyond a given threshold of external stress the system starts to self-organize and form regular spatial patterns (rolls, hexagons) which create coherent behavior of the subsystems. The order parameters themselves do not evolve in time at this stage. Under increasing stress, however, the order parameters themselves begin to oscillate in an organized manner: we have coherent and ordered dynamics of the subsystems. Further increase of the external stress leads to bifurcations to more complicated temporal behavior, but the system as such is still acting coherently. This continues until the system shows temporal deterministic chaos. The dynamics is now predictable only for a finite time, and the length of the time period of predictability depends on the degree of chaos present in the system, decreasing as the system becomes more chaotic. The spatial coherence of the system is destroyed and independent subsystems emerge which interact and create temporary coherent structures. In a fluid we have turbulent cascades where vortices are created that decay into smaller and smaller vortices. At the limit of extremely high stress we are back to an irregular type of chaos where each of the subsystems can be described as random and incoherent components without stable, coherent structures. This stage has some similarities to the anarchy with which we started close to thermal equilibrium. Thus the notion of chaos covers the range from completely coherent, slightly unpredictable, strongly confined, small-scale motion to highly unpredictable, spatially incoherent motion of individual subsystems.

    Chaos theory, therefore, suggests the use of simulations with local, short-term predictability and thus a high capacity for adaptability. Adaptability requires:

    • simple, low-dimensional models which incorporate intuitive insights as well as more formal, quantitative ones;
    • fast and direct access to, and integration of, current global data;
    • multimedia user interfaces for efficient representation and interpretation of the results;
    • efficient individual archiving and retrieval system;
    • sensitivity analysis to identify potential problem domains.

    By sensitivity analysis we mean applying quantitative techniques from the theory of chaos to characterize regions of parameter space in terms of how sensitive the system is to small perturbations or influences. The collective parameter regimes in which the system is highly sensitive are the prime candidate scenarios for crises, and they can change over time. (A minimal sketch of such an analysis appears after this list.)

    As mentioned above, we know from the study of nonlinear and chaotic systems that only short-term predictions are possible if the system exhibits chaos. Therefore, a typical time scale of five years between formulation and verification of a traditional global model appears to be too long in a world where time scales can be significantly shorter than one year (e.g., the eruption of regional conflicts such as those recently in eastern Europe). Future models will have to be object-oriented with links to other models and information systems, and they will have to be adaptive to changing basic conditions.

  8. Compare the results with the updated information from (ii) and (v), and evaluate them with respect to the targets specified in (i).

Further discussion of this final step in the process is presented in Hasselmann (1992) and Forrest & Mayer-Kress (1991). This completes our description of the eight-step process we propose for constructing models which utilize the Global Brain; two brief sketches of steps (i) and (vii) follow. We then turn, in the next section, to an example of how this kind of model could be applied to political crisis management.
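First, a toy version of the weighted evaluation function of step (i) (our construction: the goal names, weights, bit-string encoding, and mutation scheme are all illustrative placeholders, not from the original):

```python
# Toy version of step (i): weighted goals folded into one fitness
# function that a genetic algorithm then maximizes. All names, weights,
# and the bit-string policy encoding are illustrative assumptions.
import random

GOALS = {                       # policy objective -> relevance weighting
    "minimize_suffering": 0.5,
    "discourage_heavy_arms": 0.3,
    "support_aid_convoys": 0.2,
}

def evaluate(policy, goal):
    # stand-in scoring: fraction of "resources" (bits) given to the goal
    i = list(GOALS).index(goal)
    chunk = policy[i::len(GOALS)]
    return sum(chunk) / len(chunk)

def fitness(policy):
    # the single global target: weighted sum over all goals
    return sum(w * evaluate(policy, g) for g, w in GOALS.items())

def step(population, mu=0.05):
    # one generation: keep the fitter half, refill with mutated copies
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: len(ranked) // 2]
    children = [[b ^ (random.random() < mu) for b in random.choice(parents)]
                for _ in range(len(ranked) - len(parents))]
    return parents + children

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(30)] for _ in range(40)]
for _ in range(50):
    pop = step(pop)
print(round(fitness(max(pop, key=fitness)), 2))  # approaches 1.0
```

The weighting factors play exactly the role described in step (i): changing them shifts which trade-offs the optimizer favors.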
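Second, for the sensitivity analysis of step (vii): a standard diagnostic from chaos theory is the largest Lyapunov exponent; positive values mean that nearby trajectories diverge exponentially, so only short-term prediction is possible. A minimal sketch on the logistic map (our choice of toy system; a real model would substitute its own update rule and parameters):

```python
# Sketch of step (vii): flag parameter regions where the model is highly
# sensitive to small perturbations, via the largest Lyapunov exponent.
# The logistic map is an illustrative stand-in for a real simulation model.
import math

def lyapunov(r, x=0.3, n=2000, burn=200):
    # average logarithmic stretching rate |f'(x)| along the trajectory
    total = 0.0
    for i in range(n):
        if i >= burn:
            total += math.log(abs(r * (1 - 2 * x)) + 1e-12)
        x = r * x * (1 - x)
    return total / (n - burn)

# scan the "stress" parameter r; positive exponents mean only short-term
# predictability -- the prime candidate regimes for crises
for r in (2.8, 3.2, 3.5, 3.7, 3.9):
    lam = lyapunov(r)
    print(f"r = {r}: lambda = {lam:+.2f}{'  (sensitive)' if lam > 0 else ''}")
```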

4 Sample Application: A Global-Brain-Utilizing Model for Crisis Management

In this section we ask the question: how might a system of globally-connected users and computing resources lead to new approaches to managing man-made or naturally-induced crises on a local, regional, or global scale? Traditional world models such as those related to the Club-of-Rome studies are based on well-defined, closed systems of variables, parameters, and equations. More recent approaches also incorporate the integration of multiple levels and heterogeneous structures of sub-models. We, however, would like to suggest models which process information in a way that is analogous to how our brain models its complex and continuously-changing environment—that is, we seek models which are adaptive, which can evolve, and which apply to nonequilibrium and nonlinear systems. These models are complex in that they can embody a group of processing units from whose collective activity higher levels of processing can emerge (see, for example, the artificial life models in Langton et al., Artificial Life II). Thus the models can have the capability and flexibility of multiple levels of integration. Depending on the context, the level of resolution can be adapted to the problem, as in hydrodynamic models, for example, where adaptive grid methods automatically distribute the processing power to areas where high degrees of detail are required.

In crisis management, quantitative models such as those originated by Richardson (1960) and Lanchester (1956) have been used to attempt to understand the arms competition among nations or the dynamics of battle. The models are characterized by a number of global variables (arms expenditures of nations, attrition rates of armies in battle, etc.). Thus, as long as there are well-defined countries and armies, these models have a chance of describing some relevant aspects of the system. For example, the operational planning of NATO forces has utilized Lanchester-type models to estimate the requirements (troops, firepower, logistics, etc.) that are needed in order to achieve a well-specified military goal, such as getting Iraqi troops out of Kuwait. One of the main problems for military planners in situations such as the Bosnian crisis is the lack of well-defined military objectives between the extremes of “Drop enough bombs until the fighting stops” and “Send in enough troops to protect any civilian against any aggressor.” Also, instead of well-defined countries and armies, there are weakly coherent military units, partisans, militias, and independent terrorist gangs and robbers, so that the concepts from traditional crisis management models do not apply. The persistence of the Bosnian crisis, for example, despite the efforts of the United Nations (UN), the European Community (EC), and the United States (US), among others, testifies to the reduced effectiveness of traditional crisis management models in such situations.

An alternative type of model is one which utilizes the Global Brain and which uses not integrated representations of nations in an arms-race or armies in a battle but individual actors as the elementary units. With the help of supercomputers the simulation of a few million simplified agents should be feasible and should capture the essential elements that can lead to different types of collective behavior. For example, modern computers are fast enough to run scenarios with an ensemble of (simplified) decision patterns for each of the 4.5 million Croats, 2.2 million Bosnians, and 11.5 million Serbs. The quantification probably could be done in many cases through assigned probabilities for specific actions. What one would expect from such simulations is that global properties would emerge which could give new insights into the effects of different crisis management measures on the stability of problematic regions.
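To make the agent-level idea concrete, here is a deliberately tiny sketch (our illustration: the states, transition probabilities, and grid neighborhood are invented placeholders, not estimates for the region): each agent carries a behavioral state, acts with assigned probabilities, and is influenced by its neighbors, so that collective patterns can emerge at the population level.

```python
# Deliberately tiny agent-based sketch: individual actors with assigned
# action probabilities and local influence. All states, probabilities,
# and the toroidal grid neighborhood are illustrative placeholders.
import random

STATES = ("calm", "aggressive", "fleeing")
BASE_P = {"aggressive": 0.02, "fleeing": 0.01}   # spontaneous rates

def step(grid):
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            neighbors = [grid[(i + di) % n][(j + dj) % n]
                         for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            hostile = neighbors.count("aggressive")
            if grid[i][j] == "calm":
                # spontaneous escalation plus pressure from hostile neighbors
                if random.random() < BASE_P["aggressive"] + 0.15 * hostile:
                    new[i][j] = "aggressive"
                elif random.random() < BASE_P["fleeing"] + 0.10 * hostile:
                    new[i][j] = "fleeing"
    return new

random.seed(2)
g = [["calm"] * 50 for _ in range(50)]
for t in range(40):
    g = step(g)
flat = [s for row in g for s in row]
print({s: flat.count(s) for s in STATES})   # emergent regional pattern
```

Even this toy exhibits the kind of emergent collective behavior (spreading clusters of aggression and flight) whose response to simulated interventions is what a full-scale model would study.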

Although we discuss here the inclusion of individual (personal) actors and their psychological motivations, in a realistic model one would also include the possibility of aggregated actors at different levels, depending on the context. For example, we know that world opinion will play an important role in the decisions of state governments, which then will influence decisions by the UN Security Council, which will in turn influence the actions of UNPROFOR troops and NATO. Again we can see that in principle it might be important to disaggregate the models of these global actors—for example, the UNPROFOR troops directly interact with local personal actors and are exposed to many psychological factors (not only fear for their lives, but also temptations due to their immense relative wealth and power compared to the local populations). Corruption certainly plays a significant role as well. Therefore their behavior might have to be included in a microscopic, agent-based model on the same level as the local fighters. The main point seems to be that they are not acting as a (military) unit directly implementing the decisions of UN institutions; many decisions are made locally, often in direct contradiction to UN decisions14.

The norms model of Axelrod (1986), as well as the educational software programs SimCity and SimEarth, uses similar models of interacting individual agents that exhibit emergent global behavior in the form of coherent dynamical structures. In our case these dynamical structures would correspond to collective behaviors such as aggression, flight, breaking of sanctions, and smuggling. The design of responses to these different collective behaviors can draw upon recent advances in nonlinear mathematics and chaos theory regarding the control of chaos and complex systems. The sensitivity analysis described in step (vii) above, also derived from nonlinear dynamics, can be used to identify sets of conditions under which the system is likely to be extremely unstable, and thus most prone to ignite into a crisis upon the occurrence of a trigger event.

The first step in such a model is to define the goals for the solution of the crisis. For the UN in relation to Bosnia, for example, some goals might be:

  • discourage violation of international, humanitarian law
  • minimize suffering of the civilians
  • discourage snipers and use of heavy arms
  • encourage and support the supply of the civilian population with humanitarian aid.

The second step is to acquire qualitative information on the current status of the problem. In the case of the Bosnian situation, for example, traditional information sources such as one’s personal contacts and local library resources can be augmented by news groups and mailing lists that focus on the region, such as Bosnet, Croatian-News, and VREME. In addition to traditional polls conducted by interview, mail, or phone, polls can be taken on the computer network, making it possible to monitor them over time and to use the inputs (in combination with results from more traditional fact-finding missions) for updating the models.
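
As a hedged illustration of this last point, a stream of yes/no poll responses arriving over the network could continuously update a model input; the update rule and smoothing factor below are our own assumptions, not part of any pilot described here:

```python
# Running update of a model input (e.g., the estimated probability that
# an individual would choose a given behavior) from network poll
# responses. The exponential smoothing factor alpha is an assumption.
def update(estimate, response, alpha=0.05):
    return (1 - alpha) * estimate + alpha * (1.0 if response else 0.0)

estimate = 0.5                          # prior estimate
for response in (True, True, False, True, False, False, True):
    estimate = update(estimate, response)
print(f"updated estimate: {estimate:.3f}")
```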

For example, polls could be taken on issues related to what conditions would be necessary for individuals to choose certain behaviors. The results from the model can then be checked against conditions that would indicate the outbreak of a conflict. An example of a poll related to the crisis in the former Yugoslavia is presented in Figure 7; it details Serbo-Croat statistics acquired by traditional means prior to the outbreak of the civil wars15, but similar polls could be conducted regularly on the computer network now to provide up-to-date information for crisis management models for the region.

Figure 7: Opinion poll taken prior to the outbreak of the Balkan conflict.

To formulate a conceptual model, which is the third step of this process, some guiding questions in the case of Bosnia might be:

  • Who are the major actors in this crisis?
  • How are these actors influencing each other?
  • What are external parameters that influence the decisions of the actors?

Answering these questions and representing their answers in some structured form (graphical or symbolic) would constitute a conceptual model. A simple example of a conceptual representation of the answers to the first question—Who are the major actors?—is presented in Mayer-Kress’ Global Information Systems and Nonlinear Methods in Crisis Management; it depicts four different ways that the actors in the Balkan conflict could be grouped, broken down into several different categories within each group.
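
As a toy illustration, the answers to the first two questions could be encoded symbolically as a directed influence graph; the sketch below takes its structure from the influence chain discussed earlier and is otherwise illustrative:

```python
# A toy symbolic form of a conceptual model: actors as keys, directed
# influence links as values. The particular links follow the influence
# chain discussed in the text; a full model would contain many more.
influences = {
    "world opinion":       ["state governments"],
    "state governments":   ["UN Security Council"],
    "UN Security Council": ["UNPROFOR troops", "NATO"],
    "UNPROFOR troops":     ["local actors"],
    "local actors":        ["UNPROFOR troops", "world opinion"],
}

for actor, targets in influences.items():
    for target in targets:
        print(f"{actor} -> {target}")
```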

Once a full conceptual model has been established, the next steps are to define sub-areas within the general conceptual model where a quantitative approach appears to be promising, and to design the quantitative models. As mentioned above in the introduction to this section, an example of a quantitative model for the Bosnian situation might be one composed of 18.2 million interacting units with different decision patterns, to represent the 18.2 million Bosnians, Serbs, and Croats; in this case one would expect the simulation to yield different global properties or structures in response to different scenarios.

The next step is to link the simulation models and data sources into an interdependent, distributed network. The data sources include qualitative sources such as news groups and mailing lists and quantitative sources such as online weather or economic updates, in addition to more traditional sources such as information from references or previous studies. A pilot version of a crisis detection and management model was developed within the EarthStation project (Mayer-Kress, 1991).

It used a hierarchical network in which each node corresponded to an “object” such as:

  • a simulation tool;
  • a chart or image, which may be annotated with sound messages (news clips), or which itself can act as a background for new networks;
  • a program that connects to another unit, such as a multimedia device or another computer on which a different type of program can be launched;
  • a network on a lower level.

The links connecting these objects could be easily adapted in the project’s graphic, object-oriented programming environment. The models were constantly updated with news from the network as well as from newspapers and news broadcasts, and issues which no longer seemed strongly coupled to the global dynamics were dropped. Thus one big advantage that such object-oriented, network-linked systems have over classical world-model programs is that the contents and structure of the simulation and visualization tools can be easily updated as the state of the real world changes and evolves.
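
The flavor of such an object network can be suggested in a few lines of present-day code (the class names are invented for illustration; the original pilot used a graphic, object-oriented environment rather than a textual language):

```python
# Minimal sketch of the hierarchical object network described above.
class Node:
    """A node in the hierarchical network; links can be rewired freely."""
    def __init__(self, name):
        self.name = name
        self.links = []

    def connect(self, other):
        self.links.append(other)

class SimulationTool(Node): pass   # e.g., an arms-race model
class Chart(Node): pass            # chart/image, possibly sound-annotated
class ExternalProgram(Node): pass  # connects to another machine or device
class SubNetwork(Node): pass       # a network on a lower level

model = SimulationTool("Richardson arms-race model")
viz = ExternalProgram("Silicon Graphics visualization")
model.connect(viz)                 # adapt links as the world state evolves
```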

Once the data and models are linked into a network, the simulation is performed along with a sensitivity analysis. The pilot project discussed above implemented a type of sensitivity analysis on a simulation of a three-nation Richardson arms-race model. The model was one node of the network, and a program linking to a Silicon Graphics computer was another, providing visualization of the outcomes. A large number of scenarios were simulated, and the results were summarized on multidimensional graphs whose iso-surfaces could be interpreted as crisis surfaces, where either the arms race among the three nations became unbounded or the different possible alliances broke down.
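
A minimal sketch of such a parameter scan, assuming a symmetric three-nation Richardson model with mutual reaction coefficient k, fatigue terms m, and grievance terms g (our own parameterization, not the pilot project's code):

```python
def richardson3(x0, a, m, g, dt=0.01, steps=10_000, blowup=1e6):
    """Euler-integrate a three-nation Richardson model; return True if
    the arms race stays bounded over the horizon, False if it diverges
    (a point on the 'crisis surface')."""
    x = list(x0)
    for _ in range(steps):
        dx = [sum(a[i][j] * x[j] for j in range(3) if j != i)
              - m[i] * x[i] + g[i]
              for i in range(3)]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        if max(abs(xi) for xi in x) > blowup:
            return False
    return True

# Scan the mutual reaction coefficient k; the boundary between bounded
# and runaway regions traces the crisis surface in parameter space.
m, g = [1.0, 1.0, 1.0], [0.1, 0.1, 0.1]
for k in (0.3, 0.6, 0.9):
    a = [[0.0, k, k], [k, 0.0, k], [k, k, 0.0]]
    stable = richardson3([0.0, 0.0, 0.0], a, m, g)
    print(f"k = {k}: {'bounded' if stable else 'runaway arms race'}")
```

In this symmetric case the transition occurs where the mutual reactions overtake the fatigue terms (here at k = 0.5), so the scan prints a bounded regime for small k and a runaway regime for large k.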

Another advantage that tools from nonlinear dynamics and complexity science can offer the simulation is the possibility of retaining much information that is lost with typical statistical methods. For example, in standard polling approaches such as the one used for the Serbo-Croat poll in Figure 7, much of the information gathered in the interviews is lost—namely, all the correlations between the answers given to different questions by the same person. Instead of aggregated summary statistics, one could have a mapping of a sample of the population onto a highly structured (possibly low-dimensional) manifold embedded in a high-dimensional feature space. Exploration of these manifolds (for example, with genetic algorithms) can be used to design very specific, integrated pathways to sustainable solutions of crises. Even with very modest quantities of data of limited accuracy, such an approach could well be more successful than traditional crisis management models.
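
The idea can be sketched with synthetic data (all numbers below are invented; the point is only that per-respondent answer vectors retain cross-question correlations that marginal statistics discard):

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_questions = 500, 12
# Hidden two-dimensional "attitude" coordinates generate the answers,
# so the sample lies near a 2-D manifold in 12-D feature space.
latent = rng.normal(size=(n_respondents, 2))
loadings = rng.normal(size=(2, n_questions))
answers = latent @ loadings + 0.1 * rng.normal(size=(n_respondents, n_questions))

centered = answers - answers.mean(axis=0)
# Singular values show how many dimensions carry the correlations;
# here the first two components dominate, recovering the hidden structure.
_, s, _ = np.linalg.svd(centered, full_matrices=False)
print(np.round(s[:4] ** 2 / np.sum(s ** 2), 3))  # variance explained
```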

5

Implications of a Global Brain

Our purpose here is to present the compelling analogy between the fast-developing information and communications network and the biological brain, and to stimulate discussion of the possibility for the emergence and utilization of a Global Brain, so that we may be better prepared to harness its capabilities and protect against its possible misuse. The information we present here on the human brain, the Internet, and the analogy between the two is not complete, nor are the predictions on its characteristics, its emergence, or its potential for modelling. This work is meant as a starting point or framework for discussion. In the spirit of discussion, we close with some questions regarding the implications of a Global Brain.

  1. If we follow Russell’s arguments and assume that a Global Brain might be desirable in terms of enhancing our planet’s ability to sustain itself, would we want to hasten the emergence of a Global Brain, and if so, how could it be done?
  2. Would watching the development of the Global Brain help us understand better how our own brains developed?
  3. Would some form of global self-awareness or consciousness emerge, just as self-awareness and consciousness emerge in biological systems whose brains have reached a certain level of complexity?
  4. In terms of crisis management, could we design an alternative to economic or military incentives if we pursued the brain analogy further? That is, what is the “incentive” or “driving force” for the various parts of the brain to work together and thus increase the chances of survival of the whole organism?
  5. Would we be able to sense the Global Brain? Since individual neurons are considered not to have an awareness of the perceptions and thoughts which they help create, would we, as the individual information-processing units in this Global Brain, be capable of sensing the higher forms of information processing that might emerge?

With regard to this last question, we mentioned earlier that coherent oscillations have been found in the brain during perception, and Russell wrote that synchronicity is what is observed at a lower level of organization when there is a higher level of organization:

Returning briefly to the case of a cell in your body, let us consider how it might, if it were aware, likewise experience a form of synchronicity. It might notice that the blood always seems to supply the oxygen and nutrition it needs when it needs them, simultaneously removing waste products as they build up. Such a cell might well marvel at the incredible chain of coincidences that keep it alive and provide spontaneous support to most of its desires. Everything would probably seem to work out just right, its prayers continually answered. It might even suppose the existence of some individual answering agency or god.

We, however, looking at the situation from the perspective of the whole organism, know that what the cell perceives as a chain of “curious coincidences” could be ascribed, in fact, to the high synergy that comes from the whole body functioning as a single living system. The cell may not be directly “aware” of the body as a living being, but it would nevertheless benefit from the high synergy that results from this wholeness. Furthermore, the healthier the body is, the more supportive coincidences the cell would notice. (pp. 212–216)

So, in this light, we would predict that synchronous activity in the Internet would be a sign of emergence of a Global Brain. What “synchronous activity” means in this context is not clear. Perhaps it would be some kind of synchronized activity in Internet usage curves such as the one in Figure 5B, analogous to the coherent oscillations in brain EEG (Fig. 5A). Or, perhaps it is synchrony in terms of coincidences in ideas or events, in the political, mathematical, social, scientific, and/or economic realms16. If we follow this “cell-in-a-body” analogy, it appears that the only way that individual Internet user “cells” would know that the higher level of organization represented by the Global Brain exists is through improved efficiency or synergy in the cells' inputs and outputs.
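
One hedged, operational reading of this prediction is sketched below: given activity curves from two parts of the network (synthetic series here), a large peak in their normalized cross-correlation would indicate synchronized activity:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(1000)
common = np.sin(2 * np.pi * t / 50)        # shared rhythm driving both curves
a = common + 0.2 * rng.normal(size=t.size)
b = common + 0.2 * rng.normal(size=t.size)

# Standardize, then scan all relative lags for the correlation peak.
a = (a - a.mean()) / a.std()
b = (b - b.mean()) / b.std()
corr = np.correlate(a, b, mode="full") / t.size
print(f"peak cross-correlation: {corr.max():.2f}")  # near 1 for synchronized curves
```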

Though we do not yet understand fully (if at all) the implications of a Global Brain, we can begin now to utilize its existing capabilities. We think that the integration of nonlinear science of complex, adaptive systems with modern computers and information networks will create a new era of modeling in many different areas, especially in the domain of global change and crisis management. We think that these methods will provide powerful tools and thereby also the potential for misuse that could affect us all. Thus we wish to encourage early discussion of these possible developments in as wide a community as possible.

Acknowledgements

We would like to acknowledge helpful comments from T. Anastasio, H. Arrow, and P. Diehl. Cathleen Barczys acknowledges Kristina Lerman for her critical review of the manuscript and for discussions related to the Balkan crisis.

Footnotes

  1. In the context of science fiction, this image has also been used, for example, in Gibson’s Neuromancer.
  2. Russell’s figure refers to the number of atoms in the non-water molecules of a simple cell, which include macromolecules such as proteins, DNA, and RNA, and smaller molecules such as carbohydrates, amino acids, lipids, precursors, and ions. In addition to these molecules there are water molecules, which number 4 × 10¹⁰ in the E. Coli cell, for example.
  3. The current (beginning of 1994) growth rate of the number of Internet users is much higher than the global population growth rate—90% per year for the Internet versus 1.7% for the population. This difference in rates is so great that a simple extrapolation of these growth rates implies that every human would also be an Internet user by the year 2001.
  4. The frequency range at which this synchronization occurs varies across different species and cortical regions, but is typically within the range of 20–100 Hz. The associated time scale, which can be loosely related to the round trip time of a signal from one neuron to another and back, is about 25 ms for a 40 Hz oscillation, for example.
  5. By real time we understand, in this context, reliable information exchange on the order of minutes to hours, as is the case in direct e-mail exchanges. Discussion threads emerge and continue at that rate for a time span of the order of days or weeks, corresponding to roughly ten to a hundred exchanged messages. Much shorter time scales are present in teleconferencing; much longer time scales exist in the context of ongoing collaborations.
  6. Many of these connections are quite informal, like discussions in the hallway or during a conference. We believe that the Internet helps to provide greater equity for Third World countries, whose researchers can participate in discussions on USENET or present their papers at conferences remotely. For example, the transparencies of a presentation can be prepared in hyper-media format, transferred to a computer at the conference site, and presented by a local operator following the instructions of the presenter (see, for example, Mayer-Kress et al., The Internet as Tool for Addressing Global Problems).
  7. The results from ping do not reflect raw “transfer rates” but depend on many factors that are often determined by local net administration. For the user, however, ping is an important measure, since it reflects the “sluggishness” of the response from the keyboard, say; and this responsiveness is essential for interactive, remote computing.
  8. A major difference between the connections within the brain and those in the Internet is probably the speed at which a signal can be switched between nodes: synapses are relatively slow, hence the estimate of 2–4 synapses between arbitrary neurons. On the Internet, our experiment took about 12 “hops” between Japan and Australia, 19 hops between Japan and the US, and 25 hops between Japan and Germany.
  9. Since only the relative size of the basins of attraction is changed, there is still a finite probability that the pattern will be restored—for example, in specific situations in which associations with this pattern are activated. Reports come to mind in which, in old age or in extreme stress situations, events are recalled that seemed to have been long forgotten.
  10. Within the US one can observe a transition that might fall into this category: the introduction of NCSA-Mosaic has triggered a significant jump in the growth of NSFNET backbone traffic.
  11. Although we focus in this paper on computer/human intelligence, it is perfectly feasible that at some point one could also use sophisticated animal/computer interfaces, for example for global environmental change monitoring and crisis anticipation. For instance, at SeaWorld San Diego we have discussed a computer interface based on video-tracking which would allow killer whales to operate in a point-and-click environment such as NCSA-Mosaic.
  12. NCSA-Mosaic is available for MS Windows, Mac, and X-Windows from ftp.ncsa.uiuc.edu. A similar WWW interface for NeXTStep is OmniWeb, available at http://www.omnigroup.com/Software/OmniWeb.
  13. We do not speak of “optimization” in this context since, due to the interactions between all the elements in the system, the shape of the “fitness landscape” will constantly evolve in time, so that optimization strategies cannot be applied. Strategies that do not incorporate feedback about this changing fitness landscape often lead to decisions that actually move away from the target state. This kind of feedback and the capacity for rapid adaptation to new situations seem to be general properties of complex systems—such as ecological systems, for example.
  14. See, for example, the assassination of Bosnian Deputy Prime Minister Hakija Turajlic while he was under UNPROFOR protection (UPI, Jan. 8, 1993).
  15. This poll was taken during the earlier stages of the crisis, when the Croats were attempting to separate themselves from Serbian influence and before the Bosnians made their attempt; thus, the poll covered only Serbian and Croatian opinions.
  16. Specific topics or problems (threads) appear on many bulletin boards, create a strong “synchronous” burst of activity with typical time scales of the order of days or weeks, and then decay and disappear. An example where such a thread led to coherent global activity over a period of eight months is the cracking of “RSA-129”, a number-theoretic problem in cryptography that had been thought to be practically unsolvable. It was recently solved on the Internet by a joint, distributed effort of MIT students and some 600 volunteers from more than 20 countries.

References

  • R. Abraham, A. Keith, M. Koebbe, G. Mayer-Kress: Double Cusp Models, Public Opinion, and International Security. International Journal for Bifurcations and Chaos, 1(2) (1991) 417–430.

  • D. Atkins: RSA-129. sci.math:57148 sci.crypt:22643 alt.security:14258 alt.security.pgp:11677 alt.security.ripem:650 comp.security.misc:9410 alt.privacy:14463. 27 April 1994 04:06:25 GMT.

  • R. M. Axelrod: An Evolutionary Approach to Norms, in The American Political Science Review, Vol. 80, no. 4 (December 1986).

  • S. Bankes: private communication to the 2050-MailingList, Tuesday, 3 May 1994, 12:10:27 PDT.

  • C. Barczys: Spatio-Temporal Dynamics of Human EEG During Somatosensory Perception, Ph.D. Dissertation, University of California at Berkeley, 1993.

  • J. T. Bonner: The Evolution of Complexity by Means of Natural Selection. Princeton University Press, Princeton, NJ, 1988.

  • V. Braitenberg, A. Schuez: Anatomy of the Cortex: Statistics and Geometry. Springer-Verlag, Berlin and New York, 1991.

  • C. Braun, G. Mayer-Kress, W. Miltner: Wavelet based measures of short-term coherence in 40 Hz oscillations of human EEG during associative conditioning, to be published.

  • D. Campbell, G. Mayer-Kress: Chaos and Politics: Simulations of Nonlinear Dynamical Models of International Arms Races, Proceedings of the United Nations University Symposium “The Impact of Chaos on Science and Society”, Tokyo, Japan, 15–17 April 1991.

  • K. Claffy, H.-W. Braun, and G. Polyzos, January 1993: Traffic Characteristics of the T1 NSFNET Backbone, SDSC Report GA-A21019, UCSD Report CS92-237, Proceedings of INFOCOM'93.

  • Paul A. Fishwick and B. P. Zeigler: “A Multimodel Methodology for Qualitative Model Engineering.” ACM Transactions on Modeling and Computer Simulation. Volume 2, Issue 1, 1992, pp. 52–81.

  • S. Forrest, G. Mayer-Kress: Using Genetic Algorithms in Nonlinear Dynamical Systems and International Security Models, in: The Genetic Algorithms Handbook, L. Davis, (ed.), Van Nostrand Reinhold, New York 1991.

  • W. J. Freeman: A linear distributed feedback model for prepyriform cortex. Experimental Neurology 10: 525–547 (1964).

    Spatial properties of an EEG event in the olfactory bulb and cortex. Electroencephalography and Clinical Neurophysiology, 44, 586–605 (1978).

    The Physiology of Perception. Scientific American, 264, No. 2, 78–85 (1991).

  • W. Gibson: Neuromancer. New York, Ace Books (1984).

  • R. Grossarth-Maticek: Postkommunistische Krisen und psychokulturelle Lösungsmodelle am Beispiel des serbo-kroatischen Konflikts [Post-communist crises and psycho-cultural solution models, using the example of the Serbo-Croatian conflict]. Preprint, Institut für präventive Medizin, 4/1992.

  • S. Grossmann, G. Mayer-Kress: “The Role of Chaotic Dynamics in Arms Race Models”. Nature 337, 701–704 (1989).

  • H. Haken: Synergetics: An Introduction. Springer, Berlin (1977).

    Advanced Synergetics. Springer, Berlin (1983).

  • K. Hasselmann: “Wieviel ist der Wald wert?” [How much is the forest worth?] Spiegel interview with K. Hasselmann. Der Spiegel 41/1992, 268.

  • A. Hübler, D. Pines: “Prediction and Adaptation in an Evolving Chaotic Environment.” Technical report CCSR-93-2, to be published in Complexity: From Metaphor to Reality. Proc. of a Conf. on Integrative Themes in Complex Adaptive Systems, eds. G. Cowan, D. Pines, D. Meltzer. Addison-Wesley, 1993.

    “Modeling and Control of Complex Systems: Paradigms and Applications” in Modeling Complex Phenomena. L. Lam (ed.), Springer, New York (1992).

  • W. J. Kaufmann III, L. L. Smarr: “Supercomputing and the Transformation of Science.” Scientific American Library, New York (1993).

  • H. Kuhlenbeck: “Invertebrates and Origin of Vertebrates.” Vol. 2 of The Central Nervous System of Vertebrates, S. Karger AG, Basel, Switzerland. Distributed by Academic Press Inc., NY, NY (1967).

  • F. W. Lanchester, in: The World of Mathematics. J. R. Newman, Ed. vol 4, Simon and Schuster, New York (1956), 2138–2157.

  • C. G. Langton, C. Taylor, J. D. Farmer, S. Rasmussen, eds. Artificial Life II. Addison Wesley (1992).

  • G. Mayer-Kress: Nonlinear Dynamics and Chaos in Arms Race Models. Proc. Third Woodward Conference: “Modeling Complex Systems,” Lui Lam (ed.), San Jose, April 12–13, 1991.

    EarthStation, in: “Out of Control,” Ars Electronica 1991, K. Gerbel (ed.), Landesverlag Linz, Linz (1991).

    Chaos and Crises in International Systems, Technical Report CCSR-92-15, to appear in proceedings of SHAPE Technology Symposium on Crisis Management, Mons, Belgium, March 19–20, 1992.

    Global Information Systems and Nonlinear Methods in Crisis Management, in: 1992 Lectures in Complex Systems, L. Nadel and D. L. Stein (eds.), Santa Fe Institute Studies in the Sciences of Complexity, Lecture Volume V, Addison-Wesley Publishing Company, Reading, MA, 531–552.

    with P. Diehl, H. Arrow: The United Nations and Conflict Management in a Complex World. http://jaguar.ccsr.uiuc.edu/People/gmk/Papers/UNCMCW/UNCMCW-5394.html.

    with B. Bender, J. Bazik: The Internet as Tool for Addressing Global Problems. http://jaguar.ccsr.uiuc.edu/People/gmk/Papers/HungerConf/HungerConf.html. Presentation given via the Internet and telephone at the Hunger Research Briefing and Exchange, Brown University, April 1994.

  • Matrix Information and Directory Services (MIDS), mids@tic.com, Austin TX. The figure is available as 33_in_year_2001.gif from gopher://ietf.cnri.reston.va.us/11/isoc.and.ietf/charts/metrics-gifs.

  • D. L. Meadows, et al. Dynamics of Growth in a Finite World. Wright-Allen Press, 1974. Now distributed by Productivity Press, Cambridge, MA.

    with D. H. Meadows and J. Randers: Beyond the Limits. Chelsea Green Publishing, Post Mills, VT (1992).

  • S. Milgram: “The Small World Problem.” Psychology Today 22, 61–67 (1967).

  • J. Nicolis, G. Mayer-Kress, G. Haubs: Non-Uniform Chaotic Dynamics with Implications to Information Processing. Z. Naturforsch. 38a, 1157–1169 (1983).

  • E. Niedermeyer: “Maturation of the EEG: Development of Waking and Sleep Patterns.” In Electroencephalography: Basic Principles, Clinical Applications and Related Fields, E. Niedermeyer and F. Lopes da Silva (eds.), Urban & Schwarzenberg, Baltimore, MD, 1987, 133–158.

  • E. Ott, C. Grebogi, J. Yorke: Controlling Chaos. Physical Review Letters 64(11), 1196–1199 (1990).

  • S. W. Ranson and S. L. Clark: The Anatomy of the Nervous System: Its Development and Function. W. B. Saunders, 1959.

  • E. M. Reid: Electropolis: Communication and Community on Internet Relay Chat. University of Melbourne, Department of History, 1991 (igc.apc.org:pub/ELECTROPOLIS/ELECTROPOLIS).

  • L. F. Richardson: Arms and Insecurity. Boxwood, Pittsburgh, 1960.

  • P. Russell: The Global Brain: Speculations on the Evolutionary Leap to Planetary Consciousness. Houghton Mifflin, Boston, MA (1983).

  • Santa Fe Institute Studies in the Sciences of Complexity. Includes a series of volumes containing lectures, lecture notes, references, and proceedings. Since complexity science is a newly emerging field, and since research in the area is broadly dispersed among many disciplines, this series is an excellent source for publications on complex systems.

  • M. F. Schwartz, A. Emtage, B. Kahle, B. C. Neuman: “A Comparison of Internet Resource Discovery Approaches.” Computing Systems 5.4 (1992) (requests: brewster@Think.COM).

  • M. F. Schwartz, J. S. Quarterman: A Measurement Study of Changes in Service-Level Reachability in the Global Internet. Technical Report CU-CS-649-93, May 1993.

  • C. Skarda and W. J. Freeman: “How brains make chaos in order to make sense of the world.” Behavioral and Brain Sciences 10 No. 2, 161–173 (1987).

  • W. Singer: “Synchronization of cortical activity and its putative role in information processing and learning.” Annual Review of Physiology 55, 349–374 (1993).

  • L. Smarr: personal communication. See also http://www.ccsr.uiuc.edu/People/gmk/Projects/WebStats/webstats.html.

  • M. Stonebraker: An Overview of the Sequoia 2000 Project. Sequoia 2000 Technical Report 91/5. (requests: claire@postgres.berkeley.edu)

  • G. A. Thibodeau and K. T. Patton: Anatomy and Physiology. Mosby, St. Louis, MO (1993).

  • J. L. van Hemmen, L. S. Ioffe, R. Kühn, M. Vaas: “Hebbian unlearning of spatio-temporal patterns.” Physica A 163, 386–392 (1990).

  • M. M. Waldrop: Complexity: The Emerging Science at the Edge of Order and Chaos. Simon & Schuster, NY (1992).

  • J. L. Wilson: The SimCity Planning Commission Handbook. Berkeley: Osborne McGraw-Hill (1990).

    The SimEarth Bible. Berkeley: Osborne McGraw-Hill (1991).

Gottfried Mayer-Kress
