Table of Contents

1 Introduction
2 Thermodynamics and Dissipative Structures
3 An Example of a Dissipative Structure: The Trimolecular Model
4 Evolution—Structural Stability
5 Biological Applications: Aggregation
6 Social Insects
7 The Formation of Dissipative Structures—the Stochastic Description
8 Dissipative Structures and Social System
9 Conclusions
Acknowledgments
Appendix: Population Dynamics and Evolution
Summary
In most of the phenomena studied in classical physics, fluctuations play only a minor role. This is the case in the whole domain of classical equilibrium thermodynamics based on Boltzmann’s ordering principle. On the other hand, the study of nonlinear systems under conditions far from equilibrium leads to new situations in which fluctuations play a central role. It is the fluctuations that can force the system to leave a given macroscopic state and lead it on to a new state which has a different spatiotemporal structure.
The study of dissipative structures illustrates precisely this type of behavior. Contrary to equilibrium structures, dissipative structures occur at a sufficient distance from thermodynamic equilibrium when the system is described by equations containing an appropriate feedback.
The thermodynamic theory of dissipative structures will be briefly sketched. Recent results show the importance of these structures in numerous problems of chemistry and biochemistry. Dissipative structures lead to a whole spectrum of characteristic dimensions linked to chemical reactions and transport phenomena. Furthermore, examples show that the formation of these structures can be accompanied by symmetry breaking and the appearance of new forms and shapes. Examples will also be presented from the domains of biological aggregation and animal societies.
Dissipative structures can be considered as giant fluctuations; therefore, their evolution over time contains an essential stochastic element. A stochastic equation introduced recently by Nicolis and the author permits the discussion of both the deterministic and the stochastic aspects of these phenomena. In particular, such equations permit the study of the nucleation of a new dissipative structure. They also make possible a discussion of the relation between the three levels of description represented by the scheme
Determinism and fluctuations play a complementary role in our description. It is interesting to apply the formalism to a description of the structure of societies in which the dialectic between “mass and minority” (F. Perroux, 1964) plays an essential role.
1
Introduction
Both physics and chemistry are at present passing through a period of rapid growth resulting in particular from the integration of the concept of structure into the framework of theoretical physics. In addition, a more precise interpretation of the notions of irreversibility and process has now been given. As a result, a renewed dialogue between researchers in the physical sciences and those interested in the human sciences has become possible.
In the context of modern science, the possibility of such a dialogue was raised right from the first formulation of modern science by Newton and Galileo. It is fascinating to read the chapter “Generalization of the Newtonian Paradigm for Natural Sciences and Human Sciences” in the outstanding monograph by Gusdorf (1971). But this dialogue encountered insurmountable difficulties. Too big a gap separates rational mechanics and the study of simple motions from the specific problems posed by biology or history. This gap is particularly marked in the concept of time. Rational mechanics knows only reversible time, whereas the direction of time plays a fundamental role in the biological and human sciences. It is true that in the nineteenth century, the problem of the direction of time was incorporated in physics through the second law of thermodynamics. Still, the contrast between the idea of evolution in physics and that in biology or sociology is striking. In physics, the increase of entropy expressed by the second law of thermodynamics shows a tendency toward a progressive “disorganization” of the system. On the other hand, biological or social evolution is accompanied by a progressive structuration such as that introduced by the division of labor in the history of human societies.
Despite such difficulties, numerous references to physics may be found in the works of specialists in the human sciences. One of the chapters in the discussions between Lévi-Strauss and G. Charbonnier (Charbonnier, 1969) is entitled “Clocks and Steam Engines,” and the treatise Le Système Social by Henri Janne (1963) starts from the concept of social “force.” Thus, the vocabulary of classical physics has spread to the human sciences, at least in a metaphorical sense.
Auguste Comte (see Aron, 1967, p. 82) distinguished between “analytic” and “synthetic” sciences. To the latter belong biology and sociology. In biology, it is impossible to explain an organ, or a function, without referring to the whole living being, and this remark, suitably transposed, applies equally to sociology. The consideration of the whole is essential in both cases. Recently, the study of dissipative structures (see, in particular, Glansdorff and Prigogine, 1971; Prigogine, 1972; Prigogine et al., 1972) has brought the study of such “totalities” within the framework of an extended thermodynamics. We shall return to this concept of dissipative structures in Section 2.
Let us simply recall that classical thermodynamics permits the interpretation of equilibrium structures, those that appear, for example, in an isolated system after a sufficiently long time. A crystal is a typical example of such an equilibrium structure. The formation of such structures is dominated by Boltzmann’s ordering principle (which gives the population of the different dynamic states in a system at thermodynamic equilibrium). The situation changes radically when instead of a closed system one considers an open system exchanging matter and energy with the outside environment. In this case, provided the reservoirs of energy and matter are sufficiently large to remain unchanged, the system can tend to a constant regime corresponding to a nonequilibrium stationary state. While an isolated system at equilibrium is associated with “equilibrium” structures such as the above-mentioned crystal, an open system “out of equilibrium” may be associated with dissipative structures (when specific constraints, to be discussed later, are satisfied). Then, Boltzmann’s ordering principle is no longer applicable. Dissipative structures are associated with an entirely different ordering principle, which may be called order through fluctuation. In fact, such structures arise from the amplification of fluctuations resulting from an instability of the “thermodynamic branch.”
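Boltzmann’s ordering principle can be made concrete with a small numerical sketch (added here for illustration; the energy levels and temperatures are arbitrary): at equilibrium the population of each dynamic state is proportional to \(\exp(-E/kT)\), so low-energy configurations dominate at low temperature, while all states become nearly equally populated at high temperature.

```python
import math

def boltzmann_populations(energies, kT):
    """Relative equilibrium populations of dynamic states,
    p_i proportional to exp(-E_i / kT) (Boltzmann's ordering principle)."""
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)               # partition function (normalization)
    return [w / z for w in weights]

# Two states one energy unit apart: at low temperature the low-energy
# ("ordered") state dominates; at high temperature both states are
# almost equally occupied ("disorder").
cold = boltzmann_populations([0.0, 1.0], kT=0.1)
hot = boltzmann_populations([0.0, 1.0], kT=10.0)
```

This is exactly the sense in which equilibrium order is “paid for” by low temperature: the principle selects configurations of minimal energy, and nothing more.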
As we shall see in Section 2, dissipative structures present precisely the global aspect, the aspect of totality, which Comte ascribed to the object of the synthetic sciences.
In order to take form, a dissipative structure requires a nonlinear mechanism to function. It is this mechanism that is responsible for the amplification of the fluctuations. Dissipative structures thus form a bridge between function and structure. One may even consider such sociologists as Comte, Durkheim, or Spencer as forerunners of the concept of dissipative structures. In his treatise on the division of social labor, Durkheim (1973, p. 244) writes, for example:
The division of labor progresses the further the more individuals there are who are in sufficiently close contact with each other to be able to act and react upon each other. If we agree to call this closeness and the active transactions resulting from it, dynamical or moral density, then we may say that the progress made by the division of labor is due directly to the moral or dynamic density of society.
The distinction between the ordering principle of Boltzmann and that of order through fluctuation implies a fundamental difference in the role of the fluctuations. In the domain in which Boltzmann’s principle is applicable, the fluctuations play a subordinate role. Consider, for example, a volume \(V\) within which we have \(N\) particles. Let us study a small volume \(\Delta V\) within \(V\) (see Figure 1). We expect that the average number of particles \(\bar{n}\) in \(\Delta V\) will be $$ \frac{\bar{n}}{N} = \frac{\Delta V}{V} \tag{1} $$ Of course there will be fluctuations around the mean result (1), but in many situations they can be neglected (they are of the order of \(\bar{n}^{1/2}\)). On the contrary, in the formation of dissipative structures, it is the fluctuations that drive the system to a new average state. In other words, instead of being simply a corrective element, the fluctuations become the essential element in the dynamics of such systems. Here again, the analogy with characteristic situations in biology and sociology is striking.
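The \(\bar{n}^{1/2}\) scaling of the fluctuations can be checked by a short Monte Carlo sketch (an illustration added here; the particle numbers are arbitrary): \(N\) particles are placed uniformly in \(V\), and we count how many fall in a sub-volume with \(\Delta V / V = 0.01\).

```python
import random

random.seed(42)
N, trials, frac = 10_000, 500, 0.01   # frac = Delta_V / V

# repeat the experiment many times and record the occupation of Delta_V
counts = [sum(1 for _ in range(N) if random.random() < frac)
          for _ in range(trials)]

n_bar = sum(counts) / trials                         # mean, ~ N * frac = 100
var = sum((c - n_bar) ** 2 for c in counts) / trials # variance, ~ n_bar
```

With \(\bar{n} = 100\), the relative fluctuation \(\bar{n}^{1/2}/\bar{n}\) is about 10%, and it shrinks as \(\bar{n}\) grows; this is why fluctuations are negligible for macroscopic systems governed by Boltzmann’s principle.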
Many authors have indeed stressed the double role of “chance and necessity” in the human sciences and it is precisely these two elements which play a role in the phenomena dominated by the principle of order through fluctuation.
In the present chapter, we shall first illustrate the concepts of dissipative structure and order through fluctuation by presenting some simple examples borrowed from physical chemistry and from the study of social insects. Subsequently, we shall consider the statistical description of such systems and shall briefly recall the nonlinear stochastic equations introduced recently by Nicolis and the author (see Nicolis and Prigogine, 1971; Nicolis et al., 1974; Prigogine et al., 1974). A more detailed discussion of self-organization in nonequilibrium systems may be found in the monograph by Nicolis and Prigogine (1976).
2
Thermodynamics and Dissipative Structures
If we consider first an isolated system, the second law of thermodynamics tells us that the entropy production within the system, due to irreversible phenomena, is positive (Clausius, 1857): $$ dS = d_iS > 0 \tag{2.1a} $$ if the system undergoes an irreversible transformation, and $$ dS = d_iS = 0 \tag{2.1b} $$ if the system is at equilibrium. When there are exchanges with the outside world, (2.1a, b) must be completed by a term of entropy flux \(d_eS\): $$ dS = d_eS + d_iS \tag{2.2} $$ Only \(d_iS\) has a well-defined sign. Identifying entropy with disorder (Boltzmann, 1872), we see that an isolated system can only evolve toward greater disorder. For an open system, however, the competition between \(d_eS\) and \(d_iS\) permits the system, subject to certain conditions which are made precise later, to adopt a new, structured form.
The entropy production can be expressed simply in terms of thermodynamic “forces” \(X_i\) and “rates” of irreversible phenomena \(J_i\) (Prigogine, 1967). For example, \(X_i\) may be temperature gradients, chemical “affinities,” or the like. The corresponding “rates” are then heat flux and chemical reaction rate. We have $$ \frac{d_iS}{dt} = \sum_i J_iX_i \tag{2.3} $$ At thermodynamic equilibrium one has simultaneously $$ J_i = 0, \qquad X_i = 0 \tag{2.4} $$ whereas around equilibrium, in the so-called linear domain of thermodynamics, we have a linear relation between fluxes and forces: $$ J_i = \sum_j L_{ij}X_j \tag{2.5} $$ Onsager (1931) has shown that the coefficients \(L_{ij}\) form a symmetric matrix (Onsager’s “reciprocity” relations).
In addition, the author has shown that in this linear domain the entropy production can only decrease in time and attains its minimum value at the stationary state compatible with the imposed conditions (Prigogine, 1967). The linear domain (2.5) thus extends the equilibrium behavior (2.4). To obtain a new structuration and a behavior which is radically different from that of equilibrium, we must go beyond the domain of linear thermodynamics.
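The theorem of minimum entropy production can be illustrated numerically (a sketch added here, with an arbitrarily chosen symmetric Onsager matrix \(L\)): with \(J = LX\), the entropy production \(\sigma = \sum_i J_i X_i = X \cdot L X\) is minimal, over the unconstrained force, exactly at the stationary state where the conjugate flux vanishes.

```python
import numpy as np

L = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # symmetric, positive-definite Onsager matrix

def entropy_production(x2, x1=1.0):
    X = np.array([x1, x2])          # x1 is the imposed force, x2 is free
    return float(X @ L @ X)         # sigma = sum_i J_i X_i with J = L X

# stationary state: the flux conjugate to the free force vanishes,
# J2 = L21 x1 + L22 x2 = 0  =>  x2 = -(L21 / L22) x1
x2_stationary = -L[1, 0] / L[1, 1]

# scanning x2 confirms that the minimum of sigma sits at that state
grid = np.linspace(-2.0, 2.0, 2001)
x2_min = min(grid, key=entropy_production)
```

The symmetry of \(L\) is essential here: it is what makes \(\sigma\) a potential whose minimum coincides with the stationary state.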
There exists in all systems a thermodynamic threshold above which the system can exhibit “self-organization” if it is the seat of appropriate irreversible phenomena. Let us consider more closely an example of a nonlinear chemical network which can lead to self-organization.
3
An Example of a Dissipative Structure:
The Trimolecular Model
The dissipative structures that can appear beyond a certain critical distance from equilibrium may take multiple forms: temporal organization (limit cycle), stationary inhomogeneous structure, spatiotemporal organization in the form of waves, and localized structures.
A model chemical reaction, somewhat unrealistic from the experimental point of view, but offering both a great wealth of behavior and ease of analysis, has been developed by the Brussels group (Nicolis and Auchmuty, 1974; Lefever, 1968; Herschkowitz-Kaufman, 1973): $$ \begin{align*} A &\rightleftharpoons X \tag{3.1a} \\ B + X &\rightleftharpoons Y + D \tag{3.1b} \\ 2X + Y &\rightleftharpoons 3X \tag{3.1c} \\ X &\rightleftharpoons E \tag{3.1d} \end{align*} $$ The trimolecular step (3.1c) can be considered as the result of several very rapid bimolecular steps. The concentrations of the products A, B, D, E are maintained constant and are the constraints that permit the system to be driven away from equilibrium.
If we set the kinetic constants equal to 1 for simplicity of writing, the equations for the evolution of the concentrations \(X\), \(Y\) in a one-dimensional medium are as follows: $$ \frac{\partial X}{\partial t} = A + X^2Y - (B+1) X + D_X \frac{\partial ^2 X}{\partial r^2} \tag{3.2a} $$ $$ \frac{\partial Y}{\partial t} = BX - X^2Y + D_Y \frac{\partial ^2 Y}{\partial r^2} \tag{3.2b} $$ where \(0 \leqslant r \leqslant \ell \).
These nonlinear partial differential equations translate the effect of the chemical reactions and of the diffusion. \(D_X\) and \(D_Y\) are the diffusion coefficients (Fick’s law).
The system admits a single homogeneous stationary state $$ X_{st} = A, \quad \quad Y_{st} = B/A \tag{3.3} $$ The boundary conditions are fixed at the stationary-state values: $$ X(0) = X(\ell) = A, \quad \quad Y(0) = Y(\ell) = B/A \tag{3.4} $$ The study of the system of linearized equations around this stationary state shows that the latter may become unstable if \(A\), \(B\), \(D_X\), and \(D_Y\) satisfy certain relations.
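The linear stability analysis just described can be sketched numerically for the homogeneous (diffusion-free) case, an illustration added here. With the rate constants set to 1, the Jacobian of (3.2) at the state (3.3) has trace \(B - 1 - A^2\) and determinant \(A^2\), so the homogeneous state loses stability when \(B > 1 + A^2\); the code merely checks this by computing the roots \(\omega\).

```python
import numpy as np

def jacobian(A, B):
    # Linearization of (3.2) with diffusion omitted, around X = A, Y = B/A:
    #   dX/dt = A + X^2 Y - (B+1) X,   dY/dt = B X - X^2 Y
    return np.array([[B - 1.0,  A * A],
                     [-B,      -A * A]])

def unstable(A, B):
    # the stationary state is unstable when some root omega has Re(omega) > 0
    return max(np.linalg.eigvals(jacobian(A, B)).real) > 0.0

# for A = 1 the instability threshold of the homogeneous system is B = 1 + A^2 = 2
```

Below the threshold every perturbation regresses; just above it, the perturbation grows and the system leaves the thermodynamic branch.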
Different types of behavior are possible (see also Chapter 4 by Holling):
In the case for which \(D_X\) and \(D_Y\) are large, the system may be considered as homogeneous. It may present an instability of the limit cycle type: the concentrations \(X\) and \(Y\) oscillate around the stationary state. The evolution of the system in phase space is shown in Figure 2, where \(S\) marks the point corresponding to the unstable stationary state (\(X_S\), \(Y_S\)). Whatever its initial state, the system tends, in the course of time, toward a single well-defined periodic solution whose characteristics (period and amplitude) are imposed by the differential equations. These oscillations are stable and very different from the type of periodic behavior presented by a Lotka-Volterra system. In the latter case, the oscillations are not stable with respect to the fluctuations (for a detailed analysis see Glansdorff and Prigogine, 1971). There exists an infinity of orbits surrounding the stationary state. Analysis shows that there is no “restoring” force for any particular orbit, and any small perturbation can change the system from one orbit to another. This behavior differs radically from that of a limit cycle, where only one stable periodic solution exists.
When \(D_X\) and \(D_Y\) are not large enough, the system acquires a spatiotemporal regime corresponding to the propagation of concentration waves or of stationary chemical waves (see Herschkowitz-Kaufman and Nicolis, 1972; Auchmuty and Nicolis, 1975, 1976; and Nicolis and Prigogine, 1974). One observes (Figure 3) the existence of a stable periodic regime which corresponds to the alternation of a rapid phase of chemical waves propagating toward the center and a slow phase tending to homogenize the system.1
The system can evolve toward a new stable stationary state in which \(X\) and \(Y\) are distributed inhomogeneously. Figure 4 shows an example of an inhomogeneous distribution of \(X\) as a function of space. Among the properties of the solutions beyond instability it is interesting to note the possibility of the spontaneous formation of polarity in a system under the effect of a perturbation (see Babloyantz and Hiernaux, 1975). This observation is particularly interesting for explaining the appearance of inhomogeneities during the development of an embryo from an initially undifferentiated, homogeneous egg. Figure 5 represents the inhomogeneous distribution of \(X\) in polar coordinates obtained after perturbing the stationary homogeneous state. According to the exact location of the initial perturbation, the gradient of \(X\) will be oriented in one sense or the other.
The existence of localized structures has been demonstrated with this model. When the initial product \(A\) diffuses through the system, the equation for \(A\) must be added to (3.2): $$ \frac{\partial A}{\partial t} = -A + D_A \frac{\partial ^2 A}{\partial r^2} \tag{3.5} $$ Figure 6 shows the stationary dissipative structure engendered under these conditions beyond a critical point of instability. One notices that this time the spatial organization is limited to a small region of the possible domain. Outside these frontiers, the concentration distributions correspond to the thermodynamic solution.
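The limit-cycle regime described above can be illustrated by integrating the homogeneous version of equations (3.2). This is an added sketch: the values \(A = 1\), \(B = 3\) are chosen arbitrarily above the instability threshold, and two very different initial states are seen to settle onto the same well-defined periodic orbit.

```python
def rk4_step(x, y, a, b, dt):
    # one Runge-Kutta step of dX/dt = a + X^2 Y - (b+1) X,  dY/dt = b X - X^2 Y
    def f(u, v):
        return a + u * u * v - (b + 1.0) * u, b * u - u * u * v
    k1 = f(x, y)
    k2 = f(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = f(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = f(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def asymptotic_amplitude(x0, y0, a=1.0, b=3.0, dt=0.01, steps=20_000):
    # integrate, discard the transient half, return the peak-to-peak swing of X
    x, y, xs = x0, y0, []
    for i in range(steps):
        x, y = rk4_step(x, y, a, b, dt)
        if i >= steps // 2:
            xs.append(x)
    return max(xs) - min(xs)

# b = 3 > 1 + a^2: very different initial conditions reach the same orbit
amp1 = asymptotic_amplitude(1.0, 1.0)
amp2 = asymptotic_amplitude(3.0, 0.1)
```

The coincidence of the two amplitudes is the numerical signature of the single attracting orbit, in contrast with the Lotka-Volterra case, where every initial condition keeps its own orbit.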
The dissipative structure can appear as a “totality” with its dimensions imposed by its own mechanism. Conversely, the dimensions of the system play an essential role in the formation of dissipative structures. A sufficiently small system will always be dominated by the boundary conditions (Nazarea, 1974). In order for the “nonlinearities” to be able to lead to a choice between various solutions, it is necessary to go beyond some critical spatial dimensions (Hanson, 1974). It is only then that the system acquires some autonomy with respect to the outside world.
Experimental observations have confirmed the existence of these remarkable properties associated with dissipative structures, be they biochemical, organic, or mineral systems (see Faraday Symposium 9, 1974, especially Boiteux and Hess, 1974).
4
Evolution—Structural Stability
Until now we have examined the problem of the organization of physicochemical systems without discussing the possibility of a change in the laws of chemical kinetics. Indeed we have supposed that the mechanism (3.1) remains the same on the thermodynamic branch or in the structured domain. We shall now examine the problem of stability when fluctuations may modify the kinetics, for example, through the formation of new substances. A similar situation exists in biological evolution, which describes the appearance of new species. Also, in the social domain, behavioral fluctuations can lead to a modification of social structures.
All these problems are related to the theory of structural stability in which the stability of the equations is studied with respect to small perturbations leading to “new” forms of kinetics. If a new substance appears after such a perturbation, its concentration can either diminish (and the system returns to its initial behavior) or be amplified, which then results in the appearance of a new macroscopic mechanism.
Let us briefly clarify these notions (see Prigogine et al., 1972) before discussing specific examples. First let us consider a sequence of polymerization reactions $$ \begin{align*} X_1 + X_1 &\rightarrow X_2 \\ X_2 + X_1 &\rightarrow X_3 \\ X_3 + X_1 &\rightarrow X_4 \\ \vdots \qquad&\qquad \vdots \end{align*} $$ To these steps we may add terms representing different types of catalytic effects. For example, the concentration of \(X_2\) may, in addition, be favored by the autocatalytic process $$ X_1 + X_1 + X_2 \rightarrow 2X_2 $$
The system contains \(N\) chemical substances \(\{X_i\}\), \(i = 1, 2,\ldots,N\). The evolution in time is given by $$ \dot{X}_i = F_i^e(X_1, X_2, \dots, X_N) + \bar{F}_i(X_1, X_2, \dots, X_N) \tag{4.1} $$ where \(F_i^e\) represents the flux of matter, and \(\bar{F}_i\) describes the reactions taking place inside the system. The \(F_i^e\) are maintained constant and uniform in the system.
The stability of equation (4.1) can be studied by linearizing around the stationary state. The concentrations of the different species are expanded around this state $$ X = X_S + \tilde{X} e^{\omega t} $$ where \(X_S\) is the value at the stationary state and \(\tilde{X}\) is the initial perturbation. Substituting \(X\) into the system of equations, we obtain a secular equation for \(\omega\) whose order is equal to the number of variables: $$ a_0 \omega ^N + a_1 \omega ^{N-1} + \ldots + a_N = 0 \tag{4.2} $$ The solution of this equation tells us how the system will evolve: if \(\omega\) has an imaginary part, the system will oscillate; if the real part of \(\omega\), \(\omega _r\), is positive, the perturbation will be amplified; and if \(\omega _r\) is negative, it will regress.
Let us suppose that the stationary solution of (4.1) is stable. Then all the solutions of the secular equation must have a negative real part. The fluctuations only modify the concentrations of the different species in the system.
However, another type of fluctuation can occur, one leading not to an alteration in the concentration of the existing \(X_i\), but to the formation of new substances \(Y_i\) \((i=1,\ldots,m)\). In place of (4.1), we have therefore a new system with \(m\) supplementary differential equations for the new substances \(Y_i\). The order of the secular determinant giving the \(\omega _i\) becomes \(N+m\). If we take \(m=1\), the secular equation of degree \(N + 1\) has the form of either $$ \epsilon \omega ^{N+1} + a_0(\epsilon)\omega ^N + \ldots + a_N(\epsilon) = 0 \tag{4.3} $$ or $$ a_0^\prime (\epsilon) \omega ^{N+1} + a_1^\prime (\epsilon) \omega ^N + \ldots + \epsilon a ^\prime _{N+1} (\epsilon) = 0 \tag{4.4} $$ with $$ \lim_{\epsilon \to 0} \ a ^\prime _j (\epsilon) = a_j; \qquad j = 0, 1, \ldots, N \tag{4.5} $$ For \(\epsilon \neq 0\), but small, the new system \(\{X_i\} + Y\) must have \(N\) roots almost equal to those of the equation without the new substance \(Y\) (Andronov et al., 1966). In particular, the real parts must have the same sign in both cases. The stability of the stationary state can only be compromised by the \((N+1)\)th root. Let us indicate the form of the supplementary root \(\omega_{N+1}\). From (4.3) it would follow that $$ \omega_{N+1} \simeq - a _0 (\epsilon) / \epsilon \tag{4.6} $$ while from (4.4) we would have $$ \omega_{N+1} \simeq - \epsilon a ^\prime _{N+1} / a ^\prime _N \tag{4.7} $$ If \( \epsilon > 0\), \(\omega_{N+1}\) could have a positive real part according to the signs of \(a _0\), \(a ^\prime _N\), \(a ^\prime _{N+1}\). We arrive, therefore, at the important conclusion that the addition of a new substance \(Y\) can destroy the previously existing stability of the system. The characteristic feature of the problem of structural stability is thus the study of stability with respect to the appearance of substances that were initially absent from the original scheme (4.1).
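The asymptotic roots (4.6) and (4.7) can be checked numerically. The sketch below (an added illustration; the unperturbed secular equation, a stable cubic with roots \(-1, -2, -3\), is chosen arbitrarily) shows that a perturbation of type (4.3) leaves the \(N\) original roots nearly unchanged while introducing one new root of order \(1/\epsilon\).

```python
import numpy as np

# A stable secular equation of degree N = 3 with roots -1, -2, -3:
#   omega^3 + 6 omega^2 + 11 omega + 6 = 0
base = np.poly([-1.0, -2.0, -3.0])        # coefficients a_0 ... a_N, a_0 = 1

eps = 1.0e-3
# perturbation of type (4.3): eps * omega^(N+1) + a_0 omega^N + ... + a_N = 0
perturbed = np.concatenate(([eps], base))
roots = np.roots(perturbed)

idx = int(np.argmax(np.abs(roots)))
fast_root = roots[idx]                    # the new root, close to -a_0 / eps
slow_roots = np.sort(np.delete(roots, idx).real)
```

Here the three slow roots stay near \(-3, -2, -1\), so the original stability is untouched, while the fast root near \(-1000\) introduces the short time scale discussed below in connection with (4.8); had its sign been positive, the new substance would destabilize the system.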
Let us now examine more closely the respective situations to which the secular equations (4.3) and (4.4) correspond. We can easily verify that the equation of evolution of \(Y\), corresponding to a secular equation of the form of (4.3), must be of the form $$ \epsilon \dot{Y} = G( \{X_i\}, Y, \epsilon) \tag{4.8} $$ Furthermore, the presence of \(Y\) modifies the evolution equations of the \(\{X_i\}\), which become, instead of (4.1), $$ \dot{X}_i = F_i^e + F_i ( \{X_i\}, Y, \epsilon) \tag{4.9} $$ with the obvious conditions $$ G( \{X_i\}, Y, 0) = 0 \tag{4.10} $$ $$ F_i( \{X_i\}, Y, 0) \equiv \bar{F}_i (\{X_i\}) \tag{4.11} $$ The equations for \(Y\) do not contain a flux term, because this substance is produced by the \(\{X_i\}\). On the contrary, the secular equation (4.4) imposes kinetic equations of the form $$ \dot{Y} = G_1 (\{X_i\}) + \epsilon G_2 ( \{X_i\}, Y, \epsilon ) \tag{4.12} $$ $$ \dot{X}_i = F_i^e + F_{1i} ( \{X_i\} ) + \epsilon F _{2i} ( \{X_i\}, Y, \epsilon ) \tag{4.13} $$ The essential difference between (4.8) and (4.12) is that in (4.8) a new, shorter time scale appears because of the fluctuation. Indeed, the rate of formation of \(Y\) in (4.8) is proportional to \(1/\epsilon\), the parameter \(\epsilon\) being considered small. The fluctuation leads to an “acceleration” of the polymerization process. When different mechanisms compete in the system, it is clear why it should be the structural instability of type (4.8) and not (4.12) that plays the essential role (Eigen, 1971). These concepts have been applied to the interpretation of prebiotic evolution. Errors of transcription in the chemical kinetics lead to the production of new substances possessing a greater catalytic activity and new functions. Different prebiotic models (Eigen, 1971; Prigogine et al., 1972; Babloyantz, 1972; Goldbeter, 1973) have made it possible to show that:
- A thermodynamic threshold exists for relatively simple polymerization mechanisms, leading from one stationary state, characterized by low polymer density, to another stationary state with high polymer density.
- Above this threshold, there is the wealth of possible behavior characteristic of dissipative structures (limit cycle, chemical waves, etc.).
- When the appearance of new chemical substances corresponds to a new function, the specific dissipation (e.g., per unit mass) of the system increases at the point of instability. The new regime corresponds to a higher level of interaction between system and environment. This behavior has been called evolutionary feedback (Babloyantz, 1972). Indeed, in increasing the dissipation, the class of fluctuations leading to instabilities is widened.
These considerations have also been applied to ecological evolution (see the Appendix to this chapter and Allen, 1975, 1976).
5
Biological Applications: Aggregation
The concepts which we have developed may now be illustrated by some biological examples chosen for their close relation to social phenomena. Let us begin with the phenomenon of aggregation in slime molds, a species of amoeba. The life cycle of these microorganisms is represented in Figure 7. When the food has been used up, the amoebas start aggregating by moving toward attractive centers that seem to form spontaneously. A mobile mass containing from \(10\) to \(10^5\) cells—the pseudoplasmodium—forms. This mass changes shape and becomes a body composed of two structures: a foot, or base, whose cells are rich in cellulose; and a round mass above, rich in polysaccharides. This is a true differentiation or structuration. The whole phenomenon develops over a period of between 20 and 50 hours.
Here, we shall study only the aggregation process, whose evolution is represented in Figure 8. It has been shown that it is the attraction exerted by a chemical substance, acrasin, on the amoebas which is at the origin of the aggregation: this type of behavior is called chemotaxis. It has been possible to establish that acrasin is chemically identical to cyclic AMP (cAMP), a substance that plays an important role in many biochemical processes. During the aggregation, there are no cell divisions, and the total number of amoebas is conserved. The density variations of the amoebas may be expressed in terms of two types of displacements: one term for random motion represented by Fick’s law, and another for the chemotactic displacement, related to the gradient of acrasin. Therefore, the evolution equation for the amoeba density \(a(r,t)\) may be written $$ \frac{\partial a}{\partial t} = - \nabla \cdot (D_1 \nabla \rho) + \nabla \cdot (D_2 \nabla a) \tag{5.1} $$ where \(\rho\) is the density of acrasin, \(D_1\) the chemotactic coefficient, and \(D_2\) the diffusion coefficient. It is generally supposed that \(D_1\) is a term of the form $$ D_1 = \frac{\delta \, a(r,t)}{\rho} \tag{5.2} $$ where \(\delta\) is some constant. In addition to chemotaxis we have an enzymatic reaction. The slime molds release into the medium an enzyme, acrasinase (\(\eta\)), which destroys acrasin. The corresponding reaction is $$ \rho + \eta \overset{k_1}{ \underset{k_{-1}}{\rightleftharpoons} } \; C \; \overset{k_2}{\rightarrow} \eta + \epsilon \tag{5.3} $$ where \(C\) is an intermediate complex and \(\epsilon\) the product of the degradation.
If we suppose that the total concentration of the enzyme (free and in the complex) is a constant, the problem of aggregation reduces to the pair of equations (Keller and Segel, 1970): $$ \frac{\partial a}{\partial t} = - \nabla \cdot ( D_1 \nabla \rho ) + \nabla \cdot ( D_2 \nabla a ) \tag{5.4} $$ $$ \frac{\partial \rho}{\partial t} = - k (\rho) \rho + af(\rho) + D_{\rho} \nabla ^2 \rho \tag{5.5} $$ where \(D_\rho\) is the diffusion coefficient of the acrasin; \(k( \rho ) = \eta _0 k_2 k / (1 + k\rho)\); \(k = k_1 / ( k_{-1} + k_2 )\); and \(af(\rho)\) is the amount of acrasin produced by the amoebas.
These equations always admit a stationary homogeneous solution corresponding to the thermodynamic branch. One can determine the conditions which would render this state unstable. Instability is favored if the cells undergo a sufficient augmentation of their sensitivity to a given gradient of acrasin, that is, if \(\delta\) increases sufficiently. An increase in the rate of production of the acrasin [the term \(af(\rho)\) of (5.5)] or of its concentration in the medium contributes to the destabilization of the homogeneous stationary state. A certain critical wavelength exists which essentially determines the spatial distribution of the aggregates. The principal predictions of the model have been found to be in agreement with experiment (Keller and Segel, 1970).2
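The destabilizing role of the chemotactic sensitivity \(\delta\) can be sketched by a linear stability computation on (5.4)-(5.5). The simplifications below are assumptions made for this illustration, not part of the original model: the decay \(k(\rho)\) and production \(f(\rho)\) are frozen at constants \(k_0\), \(f_0\), all parameter values are arbitrary, and a perturbation \(\sim e^{iqr + \omega t}\) of the homogeneous state \((a_0, \rho_0)\) is considered.

```python
import numpy as np

def growth_rate(q, delta, D2=1.0, D_rho=1.0, k0=1.0, f0=1.0, a0=1.0, rho0=1.0):
    """Largest Re(omega) for a perturbation ~ exp(i q r + omega t) of the
    homogeneous state of (5.4)-(5.5), with k(rho) ~ k0 and f(rho) ~ f0
    taken constant for simplicity; delta is the chemotactic constant of (5.2)."""
    M = np.array([
        [-D2 * q * q,  delta * (a0 / rho0) * q * q],  # amoeba equation (5.4)
        [f0,           -k0 - D_rho * q * q],          # acrasin equation (5.5)
    ])
    return max(np.linalg.eigvals(M).real)

# with these parameters the threshold at q = 1 is
# delta_c = D2 * (k0 + D_rho * q^2) * rho0 / (a0 * f0) = 2
```

Below \(\delta_c\) the homogeneous amoeba distribution damps every perturbation; above it, a band of wavenumbers grows, which is the origin of the critical aggregation wavelength mentioned in the text.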
We have here an excellent example of order by fluctuation. The fluctuations in the amoeba distribution destroy the uniform configuration and lead the system finally to a new inhomogeneous distribution. The analogy with the formation of towns starting from a uniform population is obvious.
6
Social Insects
The relations between individuals of a species vary considerably within the animal kingdom. Certain groups limit themselves to sexual behavior or to the struggle for the defense of their territory. Among insects, social organization attains a maximum complexity with the Hymenoptera and the termites: the survival of an individual is practically impossible outside the group (Wilson, 1971). The interactions between individuals are physical: sound, vision, touch, and the transmission of chemical signals. The regulation of the castes, nest construction, formation of paths, and the transport of material or of prey are different aspects of the order reigning in the colony. In the following, we discuss two striking examples from our point of view (see also Deneubourg and Nicolis, 1976).
Collective Movements of Ants
Ants, like insects in general, synthesize a great many chemical substances (pheromones), which regulate their behavior. The “path” pheromones mark on the ground the direction of food sources or of the nest. The substance laid down can diffuse into the neighboring space. A “tunnel” of pheromone is thus created, centered on the axis of displacement of the insect that laid it. Its fellows have a tendency to follow the same direction at the place where the density of pheromone molecules reaches a maximum. With certain groups, such as soldier ants, one may observe the collective movement of several thousand individuals. Macroscopic structures appear and vary in form from species to species (Rettenmeyer, 1963; Hangartner, 1967). (See Figure 9.)
We formulate now the equations of change. Let \(C(r,\theta)\) and \(H(r,\theta)\) be the insect and pheromone concentrations, respectively, expressed in polar coordinates \(r, \theta\). Suppose that the nest is at \(r=0\). The following hypotheses are used in writing the kinetic equation of the pheromone:
 The ants emit a quantity \(\alpha\) of pheromone per unit of time.
 \(H\) decomposes at a rate proportional to its density: \(\beta H\).
 Its propagation in the medium obeys Fick’s law where \(D_H\) is the diffusion coefficient.
In this way we obtain $$ \frac{\partial H}{\partial t} = \alpha C - \beta H + D_H \left[ \frac{1}{r} \frac{\partial}{\partial r} \left(r \frac{\partial H}{\partial r}\right) + \frac{1}{r^2} \frac{\partial^2H}{\partial\theta^2} \right] \tag{6.1a} $$ Similarly, the kinetic equation for \(C\) is $$ \frac{\partial C}{\partial t} = -\frac{1}{r}\frac{\partial}{\partial r}\left(r v C\right) + \frac{D_\theta}{r^2} \frac{\partial^2C}{\partial\theta^2} + \frac{\gamma}{r^2} \frac{\partial}{\partial\theta} \left( C \frac{\partial H}{\partial\theta} \right) \tag{6.1b} $$ The first term on the right-hand side of equation (6.1b) contains the radial component of the collective movement, with \(v\) the radial speed of the insects. As described earlier, the orientation in the pheromone gradient occurs perpendicularly to the principal direction. The angular dependence has two components: one corresponding to diffusion according to Fick’s law (second term on the right-hand side) and the other to the attraction of the pheromone. \(D_\theta\) is the angular diffusion coefficient of \(C\), and \(\gamma\) is the chemotactic coefficient.
There exists a critical value \(\gamma_c\) at which the stationary homogeneous solution becomes unstable. The system then evolves to an inhomogeneous stationary state depending on \(\theta\), with a critical wavelength linked to the different parameters of the model. Accordingly, different branching structures will appear, depending on these parameters, as is observed in different ant societies.
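The critical value \(\gamma_c\) can be estimated by linearizing (6.1) about the homogeneous state for angular perturbations \(e^{in\theta}\) at a fixed radius (the radial drift is frozen in this sketch). All numbers are illustrative assumptions, and \(\gamma > 0\) here denotes attraction toward pheromone maxima (sign conventions for the chemotactic term vary).

```python
import numpy as np

# Hypothetical parameters: pheromone emission alpha, decay beta,
# diffusion D_H; ant angular diffusion D_theta; radius r; uniform
# ant density C0. None of these values come from the text.
alpha, beta, D_H, D_theta, r, C0 = 1.0, 0.5, 0.2, 0.1, 1.0, 1.0

def gamma_c(n):
    """Critical chemotactic coefficient for angular mode n: below it
    the homogeneous ring of ants is stable, above it mode n grows.
    Obtained from det of the 2x2 linearized (H, C) system < 0."""
    kappa = n**2 / r**2
    return D_theta * (beta + D_H * kappa) / (alpha * C0)

for n in range(1, 6):
    print(f"mode n={n}: gamma_c = {gamma_c(n):.3f}")
```

The mode with the smallest \(\gamma_c\) (here \(n=1\), the longest angular wavelength) is the first to destabilize, fixing the critical wavelength of the branching pattern.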
Construction of a Termite Nest
Social insects are also characterized by coordinated behavior such as nest construction. The scale of such constructions far exceeds the size of an individual. It is interesting that for certain species the population of a colony attains several million individuals, and the total weight of a termite nest can sometimes attain several tons. In spite of this size, such a structure can result from a simple behavioral pattern of the insects. Gallais-Hamonno and Chauvin (1972) have simulated the construction of the dome of an ant nest.
In the first stage of the construction of a termite nest the insects raise a group of pillars and walls, which, if sufficiently close, are joined to form arches. Afterwards, the space between the pillars is blocked. Grassé (1959), in particular, has studied this first stage, and his observations have led him to formulate the theory of “stigmergy,” which expresses the “interaction” between insects and work. Summarizing the observations of Grassé, we may conclude that there exist two phases: (1) an uncoordinated phase; and (2) a coordinated phase. The uncoordinated phase is characterized by the random deposition of building material. Many small deposits are thus distributed on the surface available to the insects. When one of these deposits becomes sufficiently large, the second phase starts. Onto this aggregation of matter the termites now deposit still more material, but in a preferential way. A pillar or a wall grows according to the initial disposition of the deposit. If these units are isolated, construction stops; but if they are close to each other, an arch will result (see Figure 10).
Different types of stimulus, principally chemical but also mechanical, intervene. Let us consider how these facts can be expressed by using a simple model: the existence of a deposit at a specific point stimulates the insects to accumulate there more building material. This is an autocatalytic reaction. A model containing this autocatalytic factor, together with the random displacement of insects, is formally similar to the examples in Section 3.
The termites, in manipulating the construction material, give to it a particular scent, which diffuses in the atmosphere and attracts the insects toward the points of highest density, where deposits have already been made. Let \(C\) be the concentration of insects carrying material (\(P\)). The deposition of the material is favored by the quantity of \(P\) present, by means of \(H\) (scent), and is supposed to be proportional to \(C\). In addition, we suppose that \(P\) loses its attractive capacity at a rate proportional to its density. We then have for \(P\) $$ \frac{\partial P}{\partial t} = k_1C - k_2P \tag{6.2} $$ The equation for the evolution of the scent \(H\) contains a production term proportional to the density \(P\). Its decomposition rate is of the form \(k_4H\). Fick’s law represents its spatial propagation: $$ \frac{\partial H}{\partial t} = k_3P - k_4H + D_H \frac{\partial^2 H}{\partial r^2} \tag{6.3} $$ where \(D_H\) is the diffusion coefficient. We also have to consider a similar equation for \(C\). This includes a flow \(\Phi\) of insects carrying material and a loss term \(k_1C\) corresponding to the deposition of the material. In addition, we include diffusion and motion directed toward the sources of the scent.
This leads to the equation $$ \frac{\partial C}{\partial t} = \Phi  k_1 C + D \frac{\partial ^2 C}{\partial r ^2} + \gamma \frac{\partial}{\partial r} \left( C \frac{\partial H}{\partial r} \right) \tag{6.4} $$ where \(\gamma\) is the chemotactic coefficient.
The concept of the instability of the homogeneous state and the role of the fluctuations are here again well illustrated. The uncoordinated phase, to use Grassé’s terminology, corresponds to the homogeneous solution of equations (6.2), (6.3), and (6.4). From a slightly larger deposit somewhere, or in other words from a sufficiently large fluctuation, a pillar or a wall can appear. This stage corresponds to the amplification of the fluctuation. Order therefore appears through fluctuations. The regularity of the structure spreads and is characterized by a wavelength which is a function of the different parameters of the model.
This model leads to a certain number of conclusions of experimental interest. It becomes possible to perform numerical experiments reproducing Grassé’s observations for diverse conditions, such as different insect or scent densities. The role of the “fluctuations,” which are given here by the initial deposits, can also be studied by introducing decoys into the system (Grassé, 1959).
The mathematical aspects of works of art, particularly architecture, have been emphasized by many authors. It is remarkable to find this aspect already in constructions built by social insects.
7
The Formation of Dissipative Structures—the Stochastic Description
So far, we have discussed dissipative structures according to a deterministic description. This method is valid as long as the fluctuations, which is to say the deviations from average values, remain small. Equilibrium statistical mechanics tells us that the order of magnitude of the fluctuations in a system with \(N\) degrees of freedom is \(N^{1/2}\) (see Glansdorff and Prigogine, 1971; and McQuarrie, 1967). The relative importance of the fluctuations therefore tends to become zero for \(N \to \infty\).
Near an instability, however, the importance of the fluctuations can become crucial. In order to take these fluctuations into account, the simplest procedure is to suppose that the chemical reactions lead to a Markov process of “birth and death” in the space defined by the particle numbers \(X_i\) of the different chemical species \(i\). This makes it possible to establish a “master” equation which gives the evolution of the probability \(P\) of finding given values of the particle numbers at time \(t\). For example, let us consider the two monomolecular reactions $$ A \overset{k_1}{\to} X \overset{k_2}{\to} F $$ Let \(P(A, X, F, t)\) be the probability of the values \(A\), \(X\), \(F\) of the particle numbers at time \(t\). In order to establish a master equation, one must consider all the transitions that could lead, during the interval \(t \to t + \Delta t\), from one state \(A\), \(X\), \(F\) to another, and vice versa. Take the first reaction, $$ A \overset{k_1}{\to} X $$ In order to have the state \((A, X, F)\) at time \(t + \Delta t\), it is necessary that the state at time \(t\) have been \((A + 1, X - 1, F)\); in consequence $$ \small \begin{align*} P(A, X, F, t+\Delta t) &= \text{(prob. of reaction)} \times P(A+1, X-1, F, t)\\ &+ \text{(prob. of nonreaction)} \times P(A,X,F,t) \tag{7.1} \end{align*} $$ The probability of a reaction toward the state \((A,X,F)\) is given by $$ \text{probability of reaction} = k_1 A \Delta t \tag{7.2} $$ from which follows $$ \small \begin{align*} P(A,X,F,t + \Delta t) &= k_1 (A+1) \Delta t\, P(A+1, X-1, F, t) \\ &+ (1 - k_1 A \Delta t) P (A,X,F,t) \tag{7.3} \end{align*} $$ The equation of evolution of \(P(A,X,F,t)\) results by adding the contributions of both reactions and taking the limit \(\Delta t \to 0\): $$ \displaylines{ \frac{d P}{d t} = k_1 (A+1) P (A+1, X-1, F, t) - k_1 A P(A,X,F,t) \\ + k_2 (X+1) P (A,X+1,F-1,t) - k_2 X P (A,X,F,t) } \tag {7.4} $$ or in a more general form, $$ \frac{dP \left( \{ X_i \} , t \right)}{dt} = \sum_{ \{ X _i ^\prime \} } W \left( \{ X _i ^\prime \} , \{ X _i \} \right) P \left( \{ X _i ^\prime \} , t \right) \tag{7.5} $$ \(P\) is the probability function and \(W\) the probability of transition, per unit time, between states \(\{ X _i ^\prime \}\) and \(\{ X_i \}\).
Solving equation (7.5) for small fluctuations around the equilibrium state shows that such fluctuations obey a Poisson distribution. This implies that the mean square deviation \(\langle \delta X ^2 \rangle\) is equal to the mean value: $$ \langle \delta X ^2 \rangle = \langle X \rangle \tag{7.6} $$
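The Poissonian result (7.6) is easy to check by direct stochastic simulation of the scheme \(A \to X \to F\) with \(A\) held fixed by the environment. The sketch below uses the standard Gillespie algorithm (a method assumed here, not described in the text); parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A is held fixed, so X is born at rate k1*A and removed at rate k2*X.
k1, k2, A = 1.0, 1.0, 50
X, t, T = 50, 0.0, 1000.0

w_sum = m1 = m2 = 0.0          # time-weighted moments of X
while t < T:
    birth, death = k1 * A, k2 * X
    total = birth + death
    dt = rng.exponential(1.0 / total)   # waiting time to next event
    w_sum += dt
    m1 += X * dt
    m2 += X * X * dt
    X += 1 if rng.random() < birth / total else -1
    t += dt

mean = m1 / w_sum
var = m2 / w_sum - mean * mean
print(f"<X> = {mean:.2f}, <(dX)^2> = {var:.2f}, ratio = {var/mean:.3f}")
```

The ratio \(\langle \delta X^2 \rangle / \langle X \rangle\) comes out close to 1, as required by (7.6) for this linear scheme.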
However, equation (7.5) also allows us to study the behavior of fluctuations in systems maintained outside equilibrium, by imposing fixed values on the concentrations of the initial and final products. It is then necessary that there be a clear separation of the time scales of the “fluctuating system” and the “outside world” (Nicolis and Babloyantz, 1969). We note, however, that this approach considers the fluctuations to be spread homogeneously throughout the whole reacting volume. The results of this work have been surprising. We have shown that it is only in the case of linear systems that the small fluctuations can be described by Poisson’s formula (see Nicolis and Prigogine, 1971; Nicolis et al., 1974; and Prigogine et al., 1975). For nonlinear systems far from equilibrium, the distribution law of the fluctuations of the reacting substances is not that of Poisson. How can this be explained? Recent studies (Mazo, 1975) have shown that there is an unexpected aspect that has to be considered. The fluctuations are local events, and one must consider a supplementary parameter scaling the extension of the fluctuations. This will be a new characteristic length determined by the intrinsic dynamics of the system and independent of the dimensions of the reacting volume. Thus, there is an essential difference in the behavior of the fluctuations depending on their spatial extension. Only fluctuations of sufficiently small dimensions obey Poisson statistics. This is a very important result because it implies that, conversely, only fluctuations of a sufficient extension can attain enough importance to compromise the stability of the macroscopic state considered.
Thus, our recently developed theory leads quite naturally to the notion of a critical fluctuation as a prerequisite for the appearance of an instability.
These conclusions can be illustrated with the help of a simple model (see Figure 11). Consider a volume \(V\) inside which chemical reactions are taking place. Let \(\Delta V\) be a small subvolume of this system.
Besides the chemical reactions, this small subvolume is linked to the big volume \(V\), the environment, by exchanges of matter. Let \(\mathscr{D}\) be the corresponding coefficient of transport, and \(X_\text{in}\) and \(X_\text{ex}\) the characteristic composition in \(\Delta V\) and in the region \(\Delta \epsilon\) at the interface of \(V\) and \(\Delta V\). A master equation for the total system may be written $$ \begin{align*} & \frac{d}{dt} P (X_\text{in}, X_\text{ex}, t) \\ &= R(X_\text{in}) + R(X_\text{ex}) \\ &+ \mathscr{D} [ (X_\text{ex} + 1) P (X_\text{in} - 1, X_\text{ex} + 1, t) - X_\text{ex} P (X_\text{in}, X_\text{ex}, t) ] \\ &+ \mathscr{D} [ (X_\text{in} + 1) P (X_\text{in} + 1, X_\text{ex} - 1, t) - X_\text{in} P (X_\text{in}, X_\text{ex}, t) ] \end{align*} \tag{7.7} $$ where \(R(X_\text{in})\) and \(R(X_\text{ex})\) represent the contributions of the chemical reactions, while the other terms are linked to the transfer of matter between \(\Delta V\) and its environment.
By considering the following hypotheses:
 the chemical reaction responsible for the instability occurs in the same way in \(V\) as in \(\Delta V\);
 the interaction with the surrounding system occurs according to the average state of the outside medium;
 the transfers between \(\Delta V\) and the exterior depend on the instantaneous state of \(\Delta V\);
we are led to an equation concerning only the probabilities inside the subvolume, \(P(X,t)\), \((X \equiv X_\text{in})\): $$ \begin{align*} \frac{dP(X,t)}{dt} = R(X) &+ \mathscr{D} \langle X \rangle [P(X-1,t) - P (X,t)] \\ &+ \mathscr{D} [(X+1) P(X+1,t) - XP(X,t)] \end{align*} \tag{7.8} $$ with $$ \langle X \rangle = \sum_{X=0} ^\infty X P (X,t) \tag{7.9} $$ This nonlinear master equation expresses the competition between the chemical reactions, which tend to augment fluctuations, and the transfers of matter, which tend to damp them by homogenization. It is only when these two types of terms are of the same order of magnitude that an instability can manifest itself.
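The competition expressed by (7.8) can be made quantitative for a toy chemistry. The sketch below uses an illustrative nonlinear scheme (constant input plus pair annihilation, not the trimolecular model): it builds the truncated master equation, solves for the stationary distribution with the self-consistent \(\langle X \rangle\), and compares the Fano factor \(\langle \delta X^2 \rangle / \langle X \rangle\) without and with strong exchange.

```python
import numpy as np

# Illustrative chemistry inside dV: input at rate lam (X -> X+1) and
# pair annihilation at rate (mu/2) X(X-1) (X -> X-2). Exchange with the
# environment enters as in (7.8): gain at rate script_D * <X>, loss at
# rate script_D * X. All numbers are assumptions.
lam, mu = 20.0, 0.02
N = 200                                   # truncation of the state space

def stationary_fano(script_D, iters=60):
    m = (lam / mu) ** 0.5                 # initial guess for <X>
    X = np.arange(N + 1)
    for _ in range(iters):
        Q = np.zeros((N + 1, N + 1))      # generator, columns = from-state
        birth = lam + script_D * m        # X -> X+1
        death = script_D * X              # X -> X-1
        annih = 0.5 * mu * X * (X - 1)    # X -> X-2
        for i in range(N + 1):
            if i < N:
                Q[i + 1, i] += birth
            if i >= 1:
                Q[i - 1, i] += death[i]
            if i >= 2:
                Q[i - 2, i] += annih[i]
            Q[i, i] -= (birth if i < N else 0.0) + death[i] + annih[i]
        A = Q.copy()
        A[-1, :] = 1.0                    # replace one equation by normalization
        b = np.zeros(N + 1)
        b[-1] = 1.0
        p = np.linalg.solve(A, b)         # stationary distribution
        m = float(X @ p)                  # self-consistent <X>, cf. (7.9)
    var = float(((X - m) ** 2) @ p)
    return var / m

print(f"Fano factor, no exchange:     {stationary_fano(0.0):.3f}")
print(f"Fano factor, strong exchange: {stationary_fano(50.0):.3f}")
```

Without exchange the nonlinear chemistry gives a markedly non-Poissonian (here sub-Poissonian) stationary distribution; a large \(\mathscr{D}\) drives the Fano factor back toward the Poissonian value 1, which is the damping-by-homogenization described in the text.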
Let us now examine more closely what the implications of the new approach are for a system which crosses the threshold of instability. At the critical point separating the stable region from a regime which amplifies the fluctuations, the transfer parameter \(\mathscr{D}\) will be linked to the chemical parameters on the one hand and to the dimension of the subvolume \(\Delta V\) on the other. A qualitative estimate of this gives3 $$ \mathscr{D} = \frac{D}{\ell^2} \tag{7.10} $$ where \(D\) is Fick’s diffusion coefficient and \(\ell\) a characteristic length of \(\Delta V\).
By introducing (7.10) into the condition for instability, a relation can be established between the size of the subsystem considered and the rate of amplification of the fluctuations.
This calculation has been performed for a particular chemical example: the establishment of a limit cycle in the trimolecular mechanism (Nicolis and Prigogine, 1971). The results are shown in Figure 12, which clearly shows the competitive action of the chemical parameters and the dimension of the perturbation on the stability of the system. Three domains, having different stability properties, appear:
 A stable region where all fluctuations will be damped, whatever their dimension.
 A region where \(k \gt k_c\) but where the length is shorter than the critical length; the fluctuations here are also damped.
 A region where both \(k \gt k_c\) and \(\ell \gt \ell_c\); the fluctuations are amplified and invade the whole system.
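Relation (7.10) converts a critical value of the transport coefficient into a critical length, and the three domains of Figure 12 follow directly. A minimal sketch, with illustrative numbers (both \(D\) and the critical exchange rate are assumptions):

```python
import math

# Illustrative values (not from the text): Fick diffusion coefficient
# D_f and a hypothetical critical exchange rate Dc below which chemical
# amplification wins, given k > k_c.
D_f = 1e-5     # cm^2/s
Dc = 0.1       # s^-1

# From (7.10), script-D = D_f / l^2, so the critical length is:
l_c = math.sqrt(D_f / Dc)

def fate(k, k_c, l):
    """Classify a fluctuation of extension l under rate constant k."""
    if k <= k_c:
        return "damped (chemically stable)"
    return "amplified" if l > l_c else "damped (fluctuation too small)"

print(f"critical length l_c = {l_c*1e4:.0f} um")
print(fate(2.0, 1.0, 2 * l_c))
```

Only a fluctuation that is both chemically supercritical (\(k > k_c\)) and spatially extended (\(\ell > \ell_c\)) invades the system, which is the nucleation picture of the following paragraph.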
The result of this work is to show that dissipative structures form by a nucleation process. Fluctuations on a sufficiently small scale are always damped by the medium. Conversely, once a fluctuation attains a size beyond a critical dimension, it triggers an instability.
Although this theory is very recent, there are several arguments which point to its validity. First, certain computer calculations have shown the role played by the dimension of the fluctuations (Mazo, 1975; Portnow, 1974). Furthermore, experiments confirm, at least qualitatively, the existence of a critical size of the fluctuations in the triggering of chemical or hydrodynamic instabilities (Nitzan et al., 1974; Pacault et al., 1975).
From the point of view of basic principles, the properties of this new class of stochastic equations are very remarkable, because they show the restraining role of the environment.4 Clearly, it is an essential factor in the possible sociological applications and it is precisely to these questions that we shall turn in the final section of this chapter.
8
Dissipative Structures and Social System
The theory of dissipative structures lends itself to a description of the self-organization of matter in conditions far from thermodynamic equilibrium. These structures have a coherent character linking their mechanism (“chemical reactions”) to their spatiotemporal organization. Furthermore, we have seen that such systems present both a deterministic character, described by kinetic equations of type (3.2) or (6.2)–(6.4), and a stochastic character (fluctuations), described by nonlinear stochastic equations such as (7.5). This theory also links the three levels of description
The theory of dissipative structures has been applied with great success to biological problems (see Sections 5 and 6). It must be recalled that even in the simplest cells the normal metabolic processes imply several thousand complex chemical reactions. Therefore, out of absolute necessity all these processes must be coordinated. These coordinating mechanisms constitute an extremely sophisticated functional order. Thus, biological order is both functional and spatiotemporal order.
It is therefore tempting to apply these concepts to problems of social structure. There, as in the case of biological structures, the functional aspect is associated with specific structures. From one particular point of view we are in an even more favorable situation than in biology, because life is a very ancient phenomenon and its origin must imply a very considerable sequence of successive instabilities. In contrast, we are at least partially informed on the development of societies, thanks to archeological and ethnological evidence (see, e.g., Leroi-Gourhan, as quoted by Janne, 1963; and Rachet, 1969). In particular, we have considerable information concerning the history of technology and of the tools of primitive societies and about the way in which material progress has been reflected in the organization of society.
Mathematical models have often been introduced in the study of social and ecological phenomena. We may cite here, for example, the model of ecological competition of Volterra and Lotka (see Goel et al., 1971) or the kinetic theory of traffic flow (Prigogine and Herman, 1971). It is also well known that, in spite of its limitations, the theory of Markov chains has found numerous applications in the social domain (Weidlich, 1974).
The important point is that we can now go further. The perfecting of the mathematical tools [theory of bifurcations, structural instabilities (see Section 4), and nonlinear stochastic equations] permits us to discuss in a more precise way some basic concepts introduced by sociologists.
As an example, we shall consider the notion of the “quantum of action” to which Henri Janne (1963), in his monograph on sociology, attaches great importance. He writes (p. 42):
It is convenient here to introduce the notion of a “quantum of action.” The quantum of action of a factor must be sufficient for the factor to be taken into consideration. Below a certain threshold, the factor has no effect (insufficient quantum of action). It can be “dominant” when its action renders all other factors negligible … . The dominant quantum of action joins the game of classical causality.
The concept introduced by Janne is analogous to the critical fluctuation which we have discussed in Section 7. With fluctuations below the critical threshold, the system returns to its initial state. Beyond the threshold, it evolves to a new structure. The appearance of a critical fluctuation leads in this sense to a deterministic evolution (whence Henri Janne’s reference to classical causality).
Althusser (Althusser and Balibar, 1973) has expressed very well the necessity of clarifying the “epistemological” significance of the new concepts introduced by the founders of modern sociology. Speaking of theoretical problems introduced by Marxist ideas, he says (p. 61),
By what concept can one think of the new type of determination, only recently identified, where for example the phenomena occurring in a region are determined by the structure of that region? More generally, by means of what concept, or group of concepts, can one conceive the determination of the elements of a structure and the structural relations between them, as well as all the effects of these relations, in terms of the efficiency of this structure? And, a fortiori, by what concept, or concepts, can one imagine the determination of a subordinate structure by a dominant one? In other words, how can one define the concept of a structural causality?
The origin of modern sociology has often been attributed to those thinkers of the nineteenth century who defined and forged the concepts on which theoretical sociology is based. In contrast, other sociologists attribute it to the founders of social statistics, such as Quetelet (see Aron, 1967).
The development of appropriate statistical mathematics capable of handling the complexity of social problems could certainly serve as a bridge between these two complementary ways of envisaging sociology. It is in this perspective that I should like to present some remarks:
 (a) Social phenomena are described by nonlinear equations. This results directly from their relational or social character. This relational character appears under different titles in the works of the founders of sociological thought. Tarde (1890) speaks of imitation, Durkheim (1973) of solidarity. The mathematical transcription of this element leads precisely to this nonlinear aspect. We have already cited the nonlinear Volterra-Lotka equations currently used in ecology. Similarly, in the dynamics of consumer choice there appear different contributions, some “linear,” corresponding to “individual” decisions, and others “nonlinear,” corresponding to decisions taken under the influence of the environment (friends, mass media, etc.). Finally, a well-understood example is that of vehicular traffic flow, where the interaction between drivers leads precisely to the nonlinearity in the integrodifferential equations for the distribution function of the velocities of the drivers.
 (b) The coherent behavior of a society has been underlined many times; in fact, the relation between structure and function is so apparent that it seems unnecessary to stress it further.
 (c) Several research workers, notably Gregory Bateson (cited by Janne, 1963, p. 117), have felt the importance of emphasizing change in the description of social systems as coherent systems characterized by the structure–function relation. Bateson introduces the notion of dynamic equilibrium, with the help of which he shows that “any social system, in spite of its static appearances, contains at least small amplitude changes, appearing continuously and compensating one another.” This observation is to be compared to the existence of dysfunctions underlined by Janne (1963, p. 111). In our description, these phenomena correspond to the existence of fluctuations inherent to the statistical description. The deterministic description refers only to averages.
 (d) The social fact finds its expression in constraints imposed upon an individual. This constraint appears in our theory as the nonlinear terms in the stochastic equation (7.7). It tends to stabilize the system with respect to certain fluctuations, such as the dysfunctions mentioned in (c) above.
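The Volterra–Lotka equations cited above as the prototype of relational nonlinearity can be sketched directly; the \(xy\) terms are exactly the "interaction" contributions that an individual-level linear description would miss. All parameters are illustrative.

```python
import numpy as np

# Classical predator-prey form (illustrative parameters):
#   dx/dt = a x - b x y,   dy/dt = -c y + d x y
# The bilinear x*y terms encode the relational character of the dynamics.
a, b, c, d = 1.0, 0.1, 1.5, 0.075

def step(x, y, dt):
    dx = (a * x - b * x * y) * dt
    dy = (-c * y + d * x * y) * dt
    return x + dx, y + dy

x, y, dt = 10.0, 5.0, 0.001
traj = []
for _ in range(20000):            # integrate 20 time units (explicit Euler)
    x, y = step(x, y, dt)
    traj.append(x)

traj = np.array(traj)
print(f"prey oscillates between {traj.min():.1f} and {traj.max():.1f}")
```

The sustained oscillation around the fixed point \((c/d,\, a/b)\) is a purely collective effect: it disappears if the nonlinear coupling terms are dropped.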
In summary, our description includes two complementary aspects. These aspects correspond essentially to the dialectic between mass and minority, to use the expression introduced by F. Perroux (1964). The first aspect is that of an average behavior (the “average” man of Quetelet); the second is the existence of fluctuations which, when they exceed a critical level, influence that average because they drive the system to a new average state.
We believe that our model contains certain of the indispensable elements for the building of a theoretical sociology which cannot neglect either of these two aspects without seriously altering the significance of the social system.
In the following, we shall make some remarks which illustrate the importance of the preceding considerations. We do not seek here to formulate a precise mathematical model of a given social activity. Any such model would necessarily imply a detailed discussion of the parameters in play and would fall outside the range of this chapter. We include, however, in the Appendix a summary of some recent work by P. M. Allen.
A first point is the problem of the very existence of societies. Is there a limit to complexity? The question has been discussed many times in the literature. An excellent exposition is given in the monograph by May (1973). The more elements there are in interaction, the higher the degree of the secular equation determining the characteristic frequencies of the system (see Section 4). The greater will therefore also be the chances of this equation’s having at least one positive root and hence indicating instability.
Several authors have suggested that historical evolution selects certain particular types of systems that are stable. It has nevertheless proven difficult to give a quantitative form to such a suggestion. Our approach leads to a different answer: a sufficiently complex system will generally be in a metastable state. The value of the threshold of metastability depends on the size of the coefficient appearing in the stochastic equation (7.7). This coefficient, as we have seen, is a measure of the coupling of the fluctuating system with the outside world. This point of view seems to be in agreement with the one held by sociologists who conclude that a society has a limited power of integration. If the perturbation exceeds that power of integration, the social system is destroyed or gives way to a new organization.

The existence of the constraint is indispensable in order to distinguish between an “average state” (including periods of development) and fluctuations leading to a new state. One may think that in a complex society the possibilities for instability (resulting, e.g., from new inventions) always exist. However, only certain inventions will go beyond the individual domain to a domain of integration with society. One has only to think of the wheel, used in the pre-Columbian epoch as a toy, but not as a means of transport.
The formalism that we have obtained leads quite naturally to a preliminary classification of societies according to the following two parameters:
 “complexity,” measured by the number of interacting functional elements;
 social pressure, measured by the parameter \(\mathscr{D}\).
It is of particular interest to consider the two limiting cases (b) and (c) in Table 1. Case (b) corresponds to a simple, “conformist” society. One thinks here of the archaic social systems which Lévi-Strauss has compared with “clocks” (Charbonnier, 1969). The opposite case is that of a complex system with a feeble coherence, corresponding to historical societies which Lévi-Strauss compares to “steam engines.” One should notice that the nonhistoric nature, or “crystallinity,” of certain archaic societies corresponds, from this point of view, to an active repression of the fluctuations.

Table 1

  Complexity   Social Pressure   Stability
  -            -                 (a) ?
  -            +                 (b) Stable
  +            -                 (c) Unstable
  +            +                 (d) ?
All other things being equal, the repression is stronger, the smaller the fluctuating group in which it acts (see Section 7). This is probably related to the remark of Gurvitch (quoted by Janne, 1963, p. 344): the family (a relatively small unit) is a conformist element in a society containing small fluctuations. In contrast, society as a whole constitutes the greatest dimension of fluctuation, and in consequence is subject to the least constraint (except at certain times, such as during a war). Accordingly, ethnologists have been able to identify a great number of distinct societies.

Does this mean that the evolution of societies is not subject to any general rule? In 1922, Lotka formulated his law of maximum energy flow (Lotka, 1956). In thermodynamic terms, this corresponds to a law of increase of entropy production per individual. This law seems to agree with the laws of technological evolution. As Leroi-Gourhan (quoted by Janne, 1963, p. 288) has written: “In the technical domain, the only features which will be transmitted are those which represent an improvement in the procedures. One may adopt a language which is less supple, a religion which is less developed, but one will never exchange a plough for a hoe.” The plough necessarily leads to an augmentation of the exploitation of natural resources and in consequence to a greater energy consumption per individual. It is interesting to compare this tendency with the increase of entropy production which appears in the early stages of embryonic life (Zotin, 1972). The increase of entropy production in turn renders possible the appearance of new instabilities; it is related to the effect of the structural instabilities discussed in Section 4, where we have already pointed to the evolutionary feedback. There is a close analogy between the “invention” of new techniques and the structural instability leading to new chemical mechanisms. Of course, there is no question of classifying societies according to a single criterion such as their energy or entropy production. This is merely a characteristic of evolution, but a very important one because of its universality. In contrast, our approach clearly shows the rather oversimplified nature of theories of “progress” (linear progress, cycles, etc.).
The ideas of “infrastructure” and “superstructure” have given rise to interminable discussions (see, e.g., Aron, 1967). It seems worthwhile, therefore, to indicate that within the framework of our formalism, these ideas take on a very direct meaning. A structural instability may result from the occurrence of a new function arising from a fluctuation. With such a fluctuation, one may associate a modification of the infrastructure. The relation between the spacetime functionstructure will be modified if the fluctuation leads the system to a new dissipative structure. From this point of view, the spacetime structure appears as the “superstructure.”
Of course, the very possibility of fluctuations depends on the restraining character of a society, and therefore on the “superstructure.” The notions of average state and of fluctuations can only be defined with respect to one another.
As we have already indicated, it is necessary to separate the development periods from the periods of instability which lead to new structures. The problem of forecasting is entirely different in the two cases. In the former, it suffices essentially to study deterministic laws, whereas in the latter this is certainly not so. It has often been said that the life of the average man in Europe in the eighteenth century was very similar to that of the average man in the developing countries today. In the eighteenth century, however, fluctuations, triggered by the development of the sciences, were already growing. It is in the nineteenth century that we see these fluctuations attaining the “average” state and constituting a force which modified the destiny of European societies in their entirety. It is not surprising, therefore, that it was at this moment that the problem of time, of history, became the central theme of epistemology. Auguste Comte summed this up by predicting, “Our present century will be principally characterized by the irrevocable preponderance of history in philosophy, in politics, and even in poetry.”
9
Conclusions
Bergson (1963, p. 503) made the following statement: “The further we penetrate the analysis of the nature of time, the more we understand that duration signifies invention, creation of forms, and the continual elaboration of what is absolutely new.”
We recognize that we are beginning to clarify these notions of “invention” and “elaboration of what is absolutely new” by the mechanism of successive instabilities caused by critical fluctuations (Prigogine, 1973). The discovery of such mechanisms, which play such an essential role in a vast domain stretching from physics to sociology, is obviously a preliminary step toward some harmonization of the points of view developed in these different sciences.
Acknowledgments
This chapter was written with the active participation of our groups at Brussels and Austin, and especially of Professor Nicolis and Drs. Lefever, Allen, and Deneubourg. It was finalized during a stay at the General Motors Research Center, and I wish to thank Dr. R. Herman and A. Butterworth for stimulating discussions, and Dr. P. Chenea for his interest.
Appendix
Population Dynamics and Evolution
Peter Allen
The principle of order through fluctuation applies to systems through which energy and matter flow and whose macroscopic variables obey nonlinear equations. An ecosystem described by the equations of population dynamics corresponds to just such a situation, and it may now be proposed that biological evolution through mutation and selective advantage is yet another example of the principle of order through fluctuation (Allen, 1975, 1976). The equations of population dynamics describe the change of average genotype densities as a result of births and deaths of individuals, while it is supposed that the appearances of new genotypes as a result of spontaneous mutation play the role of fluctuations and are rare events compared to normal births.
In real systems, population densities tend asymptotically toward stable solutions, because the density fluctuations inherent in the statistical description will ensure that any solution which is not stable, that is, does not possess a “restoring force,” will not be maintained. We assume therefore that just before a mutation occurs, the population densities have either attained a stable steady state or are described by a stable limit cycle. The mutant population, initially very small, will only result in an evolutionary step if its presence compromises the stability of the previous state.
We assume a general equation of change for the population densities; it takes the form $$ \frac{dX_i}{dt} = F_i (X_1, \ldots , X_n, X_{n+1}, \ldots , X_{n+\Delta} ), \quad i = 1, 2, \ldots , n+\Delta \tag{A.1} $$ in which the genotypes existing before the occurrence of the mutation are \(X_1 , X_2, \ldots , X_n \) and the mutants are \(X_{n+1}, X_{n+2}, \ldots , X_{n+\Delta}\). (A mutant allele may result in more than one new genotype.) The stability of equation (A.1) is found by solving $$ \det \lvert A_{ij} - \delta _{ij} \lambda \rvert = 0 \tag{A.2} $$ where \(A_{ij} = \partial F_i / \partial X_j\) at the state \(X_1 ^0, X_2 ^0, \ldots, X_n ^0\) and \(X_{n+1} = X_{n+2} = \ldots = X_{n+\Delta} = 0\). However, the terms \(\partial F_i / \partial X_j\) where \(i = n + 1, \ldots , n + \Delta\) and \(j = 1, 2, \ldots, n\) are zero because we have excluded, by the way we constructed our theory, the steady production of the mutants by the preexisting types. Thus, there can be no term in the mutant density equations which depends only on the genotypes \(X_1, X_2, \ldots , X_n\). Inserting this result into (A.2), we have $$ \det \lvert A_{ij} - \delta _{ij} \lambda \rvert = \det \lvert A_{k\ell} - \delta _{k\ell} \lambda \rvert \times \det \lvert A_{pq} - \delta _{pq} \lambda \rvert = 0 \tag{A.3} $$ where \(i, j = 1, 2, \ldots , n + \Delta\); \(k, \ell = 1, 2, \ldots , n\); \(p, q = n+1, n+2, \ldots , n+ \Delta\).
The first determinant on the right-hand side, however, evaluated at the state \(X_1^0, X_2^0, \ldots , X_n^0\) and \(X_{n+1} = X_{n+2} = \ldots = X_{n+\Delta} = 0\), concerns only the preexisting genotypes, which by assumption form a stable state; its roots \(\lambda\) all have negative real parts. The stability of the system after the mutation is therefore decided by the second determinant, which involves only the mutant populations: an evolutionary step occurs if this determinant yields a root with positive real part.
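The block factorization (A.3) can be checked numerically. In the following sketch the matrices are purely illustrative (none of the values come from the text): because the mutant rows contain no terms in the resident columns, the Jacobian is block-triangular and its eigenvalues are exactly the union of those of the resident block and those of the mutant block.

```python
import numpy as np

# Hypothetical Jacobian at the state (X1^0, ..., Xn^0, mutants = 0).
# Resident block A (n x n), coupling block B, mutant block C (Δ x Δ).
# The mutant rows carry zeros in the resident columns because residents
# cannot steadily produce mutants (the construction leading to A.3).
A = np.array([[-1.0,  0.5],
              [-0.3, -2.0]])       # preexisting genotypes (assumed stable)
B = np.array([[0.2], [0.1]])       # effect of the mutant on residents
C = np.array([[0.4]])              # mutant self-dynamics

J = np.block([[A, B],
              [np.zeros((1, 2)), C]])

# det|J - λI| = det|A - λI| * det|C - λI|, so the spectra coincide.
full = np.sort_complex(np.linalg.eigvals(J))
blocks = np.sort_complex(np.concatenate([np.linalg.eigvals(A),
                                         np.linalg.eigvals(C)]))
print(np.allclose(full, blocks))   # prints True
```

Here the mutant block contributes the positive eigenvalue 0.4, so this particular (assumed) mutant would be amplified even though the resident block is stable.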
Let us briefly illustrate the application of this result to a predator-prey ecosystem. We shall not treat explicitly the case of sexual reproduction, which has been discussed elsewhere and serves only to complicate the equations without changing the result in a qualitative way.
Let us consider a prey \(X_1\), with birth rate \(k_1\), which in the absence of the predator would grow logistically to a density \(N\), and a predator \(Y\) with a death rate \(d\). The interaction between the two is characterized by \(s_1 X_1 Y\): $$ \frac{dX_1}{dt} = k_1 X_1 (N - X_1) - s_1 X_1 Y, \quad \frac{dY}{dt} = -dY + s_1 X_1 Y $$ This goes to the stable steady state \(X_1^0 = d/s_1\); \(Y^0 = (k_1 / s_1) (N - d/s_1)\). Let us suppose that a small quantity of a new prey type \(X_2\) appears in our system. Our criterion gives us the condition that \(X_2\) must fulfill if it is not to be rejected by the system. We have $$ \frac{dX_2}{dt} = k_2 X_2 (N - X_1^0 - X_2) - s_2 X_2 Y $$ and hence $$ k_2 (N - X_1^0 - X_2) - k_2 X_2 - s_2 Y^0 - \lambda = 0 $$ Substituting into this \(X_1^0 = d/s_1\); \(Y^0 = (k_1/s_1) (N - d/s_1)\); \(X_2 = 0\); we find that \(\lambda\) will be positive if $$ \left( N - \frac{d}{s_1} \right) \left( k_2 - \frac{s_2 k_1}{s_1} \right) > 0 $$ Since \(N - d/s_1 > 0\), we must have $$ \frac{k_2}{s_2} > \frac{k_1}{s_1} \tag{A.6} $$ Thus, only mutants fulfilling this condition can be amplified by the system, and prey evolution alone will lead to a steady increase in \(k/s\).
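Criterion (A.6) can be sketched numerically. The parameter values below are assumed for illustration (none are given in the text); the mutant's linear growth rate at the resident steady state is \(\lambda = k_2(N - X_1^0) - s_2 Y^0\), which is positive exactly when \(k_2/s_2 > k_1/s_1\).

```python
# Sketch of the mutant-prey invasion criterion (A.6).
# k1, s1, d, N are assumed illustrative values, not from the text.
k1, s1, d, N = 1.0, 0.5, 0.4, 2.0

X1 = d / s1                        # resident prey steady state X1^0 = 0.8
Y = (k1 / s1) * (N - d / s1)       # predator steady state Y^0 = 2.4

def invasion_rate(k2, s2):
    # Growth rate λ of a rare mutant prey X2 at (X1^0, Y^0, X2 = 0):
    # λ = k2 (N - X1^0) - s2 Y^0
    return k2 * (N - X1) - s2 * Y

# A mutant with a larger k/s ratio than the resident invades (λ > 0) ...
print(invasion_rate(k2=1.5, s2=0.5))   # k2/s2 = 3.0 > k1/s1 = 2.0 → 0.6
# ... while one with a smaller ratio is rejected (λ < 0).
print(invasion_rate(k2=0.8, s2=0.6))   # k2/s2 ≈ 1.33 < 2.0 → -0.48
```

Varying \(k_2\) and \(s_2\) confirms that only the ratio \(k_2/s_2\) matters for the sign of \(\lambda\), in agreement with (A.6).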
A similar analysis for the predators shows that \(d/s\) will decrease with each evolutionary step. Taking the evolution of both predator and prey, we see that the coefficient \(s\) has no well-defined direction of drift but that \(k\) increases and \(d\) decreases. This tells us that the expression $$ \frac{Y}{X} = \frac{k}{d} \left( N - \frac{d}{s} \right) $$ which is the ratio of predator to prey at a given steady state, will increase as evolution proceeds.
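The drift of the predator/prey ratio can be illustrated with a hypothetical sequence of evolutionary steps (the \((k, d)\) values below are assumed): as \(k\) increases and \(d\) decreases, \((k/d)(N - d/s)\) grows monotonically.

```python
# The steady-state predator/prey ratio Y/X = (k/d)(N - d/s), evaluated
# along an assumed sequence of evolutionary steps in which k increases
# and d decreases (s is held fixed for simplicity; values illustrative).
N, s = 2.0, 0.5
steps = [(1.0, 0.40), (1.2, 0.35), (1.5, 0.30)]   # assumed (k, d) pairs

ratios = [(k / d) * (N - d / s) for k, d in steps]
print(ratios)                                       # strictly increasing
print(all(a < b for a, b in zip(ratios, ratios[1:])))   # prints True
```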
Another very recent application of the criterion described above considers species with a choice of a number of different resources which biological evolution could lead them to exploit. The criterion leads to the prediction that a rich environment, where each resource is available in large quantities, will cause species to evolve into “specialists,” whereas a poor environment, where each type of resource is only present in small quantities, will lead to “generalists.” In order to test this prediction, the distribution of finches on the Galapagos Islands was studied, and the prediction is borne out by the number of finch species occupying identical vegetation zones on large and very small islands. Other applications are under active consideration.
The sources of “innovation” need not necessarily be genetic, but may also refer to changes in behavior in a species with imitative mechanisms. The adoption of new techniques by means of this type of evolution does not require the destruction of the less adapted types and thus represents a possibly faster channel of evolution. Associated with this mode is the tendency to evolve cooperative groups, characterized by the division of labor, hierarchical relationships and “castes,” as well as by mechanisms of population regulation, and even by altruism. For example, we find that division of labor and castes appear in insect societies as the result of the evolution of large colonies existing in a rich medium. The competitive unit, subject to selection, in this case is not the individual but the group. The possibility of its existence and of cooperative mechanisms coming into play depends to a large extent on the transport properties and communication channels of the medium. In Sections 5 and 6, the use of chemotaxis by amoebas and insect societies was discussed, but groups of higher animals may use a large variety of techniques ranging from visual and audible to chemical signals. The use of language in the human domain signifies a further decisive step in this direction.

The application of the principle of order through fluctuation to ecosystems of interacting populations leads to a criterion that must be fulfilled for evolution to occur, and hence to the possibility of predicting, in certain configurations, the long-term thrust of biological evolution.
Footnotes

Nicolis and Auchmuty (1974) have shown that chemical waves have to be considered as a superposition of stationary waves. There exists in general no well-defined velocity of propagation.

Propagating waves have been observed during aggregation by Gerisch and Hess (1974). Their mechanism has been discussed recently by Goldbeter (1975).

For the particular case of dilute gases Nicolis et al. (1974) have introduced \(\mathscr{D} = D / (\ell \ell _c)\) where \(\ell _c\) is the mean free path in the gas.

This approach also spotlights the weaknesses of the traditional methods (Poisson processes, Markov chains) when applied to nonlinear processes far from equilibrium (Malek-Mansour and Nicolis, 1975). These classical methods do not lend themselves to modeling of the characteristic phenomena occurring within a society (see Section 8).
References

Allen, P. M. (1975). “Darwinian Evolution and a Predator-Prey Ecology,” Bull. Math. Biol., 37, 389–405.
(1976). “Evolution, Population Dynamics and Stability,” Proc. Natl. Acad. Sci., 73, 665–668.

Althusser, L., and Balibar, E. (1973). “L’objet du Capital.” In Lire le Capital (F. Maspero, ed.), Vol. II, p. 61. Paris: Maspero.

Andronov, A., Vitt, A., and Khaikin, S. (1966). Theory of Oscillations (Engl. transl. by S. Lefschetz). Princeton, N.J.: Princeton Univ. Press.

Aron, R. (1967). Les étapes de la pensée sociologique. Paris: Gallimard.

Auchmuty, J. F. G., and Nicolis, G. (1975). “Bifurcation Analysis of Non-Linear Reaction-Diffusion Equations,” Bull. Math. Biol., 37, 323.
(1976). “Bifurcation Analysis of Non-Linear Reaction-Diffusion Equations, II: Chemical Oscillations,” Bull. Math. Biol., in the press.

Babloyantz, A. (1972). “Far from Equilibrium Synthesis of ‘Prebiotic Polymers’,” Biopolymers, 11, 2349–2359.
and Hiernaux, J. (1975). “Models for Cell Differentiation and Generation of Polarity in Diffusion-Governed Morphogenetic Fields,” Bull. Math. Biol., 37, 637–657.

Bergson, H. (1963). Evolution Créatrice, Eds. du Centennaire. Paris: Presses Universitaires de France.

Boiteux, A., and Hess, B. (1974). “Oscillations in Glycolysis, Cellular Respiration and Communication.” In Physical Chemistry of Oscillatory Phenomena (Faraday Symposium 9). London: Faraday Division of The Chemical Society.

Boltzmann, L. (1872). “Weitere Studien über das Wärmegleichgewicht unter Gasmolekülen,” Ber. Akad. Wiss. Wien, 66, 275.

Charbonnier, G. (1969). Conversations with Claude Lévi-Strauss. (T. and D. Weightman, transl.) London: Jonathan Cape.

Clausius, R. (1857). “Über die Art der Bewegung welche wir Wärme nennen,” Ann. Physik, 100, 353.

Deneubourg, J. L., and Nicolis, G. (1976). Proc. Natl. Acad. Sci., in the press.

Durkheim, E. (1973). De la Division du Travail Social. Paris: Presses Universitaires de France.

Eigen, M. (1971). “Self-Organization of Matter and the Evolution of Biological Macromolecules,” Naturwiss., 58, 465–522.

Erneux, T., and Herschkowitz-Kaufman, M. (1975). “Dissipative Structures in Two Dimensions,” Biophys. Chem., 3, 4, 345.

Faraday Symposium 9 (1974). Physical Chemistry of Oscillatory Phenomena. London: Faraday Division of The Chemical Society.

Gallais-Hamonno, F., and Chauvin, R. (1972). “Simulation sur ordinateur de la construction du dôme et du ramassage des brindilles chez une fourmi (Formica polyctena),” C. R. Acad. Sci. Paris, 275D, 1275.

Gerisch, G., and Hess, B. (1974). “Cyclic-AMP-Controlled Oscillations in Suspended Dictyostelium Cells: Their Relation to Morphogenetic Cell Interactions,” Proc. Natl. Acad. Sci., 71, 2118.

Glansdorff, P., and Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability, and Fluctuations. New York: Wiley-Interscience.

Goël, N. S., Maitra, S. C., and Montroll, E. W. (1971). “On the Volterra and Other Nonlinear Models of Interacting Populations,” Rev. Mod. Phys., 43, 231.

Goldbeter, A. (1973). “Organisation spatiotemporelle dans les systèmes enzymatiques ouverts,” Thèse de doctorat, Université Libre de Bruxelles.
(1975). “Mechanism for oscillatory synthesis of cyclic AMP in Dictyostelium Discoideum,” Nature, 253, 540.

Grassé, P. P. (1959). “La reconstruction du nid et les coordinations interindividuelles chez Bellicositermes natalensis et Cubitermes sp. La théorie de la stigmergie: essai d’interprétation du comportement des termites constructeurs,” Insectes Sociaux, 6, 41–83.

Gusdorf, G. (1971). Les Principes de la Pensée au Siècle des Lumières. Paris: Payot.

Hangartner, W. (1967). “Spezifität und Inaktivierung des Spurpheromons von Lasius fuliginosus Latr. und Orientierung der Arbeiterinnen im Duftfeld,” Z. Vergleichende Physiol., 57 (2), 103.

Hanson, M. P. (1974). “Spatial Structures in Dissipative Systems,” J. Chem. Phys., 60, 3210–3214.

Herschkowitz-Kaufman, M. (1973). “Quelques aspects du comportement des systèmes chimiques ouverts loin de l’équilibre thermodynamique,” Thèse de doctorat, Université Libre de Bruxelles.
and Nicolis, G. (1972). “Localized Spatial Structures and Non-Linear Chemical Waves in Dissipative Systems,” J. Chem. Phys., 56, 1890–1895.

Janne, H. (1963). Le Système Social: Essai de Théorie Générale. Brussels: Editions de l’Institut de Sociologie de l’Université Libre de Bruxelles.

Keller, E. F., and Segel, L. A. (1970). “Initiation of Slime Mold Aggregation Viewed as an Instability,” J. Theoret. Biol., 26, 399.

Lefever, R. (1968). “Stabilité des structures dissipatives,” Bull. Classe Sci. Acad. Roy. Belg., 54, 712.

Lotka, A. J. (1956). Elements of Mathematical Biology. New York: Dover.

Malek-Mansour, M., and Nicolis, G. (1975). “A Master Equation Description of Local Fluctuations,” J. Stat. Phys., 13, 197.

May, R. (1973). Model Ecosystems. Princeton, N.J.: Princeton Univ. Press.

Mazo, R. (1975). “On the Discrepancy between Results of Nicolis and Saito concerning Fluctuations in Chemical Reactions,” J. Chem. Phys., 62, 10, 4244.

McQuarrie, D. A. (1967). Supplementary Review Series in Applied Probability. London: Methuen.

Nazarea, A. D. (1974). “Critical Length of the Transport-Dominated Region for Oscillating Non-Linear Reactive Processes,” Proc. Natl. Acad. Sci., 71, 3751.

Nicolis, G.
and Babloyantz, A. (1969). “Fluctuations in Open Systems,” J. Chem. Phys. 51, 6, 2632.
and Prigogine, I. (1971). “Fluctuations in NonEquilibrium Systems,” Proc. Natl. Acad. Sci., 68, 2102–2107.
and Prigogine, I. (1974). “Thermodynamic Aspects of Spatio-Temporal Dissipative Structures.” In Physical Chemistry of Oscillatory Phenomena (Faraday Symposium 9). London: Faraday Division of The Chemical Society.
and Prigogine, I. (1976). Self-Organization in Nonequilibrium Systems: From Dissipative Structures to Order through Fluctuations. New York: Wiley-Interscience (in the press).
with MalekMansour, M., Kitahara, K., and Van Nypelseer, A. (1974). “The Onset of Instabilities in Nonequilibrium Systems.” Phys. Letters, 48A, 217.

Nitzan, A., Ortoleva, P., and Ross, J. (1974). “Stochastic Theory of Metastable Steady States and Nucleation.” In Physical Chemistry of Oscillatory Phenomena (Faraday Symposium 9). London: Faraday Division of The Chemical Society.

Onsager, L. (1931). “Reciprocal Relations in Irreversible Processes, I,” Phys. Rev., 37, 405.

Pacault, A., de Kepper, P., Hanusse, P., and Rossi, A. (1975). “Etude d’une réaction chimique périodique: Diagramme des états,” C. R. Acad. Sci. Paris, 281C, 215.

Perroux, F. (1964). Industrie et Création Collective, Tome I Paris: Presses Universitaires de France.

Portnow, J. (1974). Discussion Remarks. In Physical Chemistry of Oscillatory Phenomena (Faraday Symposium 9). London: Faraday Division of The Chemical Society.

Prigogine, I. (1967). Thermodynamics of Irreversible Processes. 3rd ed. New York: Wiley-Interscience.
(1972) “La Thermodynamique de la Vie,” La Recherche, 3, 547.
(1973). Physique et Métaphysique, lecture given at Académie Royale de Belgique on the occasion of its bicentenary.
and Herman, R. (1971). Kinetic Theory of Vehicular Traffic. New York: American Elsevier.
with Nicolis, G., and Babloyantz, A. (1972). “Thermodynamics of Evolution,” Phys. Today, 25, Nos. 11 and 12.
with Nicolis, G., Herman, R., and Lam, T. (1975). “Stability, Fluctuations and Complexity,” Cooperative Phenomena, 2, 103–109.

Rachet, G. (1969). Archéologie de la Grèce préhistorique, Troie, Mycène, Cnossos. Verviers: Marabout Université.

Rettenmeyer, C. W. (1963). “Behavioral Studies of Army Ants,” Kansas Univ. Bull., 44, 281.

Sussman, M. (1964). Growth and Development. Englewood Cliffs, N.J.: Prentice-Hall.

Tarde, G. (1890). Les Lois de l’Imitation. Paris: Alcan.

Weidlich, W. (1974). “Dynamics of Interactions of Several Groups.” In Cooperative Phenomena (H. Haken, ed.). Amsterdam: North-Holland.

Wilson, E. O. (1971). The Insect Societies. Cambridge, Mass.: Harvard Univ. Press.

Zotin, A. I. (1972). Thermodynamic Aspects of a Developmental Biology. Basel: S. Karger.