On Self-Organizing Systems and Their Environments

May 5, 1959

An adaptation of an address given at The Interdisciplinary Symposium on Self-Organizing Systems in Chicago, Illinois. Von Förster argues self-organizing systems don’t exist in isolation but require an environment to draw energy and order from. He defines measures of order and mechanisms whereby order arises, including via internal "demons" that decrease system entropy and external "demons" that increase maximum possible entropy. Overall, some noise helps systems remain adaptable.

Originally published in Self-Organizing Systems. M.C. Yovits and S. Cameron (eds.), Pergamon Press, London, pp. 31–50 (1960).


I am somewhat hesitant to make the introductory remarks of my presentation, because I am afraid I may hurt the feelings of those who so generously sponsored this conference on self-organizing systems. On the other hand, I believe, I may have a suggestion on how to answer Dr. Weyl’s question which he asked in his pertinent and thought-provoking introduction: “What makes a self-organizing system?” Thus, I hope you will forgive me if I open my paper by presenting the following thesis: “There are no such things as self-organizing systems!”

In the face of the title of this conference I have to give a rather strong proof of this thesis, a task which may not be at all too difficult, if there is not a secret purpose behind this meeting to promote a conspiracy to dispose of the Second Law of Thermodynamics. I shall now prove the non-existence of self-organizing systems by reductio ad absurdum of the assumption that there is such a thing as a self-organizing system.

Assume a finite universe, \(U_0\), as small or as large as you wish (see Figure 1a), which is enclosed in an adiabatic shell which separates this finite universe from any “meta-universe” in which it may be immersed. Assume, furthermore, that in this universe, \(U_0\), there is a closed surface which divides this universe into two mutually exclusive parts: the one part is completely occupied with a self-organizing system \(S_0\), while the other part we may call the environment \(E_0\) of this self-organizing system: \(S_0 \& E_0 = U_0\).

Figure 1

I may add that it is irrelevant whether we have our self-organizing system inside or outside the closed surface. However, in Fig. 1 the system is assumed to occupy the interior of the dividing surface. Undoubtedly, if this self-organizing system is permitted to do its job of organizing itself for a little while, its entropy must have decreased during this time: $$ \frac{δS_S}{δt} < 0, $$ otherwise we would not call it a self-organizing system, but just a mechanical \( \bigl( \frac{δS_S}{δt} = 0 \bigr) \) or a thermodynamical \( \bigl( \frac{δS_S}{δt} > 0 \bigr) \) system. In order to accomplish this, the entropy in the remaining part of our finite universe, i.e. the entropy in the environment, must have increased: $$ \frac{δS_E}{δt} > 0, $$ otherwise the Second Law of Thermodynamics is violated. If now some of the processes which contributed to the decrease of entropy of the system are irreversible, we will find the entropy of the universe \(U_0\) at a higher level than before our system started to organize itself; hence the state of the universe will be more disorganized than before \( \bigl( \frac{δS_U}{δt} > 0 \bigr) \). In other words, the activity of the system was a disorganizing one, and we may justly call such a system a “disorganizing system.”

However, it may be argued that it is unfair to the system to make it responsible for changes in the whole universe and that this apparent inconsistency came about by paying attention not only to the system proper but by also including in the consideration the environment of the system. By drawing too large an adiabatic envelope one may include processes not at all relevant to this argument. All right then, let us have the adiabatic envelope coincide with the closed surface that separates the system from its environment (Figure 1b). This step will not only invalidate the above argument, but will also enable me to show that if one assumes that this envelope contains the self-organizing system proper, this system turns out to be not only just a disorganizing system but even a self-disorganizing system.

It is clear from my previous example with the large envelope, that here too—if irreversible processes should occur—the entropy of the system now within the envelope must increase, hence, as time goes on, the system would disorganize itself, although in certain regions the entropy may indeed have decreased. One may now insist that we should have wrapped our envelope just around this region, since it appears to be the proper self-organizing part of our system. But again, I could employ that same argument as before, only to a smaller region, and so we could go on for ever, until our would-be self-organizing system has vanished into the eternal happy hunting grounds of the infinitesimal.

In spite of this suggested proof of the non-existence of self-organizing systems, I propose to continue the use of the term “self-organizing system,” whilst being aware of the fact that this term becomes meaningless unless the system is in close contact with an environment which possesses available energy and order, and with which our system is in a state of perpetual interaction, such that it somehow manages to “live” at the expense of this environment.

Although I shall not go into the details of the interesting discussion of the energy flow from the environment into the system and out again, I may briefly mention the two different schools of thought associated with this problem, namely, the one which considers energy flow and signal flow as a strongly linked, single-channel affair (i.e. the message carries also the food, or, if you wish, signal and food are synonymous) while the other viewpoint carefully separates the two, although there exists in this theory a significant interdependence between signal flow and energy availability.

I confess that I do belong to the latter school of thought, and I am particularly happy that later in this meeting Mr. Pask, in his paper The Natural History of Networks,[2] will make this point of view much clearer than I will ever be able to do.

What interests me particularly at this moment is not so much the energy from the environment which is digested by the system, but its utilization of environmental order. In other words, the question I would like to answer is: “How much order can our system assimilate from its environment, if any at all?”

Before tackling this question, I have to take two more hurdles, both of which represent problems concerned with the environment. Since you have undoubtedly observed that in my philosophy about self-organizing systems the environment of such systems is a conditio sine qua non, I am first of all obliged to show in which sense we may talk about the existence of such an environment. Second, I have to show that, if there exists such an environment, it must possess structure.

The first problem I am going to eliminate is perhaps one of the oldest philosophical problems with which mankind has had to live. This problem arises when we, men, consider ourselves to be self-organizing systems. We may insist that introspection does not permit us to decide whether the world as we see it is “real,” or just a phantasmagory, a dream, an illusion of our fancy. A decision in this dilemma is pertinent to my discussion insofar as—if the latter alternative should hold true—my original thesis asserting the nonsensicality of the conception of an isolated self-organizing system would pitiably collapse.

I shall now proceed to show the reality of the world as we see it, by reductio ad absurdum of the thesis: this world is only in our imagination and the only reality is the imagining “I”.

Thanks to the artistic assistance of Mr. Pask who so beautifully illustrated this and some of my later assertions (Figures 2, 5, and 6), it will be easy for me to develop my argument.

Figure 2

Assume for the moment that I am the successful businessman with the bowler hat in Figure 2, and I insist that I am the sole reality, while everything else appears only in my imagination. I cannot deny that in my imagination there will appear people, scientists, other successful businessmen, etc., as for instance in this conference. Since I find these apparitions in many respects similar to myself, I have to grant them the privilege that they themselves may insist that they are the sole reality and everything else is only a concoction of their imagination. On the other hand, they cannot deny that their fantasies will be populated by people—and one of them may be I, with bowler hat and everything!

With this we have closed the circle of our contradiction: If I assume that I am the sole reality, it turns out that I am the imagination of somebody else, who in turn assumes that he is the sole reality. Of course, this paradox is easily resolved by postulating the reality of the world in which we happily thrive.

Having re-established reality, it may be interesting to note that reality appears as a consistent reference frame for at least two observers. This becomes particularly transparent if one realizes that my “proof” was exactly modeled after the “Principle of Relativity,” which roughly states that, if a hypothesis which is applicable to a set of objects holds for one object and for another object, and if it holds for both objects simultaneously, then the hypothesis is acceptable for all objects of the set. Written in terms of symbolic logic, we have: $$ (Ex) [H(a) \& H(x) \rightarrow H(a + x)] \rightarrow (x)H(x) \tag{1} $$

Copernicus could have used this argument to his advantage by pointing out that if we insist on a geocentric system, \( [H(a)] \), the Venusians, e.g., could insist on a venucentric system \( [H(x)] \). But since we cannot both be center and epicycloid at the same time \( [H(a+x)] \), something must be wrong with a planetocentric system.

However, one should not overlook that the above expression, \( R(H) \), is not a tautology; hence it must be a meaningful statement.[1] What it does is to establish a way in which we may talk about the existence of an environment.

Before I can return to my original question of how much order a self-organizing system may assimilate from its environment, I have to show that there is some structure in our environment. This can be done very easily indeed, by pointing out that we are obviously not yet in the dreadful state of Boltzmann’s “Heat-Death.” Hence, at present entropy is still increasing, which means that there must be some order—at least now—otherwise we could not lose it.

Let me briefly summarize the points I have made until now:

  1. By a self-organizing system I mean that part of a system that eats energy and order from its environment.
  2. There is a reality of the environment in a sense suggested by the acceptance of the principle of relativity.
  3. The environment has structure.

Let us now turn to our self-organizing systems. What we expect is that these systems are increasing their internal order. In order to describe this process, first, it would be nice if we were able to define what we mean by “internal,” and second, if we had some measure of order.

The first problem arises whenever we have to deal with systems which do not come wrapped in a skin. In such cases, it is up to us to define the closed boundary of our system. But this may cause some trouble, because, if we specify a certain region in space as being intuitively the proper place to look for our self-organizing system, it may turn out that this region does not show self-organizing properties at all, and we are forced to make another choice, hoping for more luck this time. It is this kind of difficulty which is encountered, e.g., in connection with the problem of the “localization of functions” in the cerebral cortex.

Of course, we may turn the argument the other way around by saying that we define our boundary at any instant of time as being the envelope of that region in space which shows the desired increase in order. But here we run into some trouble again; because I do not know of any gadget which would indicate whether it is plugged into a self-disorganizing or self-organizing region, thus providing us with a sound operational definition.

Another difficulty may arise from the possibility that these self-organizing regions may not only constantly move in space and change in shape, they may appear and disappear spontaneously here and there, requiring the “ordometer” not only to follow these all-elusive systems, but also to sense the location of their formation.

With this little digression I only wanted to point out that we have to be very cautious in applying the word “inside” in this context, because, even if the position of the observer has been stated, he may have a tough time saying what he sees.

Let us now turn to the other point I mentioned before, namely, trying to find an adequate measure of order. It is my personal feeling that we wish to describe by this term two states of affairs. First, we may wish to account for apparent relationships between elements of a set which would impose some constraints as to the possible arrangements of the elements of this system. As the organization of the system grows, more and more of these relations should become apparent. Second, it seems to me that order has a relative connotation, rather than an absolute one, namely, with respect to the maximum disorder the elements of the set may be able to display. This suggests that it would be convenient if the measure of order were to assume values between zero and unity, accounting in the first case for maximum disorder and, in the second case, for maximum order. This eliminates the choice of “neg-entropy” for a measure of order, because neg-entropy always assumes finite values for systems being in complete disorder. However, what Shannon[3] has defined as “redundancy” seems to be tailor-made for describing order as I like to think of it. Using Shannon’s definition for redundancy we have: $$ R = 1 - \frac{H}{H_m} \tag{2} $$ whereby \(\frac{H}{H_m}\) is the ratio of the entropy \(H\) of an information source to the maximum value, \(H_m\), it could have while still restricted to the same symbols. Shannon calls this ratio the “relative entropy.” Clearly, this expression fulfills the requirements for a measure of order as I have listed them before. If the system is in its maximum disorder, \(H=H_m\), \(R\) becomes zero; while, if the elements of the system are arranged such that, given one element, the positions of all other elements are determined, the entropy—or the degree of uncertainty—vanishes, and \(R\) becomes unity, indicating perfect order.
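To make the behavior of this measure concrete, here is a minimal numerical sketch (my addition, not part of the original lecture) that evaluates \(R\) for a source restricted to a fixed set of symbols; the function names `entropy` and `redundancy` are illustrative choices, not anything defined in the text.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def redundancy(p):
    """Measure of order R = 1 - H/H_m, with H_m = log2(number of symbols)."""
    h = entropy(p)
    h_max = math.log2(len(p))
    return 1 - h / h_max

print(redundancy([0.25, 0.25, 0.25, 0.25]))  # uniform: maximum disorder, R = 0
print(redundancy([0.97, 0.01, 0.01, 0.01]))  # highly constrained source: R ≈ 0.88
```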

What we expect from a self-organizing system is, of course, that, given some initial value of order in the system, this order is going to increase as time goes on. With our expression (2) we can at once state the criterion for a system to be self-organizing, namely, that the rate of change of \(R\) should be positive: $$ \frac{δR}{δt} > 0 \tag{3} $$

Differentiating eq. (2) with respect to time and using the inequality (3) we have: $$ \frac{δR}{δt} = - \frac{H_m\frac{δH}{δt}-H\frac{δH_m}{δt}}{H_m^2} \tag{4} $$

Since \(H_m^2 > 0\), under all conditions (unless we start out with systems which can only be thought of as being always in perfect order \(H_m = 0\)), we find the condition for a system to be self-organizing expressed in terms of entropies: $$ H \frac{δH_m}{δt} > H_m \frac{δH}{δt} \tag{5} $$
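As a quick sanity check on this step (my own addition, not in the original), one can let a computer algebra system differentiate eq. (2); the symbols below simply stand for the \(H\) and \(H_m\) of the text.

```python
import sympy as sp

t = sp.symbols('t')
H = sp.Function('H')(t)      # instantaneous entropy of the system
Hm = sp.Function('H_m')(t)   # maximum possible entropy
R = 1 - H / Hm               # measure of order, eq. (2)

dRdt = sp.together(sp.diff(R, t))
print(dRdt)
# prints (H*H_m' - H_m*H') / H_m**2, i.e. eq. (4); requiring this to be
# positive (with H_m**2 > 0) gives exactly condition (5).
```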

In order to see the significance of this equation let me first briefly discuss two special cases, namely those, where in each case one of the two terms \(H\), \(H_m\) is assumed to remain constant. $$ \text{(a)}\qquad H_m = \text{const.} $$

Let us first consider the case where \(H_m\), the maximum possible entropy of the system, remains constant because it is the case which is usually visualized when we talk about self-organizing systems. If \(H_m\) is supposed to be constant the time derivative of \(H_m\) vanishes, and we have from eq. (5): $$ \text{for}\qquad \frac{δH_m}{δt} = 0 \cdots\cdots \frac{δH}{δt} < 0 \tag{6} $$

This equation simply says that, as time goes on, the entropy of the system should decrease. We knew this already—but now we may ask, how can this be accomplished? Since the entropy of the system depends upon the probability distribution of the elements over certain distinguishable states, it is clear that this probability distribution must change such that \(H\) is reduced. We may visualize how this can be accomplished by paying attention to the factors which determine the probability distribution. One of these factors could be that our elements possess certain properties which would make it more or less likely that an element is found to be in a certain state. Assume, for instance, that the state under consideration is “to be in a hole of a certain size.” The probability that elements with sizes larger than the hole will be found in this state is clearly zero. Hence, if the elements are slowly blown up like little balloons, the probability distribution will constantly change. Another factor influencing the probability distribution could be that our elements possess some other properties which determine the conditional probabilities that an element will be found in certain states, given the states of other elements in the system. Again, a change in these conditional probabilities will change the probability distribution, and hence the entropy of the system. Since all these changes take place internally, I am going to make an “internal demon” responsible for these changes. He is the one, e.g., who is busy blowing up little balloons and thus changing the probability distribution, or shifting conditional probabilities by establishing ties between elements such that \(H\) is going to decrease. Since we have some familiarity with the task of this demon, I shall leave him for a moment and turn now to another one, by discussing the second special case I mentioned before, namely, where \(H\) is supposed to remain constant. $$ \text{(b)}\qquad H = \text{const.} $$

If the entropy of the system is supposed to remain constant, its time derivative will vanish and we will have from eq. (5) $$ \text{for}\qquad \frac{δH}{δt} = 0 \cdots\cdots \frac{δH_m}{δt} > 0 \tag{7} $$

Thus, we obtain the peculiar result that, according to our previous definition of order, we may have a self-organizing system before us, if its possible maximum disorder is increasing. At first glance, it seems that to achieve this may turn out to be a rather trivial affair, because one can easily imagine simple processes where this condition is fulfilled. Take as a simple example a system composed of \(N\) elements which are capable of assuming certain observable states. In most cases a probability distribution for the number of elements in these states can be worked out such that \(H\) is maximized and an expression for \(H_m\) is obtained. Due to the fact that entropy (or, amount of information) is linked with the logarithm of the probabilities, it is not too difficult to show that expressions for \(H_m\) usually follow the general form: $$ H_m = C_1 + C_2 \log_2 N. $$

This suggests immediately a way of increasing \(H_m\), namely, by just increasing the number of elements constituting the system; in other words, a system that grows by incorporating new elements will increase its maximum entropy and, since this fulfills the criterion for a system to be self-organizing (eq. 7), we must, by all fairness, recognize this system as a member of the distinguished family of self-organizing systems.
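A small numerical sketch (my addition) of this growth, using the spatial-entropy result \(H_m = Z \ln(en)\) derived in the appendix for \(N\) elements spread over \(Z\) cells with density \(n = N/Z\): each doubling of \(N\) raises \(H_m\) by the same amount, so \(H_m\) indeed follows the form \(C_1 + C_2 \log_2 N\).

```python
import math

Z = 100                                   # number of cells, an arbitrary choice
for N in (10_000, 20_000, 40_000, 80_000):
    n = N / Z                             # mean density, assumed large (see appendix)
    H_m = Z * math.log(math.e * n)        # appendix result: H_m = Z ln(e n)
    print(N, round(H_m, 1))
# every doubling of N adds Z*ln(2) ≈ 69.3 to H_m, i.e. H_m = C1 + (Z ln 2) * log2(N)
```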

It may be argued that if just adding elements to a system makes this a self-organizing system, pouring sand into a bucket would make the bucket a self-organizing system. Somehow—to put it mildly—this does not seem to comply with our intuitive esteem for members of our distinguished family. And rightly so, because this argument ignores the premise under which this statement was derived, namely, that during the process of adding new elements to the system the entropy \(H\) of the system is to be kept constant. In the case of the bucket full of sand, this might be a ticklish task, which may conceivably be accomplished, e.g. by placing the newly admitted particles in precisely the same order, with respect to some distinguishable states, say position, direction, etc., as those present at the instant of admission of the newcomers. Clearly, this task of increasing \(H_m\) while keeping \(H\) constant asks for superhuman skills, and thus we may employ another demon, whom I shall call the “external demon,” and whose business it is to admit to the system only those elements whose state complies with the conditions of, at least, constant internal entropy. As you certainly have noticed, this demon is a close relative of Maxwell’s demon, only that today these fellows don’t come as good as they used to come, because before 1927 they could watch an arbitrarily small hole through which the newcomer had to pass and could test his momentum with arbitrarily high accuracy. Today, however, demons watching closely a given hole would be unable to make a reliable momentum test, and vice versa. They are, alas, restricted by Heisenberg’s uncertainty principle.

Having discussed the two special cases where in each case only one demon is at work while the other one is chained, I shall now briefly describe the general situation where both demons are free to move, thus turning to our general eq. (5) which expressed the criterion for a system to be self-organizing in terms of the two entropies \(H\) and \(H_m\). For convenience this equation may be repeated here, indicating at the same time the assignments for the two demons \(D_i\) and \(D_e\): $$ H \times \frac{δH_m}{δt} > H_m \times \frac{δH}{δt} \tag{5} $$ $$ \text{where:} \\[10pt] \begin{align} H \quad &\text{= Internal demon’s results} \\[10pt] \frac{δH_m}{δt} \quad &\text{= External demon’s efforts} \\[10pt] H_m \quad &\text{= External demon’s results} \\[10pt] \frac{δH}{δt} \quad &\text{= Internal demon’s efforts} \end{align} $$

From this equation we can now easily see that, if the two demons are permitted to work together, they will have a disproportionately easier life compared to when they were forced to work alone. First, it is not necessary that \(D_i\) is always decreasing the instantaneous entropy \(H\), or that \(D_e\) is always increasing the maximum possible entropy \(H_m\); it is only necessary that the product of \(D_i\)’s results with \(D_e\)’s efforts is larger than the product of \(D_e\)’s results with \(D_i\)’s efforts. Second, if either \(H\) or \(H_m\) is large, \(D_e\) or \(D_i\) respectively can take it easy, because their efforts will be multiplied by the appropriate factors. This shows, in a relevant way, the interdependence of these demons: if \(D_i\) has been very busy in building up a large \(H\), \(D_e\) can afford to be lazy, because his efforts will be multiplied by \(D_i\)’s results, and vice versa. On the other hand, if \(D_e\) remains lazy too long, \(D_i\) will have nothing to build on and his output will diminish, forcing \(D_e\) to resume his activity lest the system cease to be a self-organizing system.
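To illustrate this reading of criterion (5) with some invented numbers (mine, not von Foerster’s): the inequality can hold even when the internal demon is so sloppy that \(H\) is momentarily rising, provided the products come out right.

```python
# each tuple is (H, Hm, dH/dt, dHm/dt) — hypothetical values for illustration only
cases = [
    (50, 100, -1.0, 0.0),   # internal demon works alone: criterion holds
    (50, 100,  0.0, 3.0),   # external demon works alone: criterion holds
    (90, 100,  0.5, 1.0),   # H rising, but a large H multiplies D_e's effort: holds
    (10, 100,  0.5, 1.0),   # same efforts with a small H: criterion fails
]
for H, Hm, dH, dHm in cases:
    print(H * dHm > Hm * dH)   # criterion (5): H * dHm/dt > Hm * dH/dt
```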

In addition to this entropic coupling of the two demons, there is also an energetic interaction between the two which is caused by the energy requirements of the internal demon who is supposed to accomplish the shifts in the probability distribution of the elements comprising the system. This requires some energy, as we may remember from our previous example, where somebody has to blow up the little balloons. Since this energy has been taken from the environment, it will affect the activities of the external demon who may be confronted with a problem when he attempts to supply the system with choice-entropy he must gather from an energetically depleted environment.

In concluding the brief exposition of my demonology, a simple diagram may illustrate the double linkage between the internal and the external demon which makes them entropically \((H)\) and energetically \((E)\) interdependent.

For anyone who wants to approach this subject from the point of view of a physicist, and who is conditioned to think in terms of thermodynamics and statistical mechanics, it is impossible not to refer to the beautiful little monograph by Erwin Schrödinger, What is Life?[4] Those of you who are familiar with this book may remember that Schrödinger particularly admires two remarkable features of living organisms. One is the incredibly high order of the genes, the “hereditary code-scripts” as he calls them, and the other is the marvelous stability of these organized units, whose delicate structures remain almost untouched despite their exposure to thermal agitation by being immersed—e.g. in the case of mammals—in a thermostat set to about 310 K (37°C, 98.6°F).

In the course of his absorbing discussion, Schrödinger draws our attention to two different basic “mechanisms” by which orderly events can be produced: “The statistical mechanism which produces order from disorder and the … [other] one producing ‘order from order’.”

While the former mechanism, the “order from disorder” principle, is merely referring to “statistical laws” or, as Schrödinger puts it, to “the magnificent order of exact physical law coming forth from atomic and molecular disorder,” the latter mechanism, the “order from order” principle, is, again in his words: “the real clue to the understanding of life.” Already earlier in his book Schrödinger develops this principle very clearly and states: “What an organism feeds upon is negative entropy.” I think my demons would agree with this, and I do too.

However, on recently reading through Schrödinger’s booklet, I wondered how it could happen that what I would consider a “second clue” to the understanding of life, or—if it is fair to say—of self-organizing systems, escaped his keen eyes. Although the principle I have in mind may, at first glance, be mistaken for Schrödinger’s “order from disorder” principle, it has in fact nothing in common with it. Hence, in order to stress the difference between the two, I shall call the principle I am going to introduce to you presently the “order from noise” principle. Thus, in my restaurant, self-organizing systems do not only feed upon order, they will also find noise on the menu.

Let me briefly explain what I mean by saying that a self-organizing system feeds upon noise by using an almost trivial, but nevertheless amusing example.

Assume I get myself a large sheet of permanent magnetic material which is strongly magnetized perpendicular to its surface, and I cut from this sheet a large number of little squares (Fig. 3a). These little squares I glue to all the surfaces of small cubes made of light, unmagnetic material, having the same size as my squares (Fig. 3b). Depending upon which sides of the cubes have the magnetic north pole pointing to the outside, one can produce precisely ten different families of cubes, as indicated in Fig. 4.

Figure 3a: Magnetized square.
Figure 3b: Cube, family I.
Figure 4: Ten different families of cubes (see text).

Suppose now I take a large number of cubes, say, of family I, which is characterized by all sides having north poles pointing to the outside (or family I′ with all south poles), put them into a large box which is also filled with tiny glass pebbles in order to make these cubes float under friction, and start shaking this box. Certainly, nothing very striking is going to happen: since the cubes are all repelling each other, they will tend to distribute themselves in the available space such that none of them will come too close to its fellow-cube. If, in putting the cubes into the box, no particular ordering principle was observed, the entropy of the system will remain constant or, at worst, increase by a small amount.

In order to make this game a little more amusing, suppose now I collect a population of cubes where only half of the elements are again members of family I (or I′), while the other half are members of family II (or II′), which is characterized by having only one side of different magnetism pointing to the outside. If this population is put into my box and I go on shaking, clearly those cubes with the single different pole pointing to the outside will tend, with overwhelming probability, to mate with members of the other family, until my cubes have almost all paired up. Since the conditional probabilities of finding a member of family II, given the locus of a member of family I, have very much increased, the entropy of the system has gone down; hence we have more order after the shaking than before. It is easy to show (see Appendix) that in this case the amount of order in our system went up from zero to $$ R_∞ = \frac{1}{\log_2 (en)}, $$ if one started out with a population density of \(n\) cubes per unit volume.
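A quick computation (my addition) of this pairing result for a few densities shows how modest the gain in order is, which is exactly the point of the next paragraph.

```python
import math

# R after pairing, from the appendix result R = 1 / log2(e*n),
# where n is the population density in cubes per unit volume
for n in (2, 10, 100, 1000):
    print(n, round(1 / math.log2(math.e * n), 3))
# e.g. n = 100 gives R ≈ 0.12 — hardly an impressive amount of order
```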

I grant you that this increase in orderliness is not impressive at all, particularly if the population density is high. All right then, let’s take a population made up entirely of members belonging to family IVB, which is characterized by opposite polarity of the two pairs of those three sides which join in two opposite corners. I put these cubes into my box and you shake it. After some time we open the box and, instead of seeing a heap of cubes piled up somewhere in the box (Fig. 5), you may not believe your eyes, but an incredibly ordered structure will emerge, which, I fancy, may pass the grade to be displayed in an exhibition of surrealistic art (Fig. 6).

Figure 5: Before.
Figure 6: After.

Had I left you ignorant of my magnetic-surface trick and were you to ask me what it is that put these cubes into this remarkable order, I would keep a straight face and answer: The shaking, of course—and some little demons in the box.

With this example, I hope, I have sufficiently illustrated the principle I called “order from noise,” because no order was fed to the system, just cheap undirected energy; however, thanks to the little demons in the box, in the long run only those components of the noise were selected which contributed to the increase of order in the system. The occurrence of a mutation, for example, would be a pertinent analogy in the case where gametes are the systems under consideration.

Hence, I would name two mechanisms as important clues to the understanding of self-organizing systems, one we may call the “order from order” principle as Schrödinger suggested, and the other one the “order from noise” principle, both of which require the co-operation of our demons who are created along with the elements of our system, being manifest in some of the intrinsic structural properties of these elements.

I may be accused of having presented an almost trivial case in the attempt to develop my order from noise principle. I agree. However, I am convinced that I would maintain a much stronger position had I not given away my neat little trick with the magnetized surfaces. Thus, I am very grateful to the sponsors of this conference that they invited Dr. Auerbach,[5] who, later in this meeting, will tell us about his beautiful in vitro experiments on the reorganization of cells into predetermined organs after the cells have been completely separated and mixed. If Dr. Auerbach happens to know the trick by which this is accomplished, I hope he does not give it away. Because, if he were to remain silent, I could recover my thesis that, without some knowledge of the mechanisms involved, my example was not too trivial after all, and self-organizing systems still remain miraculous things.

Appendix

Figure 7

The entropy of a system of given size consisting of \(N\) indistinguishable elements will be computed taking only the spatial distribution of elements into consideration. We start by subdividing the space into \(Z\) cells of equal size and count the number of cells \(Z_i\) lodging \(i\) elements (see Fig. 7a). Clearly we have $$ \begin{align} \sum Z_i &= Z \tag{i} \\[7px] \sum iZ_i &= N \tag{ii} \end{align} $$ The number of distinguishable variations of having a different number of elements in the cells is $$ P = \frac{Z!}{\prod Z_i!} \tag{iii} $$ whence we obtain the entropy of the system for a large number of cells and elements: $$ H = \ln P = Z \ln Z - \sum Z_i \ln Z_i \tag{iv} $$ In the case of maximum entropy \(\bar{H}\) we must have $$ δH = 0 \tag{v} $$ observing also the conditions expressed in eqs. (i) and (ii). Applying the method of Lagrange multipliers we have from (iv) and (v) with (i) and (ii): $$ \begin{align} \sum (\ln Z_i + 1)\, δZ_i &= 0 && 1 \\ \sum i\, δZ_i &= 0 && \beta \\ \sum δZ_i &= 0 && -(1 + \ln \alpha) \end{align} $$ Multiplying with the factors indicated on the right and summing the three equations, we note that this sum vanishes if each term vanishes identically. Hence: $$ \ln Z_i + 1 + i\beta - 1 - \ln \alpha = 0 \tag{vi} $$ whence we obtain that distribution which maximizes \(H\): $$ Z_i = \alpha \text{e}^{-i\beta} \tag{vii} $$ The two undetermined multipliers \(\alpha\) and \(\beta\) can be evaluated from eqs. (i) and (ii): $$ \alpha \sum e^{-i\beta} = Z \tag{viii} $$ $$ \alpha \sum ie^{-i\beta} = N \tag{ix} $$ Remembering that $$ - \frac{δ}{δ\beta} \sum e^{-i\beta} = \sum ie^{-i\beta} $$ we obtain from (viii) and (ix) after some manipulation: $$ \alpha = Z( 1-e^{ \frac{-1}{n} } ) \approx \frac{Z}{n} \tag{x} $$ $$ \beta = \ln \Bigl( 1 + \frac{1}{n} \Bigr) \approx \frac{1}{n} \tag{xi} $$ where \(n\), the mean cell population or density \(N/Z\), is assumed to be large in order to obtain the simple approximations. In other words, cells are assumed to be large enough to lodge plenty of elements.

Having determined the multipliers \(\alpha\) and \(\beta\), we have arrived at the most probable distribution which, after eq. (vii), now reads: $$ Z_i = \frac{Z}{n}e^{-i/n} \tag{xii} $$ From eq. (iv) we obtain at once the maximum entropy: $$ \bar{H} = Z \ln (en). \tag{xiii} $$ Clearly, if the elements are supposed to be able to fuse into pairs (Fig. 7b), we have $$ \bar{H^{\prime}} = Z \ln (en/2). \tag{xiv} $$ Equating \(\bar{H}\) with \(H_m\) and \(\bar{H^{\prime}}\) with \(H\), we have for the amount of order after fusion: $$ R = 1 - \frac{Z \ln (en/2)}{Z \ln (en)} = \frac{1}{\log_2(en)} \tag{xv} $$
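As a numeric cross-check of eqs. (x)–(xiii) (my own addition, with an arbitrary choice of \(Z = 1000\) cells and density \(n = 50\)), one can build the distribution \(Z_i = \alpha e^{-i\beta}\) from the exact multipliers and compare its entropy with the approximation \(Z \ln(en)\).

```python
import math

Z, n = 1000, 50.0                          # cells and mean occupancy N/Z (assumed values)
beta = math.log(1 + 1 / n)                 # eq. (xi), exact form
alpha = Z * (1 - math.exp(-beta))          # eq. (x), exact form
Zi = [alpha * math.exp(-i * beta) for i in range(10_000)]          # eq. (vii)
H_bar = Z * math.log(Z) - sum(z * math.log(z) for z in Zi if z > 0)  # eq. (iv)
print(round(H_bar, 1), round(Z * math.log(math.e * n), 1))
# the two values agree to a fraction of a percent, as the large-n approximation promises
```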

Discussion

Lederman
(U. of Chicago)

I wonder if it is true that in your definition of order you are really aiming at conditional probabilities rather than just an order in a given system, because for a given number of elements in your system, under your definition of order, the order would be higher in a system in which the information content was actually smaller than for other systems.

von Förster

Perfectly right. What I tried to do here, in setting up a measure of order, was to suggest redundancy as a measure. It is easy to handle. From this I can derive two statements, with respect to \(H_{\text{max}}\) and with respect to \(H\). Of course, I don’t mean this is a universal interpretation of order in general. It is only a suggestion which may be useful or may not be useful.

Lederman

I think it is a good suggestion but it is an especially good suggestion if you think of it in terms of some sort of conditional probability. It would be more meaningful if you think of the conditional probabilities as changing so that one of the elements is singled out for a given environmental state as a high probability.

von Förster

Yes, if you change \(H\), there are several ways one can do it. One can change the conditional probability. One can change also the probability distribution which is perhaps easier. That is perfectly correct.

Now the question is, of course, in which way can this be achieved? It can be achieved, I think, if there is some internal structure of those entities which are to be organized.

Lederman

I believe you can achieve that result from your original mathematical statement of the problem in terms of H and \(H_{\text{max}}\), in the sense that you can increase the order of your system by decreasing the noise in the system which increases \(H_{\text{max}}\).

von Förster

That is right. But there is the possibility that we will not be able to go beyond a certain level. On the other hand, I think it is favorable to have some noise in the system. If a system is going to freeze into a particular state, it is inadaptable and this final state may be altogether wrong. It will be incapable of adjusting itself to something that is a more appropriate situation.

Lederman

That is right, but I think there is a parallelism between your mathematical approach and the model you gave in terms of the magnets organizing themselves: in the mathematical approach you can increase the information content of the system by decreasing the noise, and similarly in your system, where you saw the magnets organizing themselves into some sort of structure, you were also decreasing the noise in the system before you reached the point where you could say, ah-ha, there is order in that system.

von Förster

Yes, that is right.

Mayo
(Loyola Univ.)

How can noise contribute to human learning? Isn’t noise equivalent to nonsense?

von Förster

Oh, absolutely, yes. Well, the distinction between noise and nonsense, of course, is a very interesting one. It usually refers to a frame of reference. I believe that, for instance, if you would like to teach a dog, it would be advisable not to do only one and the same thing over and over again. I think what should be done in teaching or training, say, an animal, is to allow the system to remain adaptable, to ingrain the information in such a way that the system has to test, in every particular situation, the hypothesis of whether it is working or not. This can only be obtained if the nature into which the system is immersed is not absolutely deterministic but has some fluctuations. These fluctuations can be interpreted in many different forms. They can be interpreted as noise, as nonsense, as things depending upon the particular frame of reference we talk about.

For instance, when I am teaching a class and I want the students to remember something particularly well, I usually come up with an error, and they point out, “You made an error, sir.” I say, “Oh yes, I made an error,” but they remember this much better than if I had not made an error. And that is why I am convinced that an environment with a reasonable amount of noise may not be too bad if you really want to achieve learning.

Reid
(Montreal Neur. Inst.)

I would like to hear Dr. von Förster’s comment on the thermodynamics of self-organizing systems.

von Förster

I think Prigogine and others have approached the open system problem. I myself am very interested in many different angles of the thermodynamics of self-organizing systems because it is a completely new field.

If your system contains only a thousand, ten thousand, or a hundred thousand particles, one runs into difficulties with the definition of temperature. For instance, in a chromosome or a gene you may have a complex molecule involving about \(10^6\) particles. Now, how valid is the thermodynamics of \(10^6\) particles, when the theory was originally developed for \(10^{23}\) particles? If this reduction of about \(10^{17}\) is valid in the sense that you can still talk about “temperature,” then that is one way you may talk about it. There is, of course, another approach to which you may switch, and that is information theory. However, there is one problem left: you don’t have Boltzmann’s constant in information theory, and that is, alas, a major trouble.

Footnotes

  1. This was observed by Wittgenstein, although he applied this consideration to the principle of mathematical induction. However, the close relation between the induction and the relativity principle seems to be quite evident. I would even venture to say that the principle of mathematical induction is the relativity principle in number theory.

  2. G. A. Pask, The natural history of networks. This volume, p. 232.

  3. C. Shannon and W. Weaver, The Mathematical Theory of Communication, p. 25, University of Illinois Press, Urbana, Illinois (1949).

  4. E. Schrödinger, What is Life? pp. 72, 80, 82, Macmillan, New York (1947).

  5. R. Auerbach, Organization and reorganization of embryonic cells. This volume, p. 101.

Heinz von Förster
