This shift from a largely philosophical to a more descriptive usage of the term emergence has been both salutary and misleading. One consequence is that the meanings of the term have diversified and proliferated so that what it refers to in one field may be quite different than in another, with each of the associated concepts of novelty, unpredictability, ascent in scale, synergy, and so on, serving as the principal feature being highlighted. But a more troublesome consequence of identifying exemplars of emergent relationships in physical and computational processes is that it suggests that all examples of phenomena might be accounted for as members of this (albeit diverse) class of processes. This motivates a tendency to presume that although these specific forms do not exhibit obvious ententional properties, such properties can nevertheless be explained, or else explained away, in these terms. As we will see in the following chapters, this presumption is premature, and in any case it still evades questions about the origins and physical nature of ententional phenomena. Nevertheless, by unambiguously modeling the possibility of the emergence of global order from lower-level chaos, this work provides an important "intuition pump"22 to help conceive of how this additional emergent step might be possible. As we will see, although this conception of emergent processes does not directly address ententional issues, it may provide a critical missing link from mechanism to teleology.
PROCESSES AND PARTS.
One of the most significant consequences of the growing interest in self-organizing physical processes and complex computational systems is that it has shifted attention to dynamical rather than structural characterizations of emergence. However, process approaches don't easily fit into the classic categories of emergence theories. This is because they require rethinking two of the central concepts typically used to define emergence: the part/whole (mereological) distinction and the supervenience relation. If we dispense with these, will we still have something we can call emergence? Actually, this may be the only way to save the concept.
At the end of a paper discussing his process approach to emergence, Mark Bickhard boldly asserts: "Mental states do not exist, any more than do flame states-both are processes."23 This may be a bit too extreme, but it drives home a crucial point: these phenomena consist in the special character of the transformations between states, not in the constitution of things at any slice in time. So, trying to build a theory of the emergence of living or mental processes based on a synchronic conception of emergence theory is bound to fail from the beginning. The phenomena we are interested in explaining are intrinsically historical and dynamic. Being alive does not merely consist in being composed in a particular way. It consists in changing in a particular way. If this process of change stops, life stops, and unless all molecular processes are stopped as well (say by quick-freezing), the cells and molecules distinctive to it immediately begin to degrade. We can of course dissect organisms and cells, and isolate and study the molecules, molecular complexes, and chemical reactions that they consist of; but as important as this is to the understanding of the living process, it is the organization of that process-involving all those components and component processes-that is what we ultimately hope to be able to reconstruct once we've understood a significant fraction of these details. Yes, the process involves those component parts and subordinate processes; but as was pointed out in the last chapter, these tens of thousands of molecules in vast interaction networks reciprocally synthesize and organize each other. None are parts, as we would understand the concept from an engineering perspective. In an organism, the very notion of a part is process-dependent.
This should be even more obvious with respect to mind. No one would seriously suggest that the process of thought could be explained solely in terms of neural structures or neurotransmitter molecules. Of course, these are critical to the account, but without understanding the processes that they are enmeshed within, we are merely doing anatomy. What matters is the process, and this is hierarchically complex, though not mereological in any simple sense. Nevertheless, we find it hard to resist part/whole simplifications.
Consider functional brain imaging, such as positron emission tomography (PET) or functional magnetic resonance imaging (fMRI). Although these are some of the most exciting new techniques available, and have provided an unparalleled window into brain function, they also come with an intuitive bias: they translate the correlates of cognitive processes into stationary patterns. These techniques identify regions with particularly elevated metabolism, compared to control states, during a given mental task. There are good reasons to believe that local oxygen and glucose consumption are correlated with local signal-processing demands, and that this is a clue to the degree to which different brain areas are involved in the handling of different kinds of cognitive tasks. But it would be a serious mistake to imagine that the function in question is in any sense "located" in the identified "hot spots," or to believe that a metabolic "snapshot"24 would be in any sense a simple correlate to what is involved even at a gross level in the performance of that activity.
Even the simplest conscious cognitive act inevitably involves rapidly shifting, fleeting, and low-level metabolic changes across dozens of diverse brain regions-some increasing and some decreasing in metabolism at different moments in the process-in order to be completed. Thus an averaged scan, collapsed to a single image, inevitably erases information about this pattern of change. And as two excerpts from pieces of music can analogously demonstrate, regions whose average metabolism is constant across different tasks (e.g., control and test) can have the same energy and yet strikingly different "melodies." Knowing the neuronal and molecular substrates, their potential interactions, and the locations of intense activities (indicated by metabolic change) are essential bits of evidence, but far from sufficient for an explanation. Ultimately, we will at least need to develop methods to track the second-by-second regional changes, and eventually even the systemic activities of thousands of interacting neurons, in order to have a methodology appropriate to the challenge. In other words, what we need to know are the relationships between brain system processes and cognitive processes.
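To make the averaging problem concrete, here is a minimal Python sketch (using invented activation numbers, not actual imaging data) of how two task conditions can yield identical mean metabolic levels over an interval while following entirely different time courses; an averaged image collapses exactly the temporal structure that distinguishes them.

```python
from statistics import mean

# Hypothetical activation time courses for one brain region during two
# tasks, sampled at eight successive moments (arbitrary units).
task_a = [2, 4, 6, 8, 8, 6, 4, 2]   # ramps up, then back down
task_b = [8, 2, 8, 2, 8, 2, 8, 2]   # rapid alternation

# The averaged "snapshot" is identical for both tasks...
print(mean(task_a), mean(task_b))   # both 5

# ...but the moment-to-moment pattern of change (the "melody") differs.
print([b - a for a, b in zip(task_a, task_a[1:])])  # [2, 2, 2, 0, -2, -2, -2]
print([b - a for a, b in zip(task_b, task_b[1:])])  # [-6, 6, -6, 6, -6, 6, -6]
```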
A process conception of emergence both resolves certain conceptual dilemmas associated with classic synchronic conceptions of emergence and poses new conceptual problems that were previously irrelevant. As noted earlier, Bickhard undermined the force of some of the most devastating criticisms of emergence theories by pointing out that the particulate assumptions of substance metaphysics are incompatible with the ultimate inseparability of process and substance. If we recast the problem in terms of process and organization instead of parts and wholes, the concept of emergence no longer suffers from the criticisms of causal redundancy and explanatory circularity. Because there are no material entities that are not also processes, and because processes are defined by their organization, we must acknowledge the possibility that organization itself is a fundamental determinant of physical causality. At different levels of scale and compositionality, different organizational possibilities exist. And although there are material properties that are directly inherited from lower-order component properties, it is clear that the production of some forms of process organization is only expressed by dynamical regularities at that level. So the emergence of such level-specific forms of dynamical regularity creates the foundation for level-specific forms of physical influence.
Of course, this does not in itself solve the problem of explaining the nature of the transitions we are interested in. It only argues for their plausibility. As Bickhard himself notes, the shift to a process perspective merely clears some serious roadblocks standing in the way of understanding emergent phenomena. We still need to explain what kind of change in process organization actually constitutes any given emergent transition.
This also signals a paradigm shift in what we take to be an emergent phenomenon. In fact, a process approach forces us to abandon the two pillars of classic emergence theory: supervenience and mereology. For example, the philosopher Timothy O'Connor argues that composition is not well defined for processes.25 This is because components may be continually changing (as in, say, a vortex or an organism) while the emergent attributes are persistent. Although the component elements in such processes matter, the dynamics of their transitions matter more; it is this pattern of change that is critical.
Classically, particles are conceived as specific individuated things, with specific location in space and time, specific extension into these dimensions, and specific properties. As we have seen, quantum physics appears to undermine the lowest level of particulateness, and thus denies us the ability to ultimately ground emergence on this concept, even if at higher levels we feel pretty confident that individual objects and events can be disambiguated from one another. Thus individual atoms, molecules, cells, and bodies seem fairly unambiguously distinguishable and can be assigned unambiguous causal properties with respect to each other. And yet, even classic conceptions of emergence have an irreducible temporal aspect, because in most cases there was a time before and after an emergent transition. Many specific emergent features of the world have not always been around. Depending on what emergences we consider-for example, Morowitz's list of cosmic emergence events, or maybe just the origins of life and mind-pretty much all of them are associated with a before and after.
The role of temporal transition is relevant for the analogue of fusion as it applies to the molecules constituting an organism. The existence of these molecules is not explained without at least implicitly describing their origins in a prior phase of the process. The majority of biomolecules exhibit process-dependent properties in the sense that they are reciprocally producers and products, means and ends, in a network of synthetic pathways. This means that although there appears to be a supervenience relationship between the molecules and certain organism-level traits (e.g., collagen molecules contributing elasticity to skin), this appearance is misleading. Supervenience is defined by the ontological priority of the components compared to the higher-level features they generate. But in this case, this hierarchic ontological dependency is tangled in what Douglas Hofstadter has called "strange loops."26 Over the course of evolution, it is likely that there were modifications to essentially every molecular component found in a living organism. These modifications were produced by changes in the synthetic processes, which in turn were produced by changes in the molecules involved, which in their turn were produced by changes in the synthetic processes, and so on. Even though biomolecules are synthesized anew in each organism, the synthetic processes that produced them owe their existence to being the object of natural selection in long-past epochs. Processes dependent on processes dependent on processes.
Bickhard's and O'Connor's defenses of a process conception of emergence only clear away assumptions that have been blocking progress. To accept this critique is to be enjoined to provide an alternative to these more classic conceptions of emergence that nonetheless achieves what they did not. So we must begin to consider how one would construct a theory of emergent processes that articulates the dynamical alternatives to the classic defining properties of mereological organization, supervenience, and causal inflection. These criteria define what the emergence of a new level of process entails. It specifically requires developing new criteria for defining hierarchically distinct dynamic regimes and their relationship to lower-level dynamic regimes. Although the relationship is in this sense loosely analogous to a supervenience relationship, in that higher-level dynamical regimes require lower-level dynamical regimes, it cannot be understood in terms of simple part/whole compositionality. Nor is the existence of lower-level properties explainable irrespective of higher-level properties. Higher-order processes are more accurately "nested" within lower-level dynamics as distinctive special cases. Thus, for example, the circular flow of a whirlpool is a special case of laminar flow of liquid and occurs as a local disturbance of this general flow pattern.
What must be explained, then, is how the organization of a process can be a locus of a distinctive mode of causality that is independent of the patterns of causal interaction among individual components of that process. Beneath this question lies another that is even more basic: Isn't organization merely a description of some relationships between components? Indeed, isn't it inevitably a simplification and an abstraction? We must still ask whether there is something about the organization of a process that is causally relevant over and above lower-level processes and their causal influences. If organization is not an intrinsic property, but rather merely a comparative assessment or idealization, it is rendered physically epiphenomenal.
This is relevant to the adequacy of reductionism for analyzing processes whose components are ultimately processes as well. Classic conceptions of emergence were specifically formulated as critiques of reductionism and eliminativism, and in particular they contested the priority of the very small. But framing emergence in dynamical terms does not necessarily favor either bottom-up or top-down priority of causal influence. As in the case of the interdependence of organization and compositionality of the biomolecules that constitute an organism, reductionistic analysis is both informative and yet only fully explanatory in the context of an evolved organism and its relevant environments. Reductive analysis that assumes this larger systemic context as a kind of unanalyzed boundary condition is in fact the typical case in the biological sciences. This is not simple reductionism, since the emergent features are already assumed.
In conclusion, when it comes to the two classes of phenomena that we would be most willing to call emergent in a robust sense-life and mentality-the dependency between the dynamical regularities that bring them into being at different levels of scale is neither supervenient nor mereological. The nested and interdependent forms of process organization that characterize these ententional phenomena are not compositional in any clear sense, and do not yield to simple decompositional analysis. Processes are not decomposable into other simpler processes. Or, to put this in simpler terms: processes don't have other processes as their parts. This may explain the failure of classic emergence theories to provide a coherent account of the phenomena they were intended to explain. Although the exploration of non-linear and far-from-equilibrium dynamical systems has recently provided the impetus to reframe emergence concepts in process terms, it has not provided an account of the ententional properties that originally inspired the development of emergence theories.
This is seen by many as their strength, since the elimination of ententional assumptions is often thought to be a fundamental methodological imperative in the natural sciences. If this is in fact a complete and comprehensive paradigm within which emergent phenomena can be explained, then indeed the classic project will have failed. For although it is possible to explain much of the complexity of spontaneous form production and hierarchical discontinuities of dynamical levels within natural and living systems in this way, their ententional properties must be treated as mere descriptive heuristics.
There is, however, another way to view the contribution of these dynamical systems insights. The shift to a dynamical perspective provides a way to explain the spontaneous generation of organization, despite the relentless dissolution of orderliness that is implicit in the second law of thermodynamics. Although dynamical order is not in itself ententional in any sense, all ententional phenomena are defined with respect to order, and so depend on the presence of organized processes for their existence. Perhaps the emergence of dynamical regularity from a context of far-from-equilibrium thermodynamics can serve both as a stepping stone contributing to the emergence of ententional processes and as a model of the logic of emergent transitions in general.
6.
CONSTRAINT.
. . . while, in the past, biologists have tended to think of organization as something extra, something added to the elementary variables, the modern theory, based on the logic of communication, regards organization as a restriction or constraint.
-W. ROSS ASHBY, 1962.
HABITS.
What emerges? New laws of physics? New forms of matter? Clearly, even if there is considerable diversity of opinion about what should be considered emergent, most if not all of the candidate examples of emergent transitions involve some radical and all-but-discontinuous reorganization of physical processes, but without a departure from what we might describe as physics-as-usual. Consider the two transitions that will most occupy us in the remainder of this book: the transitions from non-life to life and from insentient mechanism to mind. Each involves an unprecedented alteration in the pattern of the way things change-one that inverts some previous nearly exceptionless pattern of causal influences. Of course, a causal pattern isn't any specific individual change. It is something like a general tendency that is exhibited by a vast number of events. Though it needn't be some new "law" of physics, or even a violation of existing physical laws, in these cases at least there is a consistent and predictable expression of what would be exceedingly unlikely prior to this transition. As in the case of the second law of thermodynamics, a tendency to change in a distinctive asymmetric way can be nearly ubiquitous, and yet at the same time not be universal. As a first pass at a general definition, then, we can say an emergent transition is a change in conditions that results in a fundamental shift in the global symmetry of some general causal tendency, despite involving the same matter, energy, and physical laws. But what sort of thing is a global causal tendency that is not a law?
The American philosopher Charles Sanders Peirce, writing at the end of the nineteenth century, argued that, at base, change is spontaneous and singular, and thus intrinsically uncorrelated: a metaphysical assumption he termed tychism.2 And yet, he also argued that there is a tendency of things to fall into habits. Habit was Peirce's general term referring to regularities of behavior arising in both physical and organic contexts. The term could thus apply equally to the tendency for hurricanes to circle counterclockwise in the northern hemisphere and the yearly migration of Monarch butterflies between the Yucatan and New England. Peirce did not offer an explicit account of why otherwise uncorrelated events might ever tend to become correlated with one another, and thus habitual, but he postulated the existence of a universal habit of habit formation, considering it to be fundamental to the nature of things. In an extreme framing of this principle, he even suggested that what we think of as fundamental physical laws might themselves have evolved from less regular, less habitual tendencies to change. He characterized this most fundamental property of nature with the phrase, "habits tend to beget habits." One way to characterize the point of this chapter is that it is an attempt to give this notion a more precise physical characterization.
This is a somewhat archaic use of the term habit, as compared to our modern colloquial use, which typically characterizes repetitive human or animal behaviors. But even this connotation is relevant, since we often consider something to be a habit even though it only periodically exhibits a precisely repetitive characteristic. Thus a person's smoking habit or a habit of clearing one's throat when talking may only periodically, and somewhat irregularly, be expressed. A "habit" in this sense is a behavioral bias or predisposition that may or may not be overtly expressed, but is always a tendency. This is an important caveat, because it will allow us to consider a much more general class of only quasi-regular phenomena that don't easily correspond to notions of order, pattern, or organization. Peirce, too, had a statistical notion of habit in mind. For example, he metaphorically described the invariant regularities of inorganic matter as processes that are "hidebound with habit" in comparison to organic matter.
Peirce's concept of habit was the foundation for his somewhat iconoclastic defense of metaphysical realism. Realism is a classic metaphysical viewpoint which argues that ideal forms, regular tendencies, types of things, and general properties of phenomena-not just specific singular events, objects, or effects-are important in determining what happens in the world. Realism is generally traced to the philosophy of Plato, who famously argued that the material world only imperfectly reflects the influence of the ideal universal forms. These forms were at once both perfect and unchanging, while their physical manifestations were inevitably flawed and impermanent. Though Peirce's version of realism did not postulate the existence of a realm of ideal forms, he nevertheless argued that physical regularities of a general type-habits-can influence the creation of other physical regularities of the same or different type, despite being embodied in diverse substrates. He argued that regularity itself, rather than any specific materially regular feature, is what is causally most relevant.
The problem of the physical influence of general types dates back to the late Middle Ages and the question of whether general terms referring to types of things rather than individual things (sometimes referred to as "tokens" of a type) actually refer to anything "real," since no two examples of things of the same type are truly identical. The issue was whether types could have any real influence in the world except as a consequence of the specific features of individual exemplars of the type. If being a type of thing only mattered to an observer interested in clustering phenomenal details into classes or collections for cognitive or communicative convenience, then being of that type would effectively be epiphenomenal, and of no physical consequence. But more to the point, can we really assume that the laws of physics and chemistry are any more than idealized descriptions, since they apply across individual instances? And what about properties of things like hardness or opacity?
Ironically, realism claimed that general properties, laws, and physical dispositions to change in such-and-such a way are fundamental facts about reality irrespective of anyone's having considered them, whereas the view that these generalizations are merely conveniences of thought, abstracted from observation and otherwise epiphenomenal in the world of physical cause and effects, was called nominalism.
Nominalism arose as a critique of realism. In its most extreme form, nominalism claims that general types of physical properties, like being hard, liquid, reflective, living, or general dispositions of behavior, are all ultimately epiphenomenal. Accordingly, they contribute nothing to explain causal properties and merely provide a way of mentally clustering individually diverse physical interactions. They are according to this view mere projections of our descriptive simplifications. However, each utterance of such a term, that is, the name of a type, is itself an instance of a discrete physical phenomenon capable of having a discrete effect on the world, even if its general content corresponds to no real thing. This is the source of the term describing this view, where nom- refers to "name." In effect, then, only the name of the general type is a real part of the world. Peirce characterized the core issue in terms of "whether laws and general types are figments of the mind or are real." A general type, in Peirce's metaphysics, is not merely an aspect of mind, but is "the most important element of being." It is the ultimate locus of physical causality, so discovering these general features should be "the end and aim of knowledge."3 The concept of emergence assumes that new general types of properties and physical dispositions-new causal habits of nature-can arise de novo. Hence, this classic unresolved debate is fundamental to questions of emergence.
Realism and nominalism were serious issues for Scholastic thinkers struggling to make sense of theological problems, because ideals and norms, such as characterize the true, the good, and the beautiful, were not specific instances but general properties. The debate is nearly as old as Western philosophy. The existence of Plato's ideal forms and properties and their presumed physical influence was challenged even by Aristotle, who argued that there is a necessary link between matter and its formal properties. While not denying the efficacy of formal influences (implicit in his notions of formal cause), Aristotle nevertheless argued that the forms, regular tendencies, and general properties were always embodied materially and specifically. Even the end-directed tendencies (final causes) of living organisms were literally embodied in their material constitution. His concept of an entelechy was for him inseparable from the material of that body. This compromise resisted serious challenges until the late medieval period.
The beginning of a serious challenge to realism can probably be traced to William of Occam. Occam's interest in developing principles of precise logic led him to argue that although different material objects exhibit characteristic forms and properties, the similarities among classes of these are solely the result of how the human mind ("soul" in his terminology) applies the same mental sign (i.e., distinct concept) to aspects of these objects, due to limitations in mental capacity. Like the words themselves, he argues, these mental signs are distinct and individual, but the general aspects and properties of things to which they refer do not exist.
Basically, Occam considered it mistaken to assume that universal properties exist independent of specific individual instances. Mental signs are themselves singular individual entities, and every instance to which these are applied is a singular and distinct individual case. There is no general feature that can be understood without being singularly embodied. There are only things to which these signs apply. So, while mental signs apply plurally, there is no unambiguous general property in the world that corresponds to a given mental sign. Most scholars point out that Occam did not deny the possibility that there might be actual universal properties in the world, but nevertheless felt that the existence of words referring to them did not guarantee their existence. The words may only reflect the finiteness and limitations of the mind. Although this does not directly follow from his famous exhortation to eliminate redundant hypotheses in favor of the simplest or fewest-memorialized in what came to be called Occam's razor-it is consistent with it.
The reductionistic assumptions characteristic of modern science exemplify an analogous denial of any simple notion of general realism by treating all properties of things as reducible to properties of their parts and their interactions. Curiously, however, modern science remains agnostic about this assumption in other respects. For example, the acceptance of general physical laws as determining the way things can change and physically affect each other can be taken to imply that at least these general laws have some kind of real efficacy, independent of the physical events that exhibit them.
It would take us too far afield to review and critique the many subtleties of argument that have been proposed on both sides of this long and complex philosophical debate, and yet it should be obvious that some form of realism must hold for an emergentist view of the physical world to be accurate. For an emergence theory to offer anything more than a descriptive reframing of events, it must explain how general regularities and dispositions of causal interaction can be responsible for which particular causal interactions occur and which don't, over and above any particular physical antecedent state of things. This is an issue that ultimately cannot be ignored. As we've seen above, the force of the critique of the efficacy of general types derives from a particular set of Platonic assumptions about what constitutes something general, and so this apparent dilemma may dissolve if these assumptions can be dispensed with.
REDUNDANCY.
To reexamine these assumptions of realism, let's consider what is implied by Peirce's concept of habit. Habits and patterns are an expression of redundancy. Redundancy is a defining attribute of dynamic organization, such as the spiraling of galaxies, the symmetries defining geometric forms such as polygons, and the global regularities of physical processes such as are expressed by the second law of thermodynamics. Each is a form of organization in which some process or object is characterized by the repetition of similar attributes. But this characterization of organization in terms of repetition or similarity brings up another long-standing philosophical debate. Consider an individual whirlpool in a stream. What distinguishes its individuality and continuity?
Obviously, the individuality of the whirlpool is not determined by its material constitution. The whirlpool is composed of different water molecules, moment to moment, and none remain more than a few seconds. What about the pattern of the movement of water molecules through the whirlpool? An individual whirlpool is easily distinguished from the surrounding flowing water because of its roughly circular symmetry and the general movement of water around a center of rotation. And it is the persistence of both this circular symmetry and the locus of the rotational center that we implicitly refer to when describing this flow pattern as the same whirlpool moment to moment. But of course, on closer inspection, an actual whirlpool is slightly irregular, and this irregularity of shape is constantly changing. Moreover, if we were to explore the flow of water through an individual whirlpool (e.g., by injecting a stream of dye just upstream or tracking the movement of individual molecules within the flow), we would probably discover that some of the water even passes more or less directly through the whirlpool and does not follow a circular or spiraling path at all. And if we were able to examine it in even more precise submicroscopic molecular detail, it is likely that we would find that no individual water molecule ever actually follows a precise, circularly symmetric path. Each will be differently deviant in its trajectory from this ideal pattern.
Is there anything that is constant-anything that we can use to define something as the same whirlpool moment to moment, other than a general tendency that we notice only because we ignore these many details? Indeed, it might be argued that defining a whirlpool as something stable and persistent is merely an artifact of having failed to attend to these many differences. There really is nothing quite the same, and nothing exactly regular in the whirlpool, from one moment to the next. So, is the regularity on which we base its individuality merely a simplification due to descriptive abstraction?
This example is of course generalizable. No two instances of a general category of dynamical organization-whether two whirlpools in water, two members of the same species, or even "identical" twins-are ever truly identical. Furthermore, similarities of an even more general sort, such as characterize the commonalities between whirlpools in water and in spiral galaxies, may only have very abstract characteristics in common. In all cases, similarity is assessed by ignoring differences at some level of detail, and in some cases by ignoring almost all differences, including vast differences of scale (as in the spiraling of water and of stars). So describing both of these dynamical objects as exemplars of the same general type of phenomenon is an account that only makes sense at a very high level of abstraction. Does this mean that there is nothing in common about the causes of these similar-appearing processes? Is the rough circular symmetry of these two processes only a commonality in the mind of an observer, but otherwise of no causal consequence? Does the effect of the symmetry of global angular momentum in each have any independent causal relevance over and above the molecular collisions and cohesion in the one and the center of gravitational attraction in the other? Or is this apparent commonality also merely a descriptive feature imposed by external analysis and descriptive simplification? In other words, shouldn't we stop thinking of the spiraling as being "out there" when it is just a case of observational simplification? Isn't this just a result of what we ignore?
Of course, a description is not a physical attribute of the thing being described. Comparison to some ideal type of geometric object, such as a regular spiral, is merely something that goes on in the mind. If these similarities are only descriptive glosses, then indeed invoking them as causes of anything is physically irrelevant because it is redundant. What matters are all the many individual physical interactions, not the vague similarities we notice. Analogously, then, if emergence is defined in terms of descriptive abstractions alone-regularities and patterns that are merely projections of our ideal types, mental categories, and formal models-it would seem as though we are identifying emergence in merely descriptive (and thus epistemological) terms, but not causal (ontological) terms. Emergence in this sense would only exist in the mind of the beholder.
When we attribute some general property or regularity to some natural phenomenon, like spiral motion, we have certainly made an assessment based on our experience. The regularities that we focus on to distinguish different types of things are indeed constructions of our perceptual and cognitive apparatus. Our perceptual abilities are sensitive to only a very few of a phenomenon's physical characteristics, and usually we are only attentive to a fraction of the details that we can discern. Does this necessarily imply that these similarities and regularities are somehow mere figments of mind? Can they be mental abstractions and physically relevant causal features of the world at the same time? One reason to think that mental abstraction does not imply physical epiphenomenalism is that mental processes are themselves also dependent on physical processes. To even perceive regularity or pattern, this act of observation must itself be grounded in a habit of mind, so to speak. In other words, to attribute physical regularity to some perceived or measured phenomenon presumes a prior mental regularity or habit with respect to which the physical regularity is assessed. So, unless we grant mentality some other undefined and mysterious means of observational abstraction, claiming that regularities, similarities, and general types are only in the mind but not in the world simply passes the buck, so to speak. Some general tendency must already exist in order to attribute a general tendency to something else.
MORE SIMILAR = LESS DIFFERENT.
Even if we grant that general tendencies of mind must already exist in order to posit the existence of general tendencies outside the mind, we still haven't made any progress toward escaping this conceptual cul-de-sac. This is because comparison and abstraction are not physical processes. To make physical sense of ententional phenomena, we must shift our focus from what is similar or regularly present to focus on those attributes that are not expressed and those states that are not realized. This may at first seem both unnecessary and a mere semantic trick. In fact, despite the considerable clumsiness of this way of thinking about dynamical organization, it will turn out to allow us to dispense with the problem of comparison and similarity, and will help us articulate a physical analogue to the concept of mental abstraction.
The general logic is as follows: If not all possible states are realized, variety in the ways things can differ is reduced. Difference is the opposite of similarity. So, for a finite constellation of events or objects, any reduction of difference is an increase in similarity. Similarity understood in this negative sense-as simply fewer total differences-can be defined irrespective of any form or model and without even specifying which differences are reduced. A comparison of specifically delineated differences is not necessary, only the fact of some reduction. It is in this respect merely a quantitative rather than a qualitative determination of similarity, and consequently it lacks the formal and aesthetic aspects of our everyday conception of similarity.
To illustrate, consider this list of negative attributes of two distinct objects: neither fits through the hole in a doughnut; neither floats on water; neither dissolves in water; neither moves itself spontaneously; neither lets light pass through it; neither melts ice when placed in contact with it; neither can be penetrated by a toothpick; and neither makes an impression when placed on a wet clay surface. Now, ask yourself, could a child throw both? Most likely. They don't have to exhibit these causal incapacities for the same reasons, but because of what they don't do, there are also things that both can likely do or can have done to them.
Does assessing each of these differences involve observation? Are these just ways of assessing similarity? In the trivial example above, notice that each negative attribute could be the result of a distinct individual physical interaction. Each consequence would thus be something that fails to occur in that physical interaction. This means that a machine could be devised in which each of these causal interactions was applied to randomly selected objects. The objects that fail all tests could then get sorted into a container. The highly probable result is that any of these objects could be thrown by a child. No observer is necessary to create this collection of objects of "throwable" type. And having the general property of throwability would only be one of an innumerable number of properties these objects would share in common. All would be determined by what didn't happen in this selection process.
As this example demonstrates, being of a similar general type need not be a property imposed by extrinsic observation, description, or comparison to some ideal model or exemplar. It can simply be the result of what doesn't result from individual physical interactions. And yet what doesn't occur can be highly predictive of what can occur. An observational abstraction isn't necessary to discern that all these objects possess this same property of throwability, because this commonality does not require that these objects have been assessed by any positive attributes. Only what didn't occur. The collection of throwable objects is just everything that is left over. They need have nothing else in common than that they were not eliminated. Their physical differences didn't make a difference in these interactions.
Had we started this thought experiment with a large collection of randomly chosen objects and subjected them to this series of interactions, the resulting objects would have been a small subset of the original collection, and the differences in properties among the objects in that subset would have been only a fraction of the differences exhibited by the initial collection. To say this in other terms: the variety of properties exhibited was significantly constrained in the resultant set of objects, compared to this initial variety. This concept of constraint can provide a negative approach to realism,4 though it might also be paradoxically described as a nominalism of absences, since it is determined by discrete interaction effects that don't occur.
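The sorting machine described above can be sketched directly as a filter over interactions in which nothing happens. The objects, properties, and thresholds in the following Python sketch are hypothetical stand-ins chosen only for illustration; the point is that the surviving subset is defined entirely by non-occurrences, and exhibits reduced variety, without any positive model of what its members share.

```python
import random

random.seed(1)

# Hypothetical objects, each described only by a few measurable properties.
def random_object():
    return {
        "diameter_cm": random.uniform(0.5, 50.0),
        "density_g_cm3": random.uniform(0.05, 10.0),
        "transparency": random.uniform(0.0, 1.0),
    }

# Each check records an interaction in which nothing happens: the object does
# NOT pass through the hole, does NOT float, does NOT transmit light.
def nothing_happens(obj):
    return (obj["diameter_cm"] >= 4.0        # doesn't fit through a doughnut hole
            and obj["density_g_cm3"] >= 1.0  # doesn't float on water
            and obj["transparency"] < 0.1)   # doesn't let light pass through it

population = [random_object() for _ in range(10_000)]
survivors = [o for o in population if nothing_happens(o)]

def spread(objs, key):
    values = [o[key] for o in objs]
    return max(values) - min(values)

# The surviving subset exhibits less variety in every tested property,
# even though no positive criterion of membership was ever applied.
for key in ("diameter_cm", "density_g_cm3", "transparency"):
    print(key, round(spread(population, key), 2), "->", round(spread(survivors, key), 2))
```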
Along these lines, the introductory quote from W. Ross Ashby-a pioneering figure in the history of cybernetics-offers a negative way to define concepts like order, organization, and habit. The concept of constraint is, in effect, a complementary concept to order, habit, and organization, because it determines a similarity class by exclusion. Paying attention to the critical role played by constraints in the determination of causal processes offers us a figure/background reversal that will turn out to be critical to addressing some of the more problematic issues standing in the way of developing a scientific theory of emergence. In this way, we avoid assuming that abstract properties have physical potency, and yet do not altogether abandon the notion that certain general properties can produce other general properties as causal consequences. This is because the concept of constraint does not treat organization as though it is something added to a process or to an ensemble of elements. It is not something over and above these constituents and their relationships to one another. And yet it neither demotes organization to mere descriptive status nor does it confuse organization with the specifics of the components and their particular singular relationships to one another. Constraints are what is not there but could have been, irrespective of whether this is registered by any act of observation.
In statistical mechanics, constraint is a technical term used to describe some reduction of degrees of freedom of change or a restriction on the variation of properties that is less than what is possible. In colloquial terms, constraint tends to be understood as an external limitation, reflecting some extrinsically imposed factor that reduces possibilities or options. So, for example, railcars are constrained in their movement by the location of tracks, farmers are constrained in what can be grown by the local climate, and citizens are constrained in their behaviors by laws. In each case, there are reduced degrees of freedom that could potentially be realized in the absence of these constraints.
In a 1968 paper entitled "Life's Irreducible Structure," the philosopher-scientist Michael Polanyi argued that the crucial difference between life and chemistry is the constraint that DNA imposes on the range of chemical reactions that tend to occur in an organism. Genetic information introduces these constraints by virtue of the ways that it generates molecules which either act as catalysts to facilitate certain chemical reactions or serve as structural elements which minimize certain other molecular interactions. Polanyi argues that this formal influence is what separates life from non-living chemistry, and distinguishes both man-made machines and living organisms from other physical systems. For this reason, he argues that this formal aspect cannot be reduced to chemistry. This is an important insight, but it falls short of resolving the issue of life's ententional features in one important respect: it treats the source of constraint as extrinsic. Indeed, this is critical to his argument for irreducibility; but as the analogy to machines indicates, treating this influence as external to the chemistry passes the explanatory buck to DNA as an instruction manual and ultimately to natural selection as its author. If this constraint is the defining feature of life, but must be created and imposed extrinsically, then life's ententional features are inherited from an extrinsic homunculus, which in this case doesn't exist.
Constraints can also be intrinsic. The etymology of the term-which derives from the Latin, and roughly means "that which binds together"-does not imply anything about the cause of that restriction or reduction of variety; for example, whether it is a property that is intrinsic or extrinsic to whatever it is that is thereby constrained. The term constraint thus denotes the property of being restricted or being less variable than possible, all other things being equal, and irrespective of why it is restricted. This latter usage has become common in statistical and engineering applications, such as when describing the range of values for some measured feature of a collection of objects, a set of observations, or a sample of informant answers. Used in an intrinsic sense, we might say that the errors in a set of measurements fall within certain constraints or tolerances. Or we might comment that the limitations of a given measuring technique impose constraints on the accuracy of the results. In this sense, constraint is a property of a collection or ensemble of some sort, but a negative property. It is a way of referring to what is not exhibited, but could have been, at least under some circumstances. As we will see, distinguishing between an imposed restriction and a restriction that develops because of intrinsic properties and processes is critically important to the project of making sense of ententional properties, and avoiding any implicit realist assumptions that compromise the concepts of order and organization.
As critics of emergence have long argued, we tend to categorize different forms of pattern and organization in terms of possessing abstract formal properties. Since such properties are not intrinsic to the collection of interacting components involved but are instead based on comparisons to some abstract ideal forms, they are indeed causally epiphenomenal. We can now counter that general formal properties do exist independent of external observed comparison, but they are not positive attributes in this traditional sense. They are, so to speak, correlates of the physical simplification that follows from constraint. Thinking in terms of constraint in this way changes how we must understand the concepts of order, organization, and pattern-and thus emergence.
Organization is a general property that can be more or less specifically described, but it is typically defined with respect to some model. When we describe the organization of stars in a galaxy, it is with respect to being globular or spiral in shape, and when we characterize a human organization, such as the United Nations, it is with respect to a charter and purpose. Similarly, describing something as exhibiting order typically appeals to some abstract ideal, such as arranging a deck of playing cards according to numerical values and suits. Even just saying that events follow a pattern implies that we observe regularly repeated similar attributes. Modern science and philosophy have long rejected the Platonic notion that the objects and events in the world are merely flawed instantiations of ideal forms, in favor of the recognition that form and organization refer to idealizations abstracted from observation. Thus it is accurate to assume that scientific descriptions framed in terms of order and form are to be understood heuristically. This is even often applied to that ubiquitous asymmetry of change, the second law of thermodynamics, which characterizes the predictable pattern of change toward increasing disorder. When we describe an organized room as "ordered" and a messy room as "disordered," we are in effect privileging one arrangement of objects over another, and characterizing the tendency to spontaneously fall into messiness as a loss of something. Statistical mechanics simply recognizes that there are a great many more arrangements considered messy, according to human preferences, than there are orderly ones, and that random change tends to inevitably produce the more probable results. So describing a high-entropy state as disordered implicitly suggests that we cannot recognize any pattern.
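A small worked count (an illustration of the statistical point, not an example from the text) shows how lopsided the ratio of "messy" to "ordered" arrangements already is for a modest collection, which is why undirected change almost always lands on the more probable, less ordered result.

```python
from math import factorial

# Ten distinct books on a shelf: only one arrangement is alphabetical, but any
# permutation is an equally possible outcome of random shuffling.
n = 10
total_arrangements = factorial(n)
ordered_arrangements = 1

print(total_arrangements)                            # 3628800
print(ordered_arrangements / total_arrangements)     # ~2.8e-07

# Even a loose standard of order is rare: count the shelf as "roughly ordered"
# if the first three positions hold the three books that belong first, in any
# order among themselves.
roughly_ordered = factorial(3) * factorial(n - 3)    # 6 * 5040 = 30240
print(roughly_ordered / total_arrangements)          # ~0.0083
```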
By abandoning descriptive notions of form, however, and thinking of regularity and organization in terms of possible features being excluded-real possibilities not actualized-two things are gained. First, individuation: possible features not expressed and degrees of freedom not exhibited are specific individual facts. They are precisely measurable in space and time, in just the same way as are specific trajectories and velocities of moving atoms. Second, extension: constraints can have definite extension across both space and time, and across whole ensembles of elements and interactions. But although the specific absences that constitute a constraint do not suffer the epiphenomenality of descriptive notions of organization, they are nevertheless explicitly not anything that is present. This requires that we show how what is absent is responsible for the causal power of organization and the asymmetric dynamics of a physical or living process. Beginning to explain how this might be possible is the major task of this chapter. But a full explanation (undertaken in subsequent chapters) will require that we eventually reconceptualize even something as basic as physical work, not to mention concepts like self-organization and evolution-concepts that are typically defined in terms of explicit model relationships.
There is one other advantage of this approach. Employing the concept of constraint instead of the concept of organization (as Ashby proposes in the epigraph) not only avoids observer-dependent criteria for distinguishing patterns, it also undermines value-laden notions of what is orderly and not. As in the case of the messiness of a room, order is commonly defined relative to the expectations and aesthetics of an observer. In contrast, constraint can be objectively and unambiguously assessed. That said, order and constraint are intrinsically related concepts. Irrespective of specific observer preferences, something will tend to be assessed as being more orderly if it reflects more constraint. We tend to describe things as more ordered if they are more predictable, more symmetric, more correlated, and thus more redundant in some features. To the extent that constraint is reduced variety, there will be more redundancy in attributes. This is the case with respect to any change: when some process is more constrained in some finite variety of values of its parameters or in the number of dimensions in which it can vary, its configurations, states, and paths of change will more often be "near" previous ones in the space of possibilities, even if there is never exact repetition.
The advantage of this negative way of assessing order is that it does not imply any model-based conception of order, regularity, or predictability. It is only something less than varying without limit. But this assessment encompasses a wide range of phenomena between those that we normally describe as orderly and those that we describe as entirely chaotic. As we saw earlier, chaos theory provides an important context for demonstrating the usefulness of this figure/background shift in the analysis of order and organization. Calling a pattern "chaotic" in this theoretical context does not necessarily mean that there is a complete absence of predictability, but rather that there is very little redundancy with which to simplify the description.
This way of characterizing disorder is exemplified by an information-theoretic means of measuring complexity termed Kolmogorov complexity, after the theoretician who first promoted its use, the Russian mathematician Andrey Nikolaevich Kolmogorov (1903–1987). It can most easily be understood in terms of a method for analyzing or generating a string of numbers. If the same string can be generated by an algorithm that is shorter than that string, it is said to be compressible to that extent. Such an algorithm effectively captures a form of redundancy that is not superficially exemplified in its product. So, for example, even an apparently non-repeating sequence, such as the decimal digits of π, can be generated by an algorithm that progressively calculates an approximation of the relationship between the diameter and circumference of a circle. This is how a short and simple computer algorithm can generate the numbers of this infinite sequence to any length desired. Similarly, a superficially simple non-linear equation5 may produce an irregular non-repeating trajectory when plotted on a graph. This is the case with the Lorenz attractor graph, discussed in the last chapter, and yet iterating successive values of the equation shows that, despite never exactly repeating, the values are considerably less variable than a fully chaotic sequence. But if a sequence of values (such as in a string of numbers or coordinates of a trajectory) cannot be generated by an algorithm that is shorter in length than the sequence itself, that sequence is described as maximally complex. Since there is no algorithm able to extract some regularity from such a sequence, it is in this sense also maximally disordered.
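As a rough illustration of compressibility, the following Python sketch uses the logistic map, a standard one-line non-linear iteration (not necessarily the equation the footnote refers to), to show how a program far shorter than its output can generate an apparently irregular, non-repeating sequence; the output looks disordered, yet its Kolmogorov complexity is bounded by the few lines that produce it.

```python
# A few lines of algorithm generate an arbitrarily long, apparently irregular
# sequence: the logistic map x -> r * x * (1 - x) in its chaotic regime.
def logistic_sequence(length, r=3.9, x=0.5):
    values = []
    for _ in range(length):
        x = r * x * (1.0 - x)
        values.append(round(x, 6))
    return values

seq = logistic_sequence(1000)

# The output never settles into exact repetition...
print(len(set(seq)), "distinct values out of", len(seq))

# ...yet the entire sequence is compressible to the short rule above plus two
# starting parameters, so it is far from maximally complex in Kolmogorov's sense.
```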
Calling this theoretical field "chaos theory" was in this sense somewhat ironic, since the focus is not on maximal disorder, but on what order could be discovered amidst the apparent disorder. As a result, the field is more commonly and more appropriately described as "complexity theory." But note that the term complexity is given contrasting connotations here. The word "complex" etymologically derives from roots that together mean something like woven, braided, or plaited together, and not merely involving many details. So what may appear chaotic may be merely the result of a simple operation that tends to entwine with itself to the point that any regularity is obscured. Alternatively, being impossible to simplify means that there are only details to contend with, nothing simpler. Irrespective of whether all real phenomena are maximally algorithmically complex-as extreme nominalism would suggest-or are algorithmically compressible without residue-as extreme realism would suggest-a constraint view of orderliness still applies. It is tantamount to allowing sloppy compression. For example, if all we require of the prediction algorithm is that it specifies a numerical sequence in which each number is within 2 of the predicted value for some specified finite sequence length, then many more sequences will be approximately describable by the same algorithm. In some sense, this is analogous to the way we psychologically identify similarity, and thus characterize order.
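The idea of sloppy compression can be made concrete with a toy tolerance check (again an invented example, not the author's): a single short generating rule counts as an adequate description of every sequence whose values stay within 2 of its predictions, so one algorithm stands in for a large family of roughly similar sequences.

```python
import random

random.seed(7)

def predicted(i):
    # A short generating rule: a simple repeating ramp from 0 to 9.
    return i % 10

def approximately_described(sequence, tolerance=2):
    # The rule "describes" a sequence if every value lies within the tolerance.
    return all(abs(value - predicted(i)) <= tolerance
               for i, value in enumerate(sequence))

# Many distinct sequences, each perturbed differently, are all captured by the
# same short rule once deviations of 2 or less are ignored.
noisy_sequences = [[predicted(i) + random.randint(-2, 2) for i in range(50)]
                   for _ in range(5)]
print([approximately_described(s) for s in noisy_sequences])  # all True
```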
Thinking of Peirce's notion of habit as the expression of constraint is implicit in his doctrine of tychism, which he considers to be the most primitive feature of his ontology. This is critical to his conception of "law," which, although it can be framed in Platonic terms, was for Peirce merely a habit of nature that "evolved" in some sense. From a Peircean point of view, a natural law should be described as an invariant tendency that requires us to explain why it is not tychistic. By translating Peirce's notion of habit into the terms of constraint, his unique reframing of metaphysical realism can be made more precise. Recasting the Realism/Nominalism debate in terms of dynamics and constraints eliminates the need to refer to both abstract generals, like organization, and simple particular objects or events lacking in organization. Both are simplifications due to our representation of things, not things in themselves. What exist are processes of change, constraints exhibited by those processes, and the statistical smoothing and the attractors (dynamical regularities that form due to self-organizing processes) that embody the options left by these constraints.
CONCRETE ABSTRACTION.
To return to the conundrum that began this exploration: Are constraints in the head or also in the world? Do they exist irrespective of observation, and if so, how do they make a difference in the world? To answer this, we need to know if causal interactions in the physical world also, in effect, tend to "ignore" certain details during physical interactions. If so, this negative characterization of regularity and habit would be both a more accurate representation of the underlying physics and ultimately the relevant causal factor. Recall that the nominalist critique of Platonic realism argues that the regularities and similarities that we discern in the world are merely descriptive simplifications (abstractions away from the details) and thus causally irrelevant. However, if certain physical interactions also tend to drop out selective details, then there can also be a purely physical analogue to the abstraction process. Constraint is precisely this: the elimination of certain features that could have been present.
But can the concept of constraint account for something like concrete abstraction: a physical analogue to mental abstraction? One important clue is that what something doesn't exhibit, it can't impose on something else via interaction. In other words, where there is no difference, it can cause no difference. To the extent that differences in one system can produce differences in the other, those variations not expressed in one will not be transferred to the other during their interaction. Although this may initially sound like there is no causality involved, it's not that simple. As we will see later (in chapter 11), this is because the presence of constraint (the absence of certain potential states) is a critical factor in the capacity to perform work. Thus, it is only because of a restriction or constraint imposed on the release of energy (e.g., the one-directional expansion of an exploding gas in a cylinder) that a change of state can be imposed by one system on another. It is precisely by virtue of what is not enabled, but could otherwise have occurred, that a change can be forced.
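As a toy illustration (entirely my own, with made-up functions), consider an interaction in which one system can only "see" a coarse-grained feature of another. Micro-differences that the constraint suppresses simply cannot be transferred, no matter how the interaction unfolds.

    def macro_state(micro_configuration):
        # Coarse-graining: only the sum of the components is expressed externally.
        return sum(micro_configuration)

    def interact(a_micro, b_value):
        # B's update depends only on A's expressed (macro) feature,
        # not on how A happens to be arranged internally.
        return b_value + macro_state(a_micro)

    a_first = [1, 2, 3, 4]    # two micro-configurations of A that differ in detail...
    a_second = [4, 3, 2, 1]   # ...but not in the feature the interaction can express

    print(interact(a_first, 10))    # 20
    print(interact(a_second, 10))   # 20: the suppressed difference makes no difference in B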
So the nature of the constraint (and therefore the absent options) indirectly determines which differences can and cannot make a difference in any interaction. This has two complementary consequences. Whenever existing variations are suppressed or otherwise prevented from making a difference in any interaction, they cannot be a source of causal influence; but whenever new constraints are generated, a specific capacity to do work is also generated.
In massively componential processes involving linearly interacting components, such as in a simple near-equilibrium thermodynamic system, low-probability fluctuations will tend to be statistically "washed out" with increase in scale. Microscopic fluctuations will have minimal macroscopic influence. But this means that macroscopic interactions between such systems will largely be insensitive to these microscopic fluctuations, and the system's capacity to do work that alters another system will be determined by this constraint. Alternatively, in massively componential dynamical systems where there is the possibility of extensive non-linear interactions, such as in systems that are persistently maintained far from equilibrium, constraints can sometimes amplify to become macroscopically expressed. In these cases, the emergence of these new constraints at the macroscopic level can be a source for new forms of work. We will focus on the first of these dynamical conditions (near-equilibrium thermodynamic processes) in the remainder of this and the next chapter, and will take up the second of these conditions (persistent far-from-equilibrium processes) in chapter 8.
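The statistical claim can be sketched in a few lines, under the arbitrary assumption that the macroscopic observable of a near-equilibrium system is just the mean of N independent Gaussian micro fluctuations: its spread shrinks roughly as 1/sqrt(N), so macro-level interactions become progressively insensitive to micro detail.

    import random
    import statistics

    def macro_observable(n_components):
        # Macroscopic value modeled as the mean of n independent micro fluctuations.
        return statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n_components))

    for n in (10, 1_000, 100_000):
        samples = [macro_observable(n) for _ in range(100)]
        print(f"N = {n:>7,d}   spread of macro value ~ {statistics.stdev(samples):.4f}")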
Ascent in scale plays a critical role, because it tends to reduce the contributions of micro fluctuations at the macroscopic level by virtue of the way that the distribution of fluctuations tends to even out. Time-scale differences also play a role. Component micro interactions tend to occur at rates that are many orders of magnitude more rapid than interactions between systems. Perturbed micro components will return to stable near-equilibrium conditions at rates that are orders of magnitude more rapid than the time involved in macro interactions. As a result, the highest-probability micro conditions will tend to correlate with the most redundant micro-interaction effects. To say this in different terms: the detailed micro fluctuations of the processes underlying higher-order system properties mostly cancel each other out over modest distances, so that only the most generalized attractor features end up having causal influence at the higher level. It is as though the details of what is happening at the lower level are filtered out, only allowing the most redundant features to be manifest at the higher level.
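The point about time scales can be sketched the same way (again with arbitrary parameters): a fast, mean-reverting micro process relaxes toward its attractor far more quickly than a slow macro "interaction" samples it, so what survives at the macro level is essentially the attractor value, with the fast fluctuations filtered out.

    import random
    import statistics

    def fast_micro_process(steps, attractor=5.0, relaxation=0.5, noise=1.0):
        # Mean-reverting micro dynamics: each step, perturbations decay toward
        # the attractor while new random jostling is added.
        x, trajectory = attractor, []
        for _ in range(steps):
            x += relaxation * (attractor - x) + random.gauss(0.0, noise)
            trajectory.append(x)
        return trajectory

    micro = fast_micro_process(steps=100_000)

    # A macro interaction is so slow that it effectively averages over all these steps.
    print(f"a few instantaneous micro values: {micro[0]:.2f}, {micro[1]:.2f}, {micro[2]:.2f}")
    print(f"what the slow macro level 'sees': {statistics.fmean(micro):.3f}  (the attractor is 5.0)")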
This is a property related to Humphreys' concept of fusion, a loss of properties with ascent of scale, but is much more general, and less exotic. A decrease in the diversity of variations able to exert influence on systemic variation with ascent in scale means that some lower-level details are lost. There is nothing surprising about this. Consider again one of the first attempts to describe emergence: Mill's account of the chemical properties distinguishing chemical compounds from those of their component elements (his example of sodium, chlorine, and their ionically bonded compound, table salt). In the compound, the properties of the individual atoms are masked as each is in some sense modified by the bond with the other (in simplified terms, due to transfer of an electron). As a result, elemental properties are effectively suppressed (reduced to extremely low probability), whereas the properties associated with their stable (i.e., highly constrained) interactions are vastly more likely to be expressed in any interactions with other molecules.
This feature of physical hierarchies was highlighted in a 1972 Science review article by the Nobel Prize-winning physicist Philip Anderson, appropriately titled "More Is Different," and has been reiterated in various forms by a number of theorists since. The basic argument is that material properties (especially those exhibited in solid-state forms of matter) inevitably exhibit discretely segregated levels of organization, with vast differences in mean levels of energy of interaction and intervals between interaction effects, correlated with vast differences in spatial dimensions. This is particularly striking for solid states of matter, because the atomic-scale interactions have become highly localized in their dynamics compared to atomic-scale interactions in other phases (e.g., liquid or gas). The bonds between atoms are, however, in constant flux, often flip-flopping between alternative orientations in proportion to their relative probabilities, but at rates vastly more rapid than these effects could possibly propagate to neighboring regions of the solid. This allows statistical scale effects to produce a kind of mutual insulation of the dynamical interaction possibilities between adjacent levels. For example, the specific heat of a solid is a reflection of the level of constant jostling of component molecules; but these constant micro fluctuations cycle through millions of states in the time that a solid-to-solid or solid-to-liquid interaction takes place, and so are almost entirely causally irrelevant. So, although we tend to think of solids as made up of unchanging components and liquids as uniform continua, this is an illusion of statistical smoothing.
Following in this tradition, a focus on what gets lost with ascent in scale in physical systems has become a key notion in the Nobel Prize-winning physicist Robert Laughlin's conception of emergence. We can employ a rough caricature of Laughlin's concept of protected states to describe this sort of insulation between levels of physical dynamics.6 Thus the reason that the atoms or molecules in a solid, liquid, or gas can largely be treated as simple billiard ball-like objects is that each tends to be in a dynamically regular stable ground state (i.e., constrained to an attractor) with a very high probability. And it will immediately return to this state shortly af